PenTest OPEN eBook


Contents

Chapter 1: Auditing Cisco Routers and Switches
  Introduction
  Functions of a Router, Architectures and Components
  Modes of Operation
  Configuration Files and States
  How a Router Can Play a Role in your Security Infrastructure
  Router Technology, a TCP/IP Perspective
  Understanding the Auditing Issues with Routers
  Password Management
  Sample Router Architectures in Corporate WANs
  Router Audit Tool (RAT) and Nipper
    RAT
    Nipper
  Security Access Controls Performed by a Router
  Security of the Router Itself and Auditing for Router Integrity
  Identifying Security Vulnerabilities
  Audit Steps over Routers
    Show access-lists
    Sample Commands
  Cisco router check lists

Chapter 2: An Introduction to Network Audit
  Introduction
  What is a Vulnerability Assessment?
  The importance of Vulnerability Assessments
  A Survey of Vulnerability Assessment Tools
  Network Mapping
  Pre-Mapping Tasks
  What the Hackers Want to Know
  Auditing Perimeter Defenses
  Auditing Routers, Switches and other network infrastructure
  The Methodology
  Protection Testing?
  Miscellaneous Tests
  Network and Vulnerability Scanning
    Nessus
    Essential Net Tools (EST)
    CIS – Cerberus Internet Scanner
  Summary

Chapter 3: GPEN Study Guide
  Outcome Statement
  Key Questions
  Section 1: Pen-testing process
    Outcome Statement
    Key Questions
  Section 2: Legal Issues
    Outcome Statement
    Key Questions
  Section 3: Reconnaissance
    Outcome Statement
    Introduction
    Inventory
    Whois
    Web Reconnaissance
    Metadata
    DNS
    Key Questions
    Exercises
  Section 4: Intro to Linux
    Outcome Statement
    Shell History
    Basic UNIX commands
    The Essential Commands
    File Commands
    Finding out about other users on the system
    Authentication and Validation
    Usernames, UIDs, the Superuser
    File System Access Control
  Section 5: Scanning Goals and Techniques
    Outcome Statement
    Introduction
    Key Questions
  Section 6: Network Tracing, Scanning, and Nmap
    Outcome Statement
    Network Tracing using Traceroute
    Traceroute
    Port Scanning Fundamentals
    Port Scanning with NMAP
    Amap Scanner
    Key Questions
  Section 7: Vulnerability Scanning
    Outcome Statement
    Introduction
  Section 8: Enumerating Users
    Outcome Statement
    Methods of Acquisition
    Unix/Linux Accounts
    Windows Accounts
    Key Questions
  Section 9: Netcat and Hping
    Outcome Statement
    Netcat
    Hping
    Key Questions
  Section 10: Exploitation
    Outcome Statement
    Why Exploitation?
    Exploitation Categories
    Exploitation
    Metasploit
  Section 11: Command Shell vs. Terminal Access
    Outcome Statement
    Command Shell vs. Terminal Access
    Windows Targets
    Linux Targets
    Relays
    Key Questions
    Exercises
  Section 12: Remote command execution
    Outcome Statement
  Section 13: Password Attacks
    Outcome Statement
    Password Attacks: Motivation and Definitions
    Password Attack Tips
    Dealing with Account Lockout
    Password Guessing with THC-Hydra
    Password Attacks
    Obtaining Password Hashes – Windows
    Linux and Unix Password Schemes
    Key Questions
    John the Ripper
    Cain
    Rainbow Table Attacks
    Ophcrack Exercise
    Pass-the-Hash Attacks
    When to use which password attack?
  Section 14: Wireless Fundamentals
    Outcome Statement
    Exercises
    Cloaked ESSIDs
    Locating Access Points
    Wireless Client Attacks
    Traffic Injection
    Airpwn
    Session Hijacking
    Access Point Impersonation
    Karma
    Karma Metasploit Integration
    Key Questions
    Exercises
  Section 15: Web Application Overview
    Injection Attacks
    Cross Site Request Forgery (XSRF) Attacks
    Cross Site Scripting Attacks
    Command Injection
    SQL Injection
    Blind SQL Injection
    Key Questions

Chapter 4: 100+ Unix Commands
  Abstract
  Introduction and objectives
  Basic UNIX commands
  The Essential Commands
  Authentication and Validation
  File System Access Control
  Restricting Superuser Access
  Finer Points of Find
  Finding out About the System Configuration
  What Tools to Use
  Password Assessment Tools
  Controlling Services
  Enabling .rhosts
  Kernel Tuning for Security
  Security and the cron System
  Backups and Archives
  Logging
  Tricks and Techniques
  Appendixes
  “uname”
  Command Summary
  About the Author


As a result of a fruitful collaboration between Dr. Wright and PenTest Magazine, eForensics Magazine and Hakin9 Magazine, we are proud to present this publication: a comprehensive compendium of knowledge on four general subjects:

• Network Audit
• Unix Testing
• Auditing Cisco Routers & Switches
• Pentesting

The author of this publication, Dr. Craig Wright, is a multi-talented man who has reached heights both academically and professionally. He runs a number of training courses and academic lectures, and contributes substantive articles to papers and magazines. Dr. Wright is a lecturer and researcher at Charles Sturt University (IT Security, Digital Forensics) and Executive Vice-President (Strategy) of CSCSS (Centre for Strategic Cyberspace + Security Science), where he focuses on collaborating with government bodies in securing cyber systems. With over 20 years of IT-related experience, he is a sought-after public speaker both locally and internationally, training Australian and international government departments in Cyber Warfare and Cyber Defence while also presenting his latest research findings at academic conferences. In addition to his security engagements, Craig continues to contribute IT security-related articles and books. Dr. Wright holds the following industry certifications: GSE, CISSP, CISA, CISM, CCE, GCFA, GLEG, GREM and GSPA. He has numerous degrees in various fields, including a Master's degree in Statistics and a Master's degree in Law specializing in International Commercial Law. Craig is working on his second doctorate, a PhD on the Quantification of Information Systems Risk, and plans to begin a further PhD in Juridical Studies in the second half of the year.

Across the 250 pages of this publication you will find a cross-section of knowledge on Network Audit, Auditing Cisco Routers and Switches, Unix Testing, and a penetration testing study guide. This material will serve as a comprehensive resource for independent work and study at home, as well as a source of information and innovative solutions.

I wish you, Dear Readers, an interesting read!

Sincerely,
Katarzyna Zwierowicz


Chapter 3: GPEN Study Guide

• Pen-testing Foundations
• Pen-testing process
• Legal Issues
• Reconnaissance
• Intro to Linux
• Scanning Goals and Techniques
• Network Tracing, Scanning, and Nmap
• Vulnerability Scanning
• Enumerating Users
• Netcat and Hping
• Exploitation
• Command Shell vs. Terminal Access
• Remote command execution
• Password Attacks
• Wireless Fundamentals
• Web Application Overview


Outcome Statement

In this section I will demonstrate an understanding of the fundamental concepts associated with pen-testing. Core Topics:

• Terminology
• Purpose of Pen Testing
• Types of Tests and Limitations
• Methodologies

Terminology

Before delving into the world of penetration testing, it is important to make sure that everyone is “speaking the same language”. To that end, we are going to start by identifying and defining some key terms that will be used frequently in this study guide.

Threat: A threat is an actor or agent that may want to or actually can cause harm to the target organization. Threats include organized crime, spyware companies, and disgruntled internal employees who start attacking their employer.

Vulnerability: A vulnerability is some flaw in our environment that an attacker could use to cause damage. Vulnerabilities could exist in numerous arenas in our environments, including our architectural design, business processes, deployed software, and system configurations.

Risk: Risk is where threat and vulnerability overlap. That is, we have a risk when our systems have a vulnerability that a given threat can attack.

Exploit: An exploit is the vehicle by which the attacker uses a vulnerability to cause damage to the target system. The exploit could be a package of code which generates packets that overflow a buffer in software running on the target. Alternatively, the exploit could be a social engineering scheme whereby the bad guy talks a user into revealing sensitive information, such as a password, over the phone.

Ethical Hacking: Ethical hacking is the process of using computer attack techniques to find security flaws with the permission of the target owner and the goal of improving the target’s security.

Penetration Testing: Penetration testing is a more narrowly focused phrase than ethical hacking, dealing with the process of finding flaws in a target environment with the goal of penetrating systems and taking control of them.

Vulnerability Assessments: Vulnerability assessments are focused on finding vulnerabilities in a system or network, often without regard to actually exploiting them and getting in.

The Purpose of Pen-Testing

The purpose of pen-testing is simple: to find security flaws before the bad guys do. After applying their security policies, procedures, and technology, organizations can use thorough penetration tests to see how effective their security really is in light of an actual attack, albeit by friendly attackers. An added benefit of ethical hacking and penetration testing is that, because they show real vulnerabilities and indicate what a malicious attacker might be capable of achieving, they can get management’s attention. Decision makers, when presented with the carefully formulated results of a test in business terms, are more likely to provide resources and attention to improve the security stance of an organization.

A major goal of penetration testing and ethical hacking is discovering flaws so that they can be remediated (by applying patches, reconfiguring systems, altering the architecture, changing processes, etc.). However, it is important to note that in most tests, not all of the discovered vulnerabilities are actually addressed. A common recommendation is that all high-risk vulnerabilities be addressed in a timely fashion, but the truth is that some vulnerabilities linger long after a test is complete, even high-risk issues. Remember, information security is all about managing risk, not eliminating it.
Types of Tests and Limitations

There are numerous kinds of ethical hacking and penetration tests. They include:


• Network services test: This involves finding target systems on the network, looking for openings in their underlying operating systems and available network services, and then exploiting them remotely. This is generally the most common type of test.
• Client-side test: This kind of test is designed to find vulnerabilities in, and exploit, client-side software such as browsers, media players, or document editing programs.
• Web application test: These tests look for security vulnerabilities in the web-based applications deployed on the target environment.
• Remote dial-up war dial: These tests look for modems in a target environment, and often involve password guessing to log in to systems connected to discovered modems.
• Wireless security test: These tests involve exploring a target’s physical environment to find unauthorized wireless access points, or authorized wireless access points with security weaknesses.
• Social engineering test: This type of test involves attempting to dupe a user into revealing sensitive information such as a password.

Although penetration testing and ethical hacking are useful practices, they do have some limitations. First, testing projects by their very nature have a limited scope. Most organizations don’t (and can’t) test everything, due to resource constraints; we test those elements of our infrastructure that are deemed most vital. A real-world attacker, however, may find flaws in other areas that simply weren’t part of our testing project’s scope. Furthermore, penetration testers and ethical hackers often have constrained access to the target environment that models where some, but not all, of the bad guys sit. Also, because of the risk of crashing a target system during a test, some particular attack vectors will likely be off the table for a professional penetration tester or ethical hacker. Finally, most professional penetration testing is limited by the currently known exploits available publicly. Most professional penetration testers and ethical hackers do not write their own exploits, but instead rely on exploits written by others. Even for those testers who do write exploits, there often isn’t enough time to create a custom exploit for a newly discovered flaw in a given target environment.

Methodologies

Several organizations have released high-quality, free penetration testing and ethical hacking methodology documents. It’s a good idea to review each of these free documents, as they provide useful insights into testing from various perspectives. Four of the best free documents on testing methodologies are:

• Open Source Security Testing Methodology Manual (available at www.isecom.org/osstmm)
• NIST Special Publication 800-115: Technical Guide to Information Security Testing and Assessment (available at csrc.nist.gov/publications/nistpubs/800-115/SP800-115.pdf)
• Open Web Application Security Project (OWASP) Testing Guide (available at www.owasp.org)
• Penetration Testing Framework (available at www.vulnerabilityassessment.co.uk)

Key Questions

1. What is the difference between a risk, a vulnerability, and a threat?
2. What is the difference between a penetration test and a vulnerability assessment?
3. What is the primary purpose of a penetration test?
4. What type of penetration test involves trying to find flaws in web browsers or media players and exploiting them?
5. What are some of the major limitations of penetration testing?


Section 1: Pen-testing process

Outcome Statement

In this section I will demonstrate an understanding of the pen-testing process and the importance of reporting. Core Topics:

• Permission and Liability
• Rules of Engagement
• Scoping
• Reporting

The overall penetration testing process involves three phases: preparation, testing, and conclusion.

Preparation

During the preparation phase, the parties participating in the test may sign a non-disclosure agreement, especially if the test is conducted by a third-party organization. Then, the testers and the target personnel discuss the most significant concerns of the target organization. The parties also agree on rules of engagement that describe how the testing will occur.

Testing

The testing phase is exactly what it sounds like: this is where the testing takes place. There are distinct sub-phases in the testing process that will be covered later in the material.

Conclusion

The conclusion phase, while often overlooked, is extremely critical. This is where you summarize your findings in a report and provide them to the client. It is important to ensure that the data is not only available to the client but usable by them, so they can develop a plan to fix any issues you identified during the pen-test.

Permission and Liability

Before doing anything else you need to get official, written permission to conduct the test, even if it is against targets in your own organization. This permission should notify the personnel associated with the target systems that there is some danger of their systems being crashed or impaired by the testing. One of the best ways to do this is via the use of a permission memo (aka a “Get Out of Jail Free Card”). A sample memo can be found at http://www.counterhack.net/permission_memo.html. Have your legal team review, tweak, and approve this language. Then, print it on corporate letterhead and have a chief information officer or similar level of management sign off on it.

While the permission memo is acceptable for employees testing their employers, it is not, by itself, suitable as a vehicle for a penetration testing company to test its customers’ environments. It can act as a starting point for that more comprehensive document, but such a third-party penetration testing agreement for client networks must also include a limitation of liability agreement and a contract. These items should be drawn up by a lawyer associated with either the penetration testing company or the client. Most penetration testing companies include a limitation of liability agreement that caps the liability for any problems associated with the project at the price of the project itself. To address this issue further, most penetration testing companies also carry liability and errors-and-omissions insurance in addition to the limitation of liability agreement. Again, this is generally only necessary for third-party companies and is generally not necessary for in-house penetration testing.

Rules of Engagement

Rules of Engagement are a set of practices that must be defined before a penetration test or ethical hacking project can begin. Both the people responsible for the target environment and the testing team must agree on these rules. Without proper Rules of Engagement agreed upon in advance, a penetration test or ethical hacking project could go seriously awry, resulting in devastating consequences for the target organization and the testers.
The Get Out of Jail Free Card, limitation of liability agreement, and insurance help protect the testers legally. But these documents must be shored up with a carefully considered rules of engagement memo that is documented in advance.

Some testers define the rules of engagement with a client before they devise a detailed scope of the test. That way, the target organization can have in mind the way the test will be conducted, to help make decisions about what is in and out of scope. Others reverse the flow, defining the scope before agreeing to rules of engagement, so that they know what they will test and can then craft the rules of engagement around the given test targets. Either approach is acceptable: defining the rules of engagement first followed by scoping, or scoping the project first and then defining rules of engagement. The important point is that both issues be covered in advance.

Scoping

The scoping process determines what should be tested and what should not be tested. In addition to determining individual target systems and networks, this scoping process will also look at the types of testing that may or may not be in scope. To start a scoping conversation, ask members of the target organization about their biggest security concerns. Determining the primary concerns up front can help narrow the focus of a test. Discuss threats, risks, and already-known vulnerabilities with the target organization’s representatives. It is vital to focus this conversation to determine exactly what needs to be tested. The last thing a tester wants is a blurry scope that could lead to scope creep. With scope creep, a misunderstanding of what should be tested leads the target organization to add more systems, target networks, target types, and types of testing to the test as it progresses, a dangerous and costly proposition for a tester.

One of the most important elements to include in the project scope is a succinct statement of what is to be tested. Spell out explicitly those domain names, network address ranges, individual hosts, and particular applications that are included in the test. Also, if there are particularly sensitive elements of the organization that should not be tested, explicitly spell out that list of off-limits machines. If any third-party owned or managed systems are included in the scope, make sure to get written permission from these parties before the test begins. The target organization is responsible for getting this permission, and the testers are responsible for making sure the target organization does this.

Beyond what should be tested, the scope should specify the level of testing that should occur. Will the test merely be a network scan for targets and vulnerabilities, or should the testers go further and actually penetrate the target systems, getting access to the targets if possible? If penetration is allowed, should it focus on listening network services, or will client-side software exploitation be allowed as well? Will the test include any application-level or client-side web component testing? What about physical attacks, social engineering or denial of service attacks? Your Rules of Engagement should specify each element on this list that is included in the scope.

Reporting

Testers should always create a final report describing their work and findings. If the testers work for a third-party penetration testing or ethical hacking company, the report is really the only evidence they leave behind of the project they performed. A report is concrete proof that your organization is exercising its due diligence in conducting vulnerability scans on a regular basis.
If there are major security problems in the future (such as a major breach) and your organization is investigated by the government or shareholders, the vulnerability scanning reports will be helpful in showing your past attempts to measure your security stance. Penetration testing, vulnerability assessment, and ethical hacking reports will generally include these elements:

• Executive Summary: This brief up-front matter is meant for executives who may not read the full report, providing them with the most important conclusions from the work. It is probably the most important part of the report.
• Introduction: This component describes the project at a high level, answering the who, where, when, and why aspects of the project.
• Methodology: This part of the report describes the “what” of the project – what did the team do? It covers the process of the penetration test or ethical hacking engagement.
• Findings: This section presents the actual findings in the target environment, listed one by one, with detailed technical descriptions. The findings are sorted so that the most significant risk issues are discussed first.
• Conclusions: This last section summarizes the project results, and is very reminiscent of the Executive Summary.


Key Questions

1. What are the three phases of the penetration testing process?
2. Why is it important to ensure you get official permission in writing before conducting a penetration test?
3. Why might a permission memo not be sufficient for a third-party testing company? What else may be needed?
4. What are rules of engagement as they apply to penetration testing? Why are they important?
5. What are some key differences between the rules of engagement and the scope of a penetration test?
6. What is the purpose of having a scope for a penetration test? What are some consequences of not having one?
7. What is probably the most important part of a penetration testing report?
8. What are the key sections of a penetration testing report?
9. Why is having a thorough report important?

Section 2: Legal Issues

Outcome Statement

In this section I will demonstrate an understanding of the legal issues that surround pen-testing. Core Topics:

• US Laws
• Canadian Laws
• UK Laws
• German Laws
• Australian Laws
• Japanese Laws
• Singapore Laws

Many countries have instituted laws for dealing with crimes committed using a computer, so-called “cybercrime” laws. Not all countries have such laws, and indeed, attackers sometimes move to countries, or operate through countries, that lack such laws or do not enforce them. As penetration testers and ethical hackers, we want to make sure we carefully adhere to the laws of the countries in which we operate. Your Permission Memo (the “Get Out of Jail Free Card”) is very helpful in assuring that you have the permission of the target organization that owns the systems you will test. Still, beyond that memo, we need a feel for various countries’ laws so that we can make sure we follow them. It is important to note that the tester not only has to follow the laws where he or she is located, but also the laws of the country where the target machines are located.

US Laws

Some of the most important US cybercrime laws include:

• Title 18, Section 1030: This is the major law in the US under which most cybercrime cases are prosecuted. Originally known as the Computer Fraud and Abuse Act, this law protects federal interest computer systems from being attacked.


• The Cyber Security Enhancement Act of 2002: This law was designed to modernize cybercrime legislation in the United States, particularly issues associated with terrorism. This law allows for fines or imprisonment for any term of years or for life, or both.
• Title 18, Section 1362: This law addresses malicious injury or destruction of communications equipment, such as radio, telegraph, telephone or cable equipment operated or controlled by the United States or associated with US military or civil defense. Penalties include fines and imprisonment for up to 10 years.
• Title 18, Section 2510: This section and its subsequent sections govern the interception of electronic communication, prohibiting interception of such information without explicit permission. The law includes exemptions for people who operate the network itself, so long as their actions are associated with protecting the network and keeping it running.
• Title 18, Section 2701: This law protects stored electronic information and prohibits access to stored communications, with penalties of fines and imprisonment ranging between one year and ten years. Exceptions are granted for the service provider, as well as the legitimate, intended recipient of the information.

Canadian Laws

Two laws dominate Canadian law from a cybercrime perspective.

CC 184, called Interception of Communications, deals with unauthorized monitoring of private communications. It prohibits the unauthorized interception of such communications regardless of their form. The law carves out a series of exceptions: if the originator or the recipient authorizes the monitoring, it does not violate the law. Penalties for violating this law include up to five years imprisonment.

The second major law is CC 342: Unauthorized Use of Computer. This law criminalizes the use of computers for fraudulent activities, including:

• Obtaining computer service without authorization
• Intercepting any function of a computer system
• Using a computer system with intent to commit an offence defined in other laws
• Using, possessing, or trafficking in passwords used to commit an offence

Penalties for violating this section range up to ten years of imprisonment.

UK Laws

In the United Kingdom, the dominant cybercrime law is the Computer Misuse Act of 1990. This law specifically says that a person is guilty of an offence if:

• he causes a computer to perform any function with intent to secure access to any program or data held in any computer;
• the access he intends to secure is unauthorized; and
• he knows at the time when he causes the computer to perform the function that that is the case.

These conditions apply regardless of the particular program or data the perpetrator accesses.

German Laws

German cybercrime laws are contained in the penal code, specifically in Sections 202 and 303. Section 202a is associated with Data Espionage, and states: “Whoever, without authorization, obtains data for himself or another, which was not intended for him and was specially protected against unauthorized access, shall be punished with imprisonment for not more than three years or a fine.”

Section 202c, passed in 2007, is very controversial, sometimes referred to by its detractors as the German “Anti-Hacking” law. It defines as a criminal offense the creation and distribution of tools used for compromising computers.


Many people have interpreted this law to mean that security researchers cannot create or distribute scanning tools, sniffers, and exploitation software in Germany.

Section 303a specifically prohibits the unlawful deletion or suppression of data, as well as acts that render it unusable or altered, thus covering both integrity attacks and denial of service attacks. Violations of this law are punishable with up to two years imprisonment or a fine.

Section 303b covers interference with data processing equipment that has substantial business significance to other people. Destruction of or damage to such systems is prohibited, with violations punishable with up to five years imprisonment or a fine.

Australian Laws

A dominant law regarding cybercrime in Australia is the Cybercrime Act 2001. This law prohibits unauthorized access to or modification of data, computer systems, and electronic communications, and reserves its harshest penalties for impairment of any of these items. To be guilty of an offence, someone would have to cause unauthorized access to, modification of, or impairment to restricted data, with intent and knowledge that their actions are unauthorized. Furthermore, to classify as an offence, the data associated with the case must be either stored on Commonwealth computers, held on behalf of the Commonwealth, or associated with a telecommunications service. In other words, the law protects government computers and data, as well as systems accessed via public communications networks.

Japanese Laws

Japan’s cybercrime laws center on Law Number 128 of 1999. This law prohibits acts of unauthorized access to computer systems. Specifically, Article 3 deals with unauthorized access to computers in the following categories:

• Entering another person’s access code into a computer via telecommunications lines
• Entering information other than an access code that evades access control mechanisms, again via telecommunications lines
• Entering information that evades other restrictions on computers via telecommunications lines

Penalties, defined in Article 8, include fines of up to 300,000 to 500,000 Yen, depending on the offence, and imprisonment of up to one year.

Singapore Laws

The primary cybercrime law in Singapore is known as Chapter 50A, the Computer Misuse Act. This Singapore law specifically cites the UK Computer Misuse Act and the Canadian Criminal Law Amendment Act 1985 as references, showing its influences. It carves out a series of offences that are closely aligned with various information security models describing the aspects of secure systems, including access control, integrity, confidentiality, availability, and authentication.

Key Questions

1. Why is it important to be familiar with the cybercrime laws of any country you or your company is in, or that the network traffic passes through, when conducting a penetration test?
2. What are some of the major US cybercrime laws? Which one is considered the major one?
3. What are some of the major Canadian cybercrime laws?
4. What are some of the major UK cybercrime laws?
5. What are some of the major German cybercrime laws?
6. What are some of the major Australian cybercrime laws?
7. What is the major cybercrime law for Japan and what does it specify?
8. What is the major cybercrime law for Singapore and what does it specify?


Section 3: Reconnaissance

Outcome Statement

In this section I will demonstrate an understanding of the basic concepts of reconnaissance and how to obtain basic information during this phase. Core Topics:

• Inventory
• Whois
• Web Reconnaissance
• Metadata
• DNS
• Search Engines

Introduction

After the test has been thoroughly scoped and any required agreements are signed, the test begins with the reconnaissance phase. In this phase, the tester gathers information about the target organization from various public sources. The tester needs to become very familiar with the target’s people and culture, learning the specific business terminology used by people in the target organization. This recon phase is extremely important in conducting thorough penetration tests. Don’t dismiss it because it doesn’t get deep into technology. The information gathered during the reconnaissance phase will be helpful throughout all of the other testing phases, and will be instrumental in the development of the final report.

Inventory

Throughout the test it is vital that you record your results in an organized fashion. Disorganized penetration testers and ethical hackers are often far less successful. The last thing you want is to miss a vital vulnerability in a target organization because it was lost in disorganized clutter. One of the most helpful tools for recording results is an inventory of all discovered targets and their associated details. A convenient way to store this inventory is in a spreadsheet. Each discovered target system gets one line in the inventory, with the details populated as they are discovered throughout the remainder of the test. The inventory includes numerous fields, such as the target’s IP address, name, operating system type, and so on. Some of the most important fields to include are How Discovered, known vulnerabilities, and the accounts and passwords that are determined. Note that you might not populate every field for every discovered target. Instead of leaving a field blank, it is a good idea to enter “Not Found” or “Not Applicable”, to show that the given field was not overlooked.
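A minimal sketch of such an inventory as a CSV file that any spreadsheet tool can open. The field names and the sample host are illustrative assumptions, not a prescribed format:

$ cat > inventory.csv << 'EOF'
IP Address,Hostname,OS,How Discovered,Open Ports,Known Vulnerabilities,Accounts/Passwords
10.0.0.5,Not Found,Not Found,whois lookup,Not Found,Not Found,Not Found
EOF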

Whois

To determine more detailed information about a given target domain, we can look it up using the various Whois databases distributed around the world. When a domain is registered, the registrar gathers a significant amount of information about the Internet gateway and the people associated with the domain. Most registrars put this information in publicly accessible whois databases. Many of these databases, which are organized in a hierarchical fashion, have a web-based front-end so that they can be accessed via a browser. Alternatively, the whois command built into some operating systems can be used to formulate a whois query. There are several websites devoted to getting whois information, each providing a portal that will query various whois servers on the Internet. Some of the most popular include:

• www.samspade.org
• www.geektools.com
• www.whois.net
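On systems that include a whois client, a command-line query is the quickest route. A brief sketch (the domain is only an example, and the fields returned vary by registrar):

$ whois hakin9.org | less
# typical fields: registrant, administrative and technical contacts,
# registrar, creation/expiry dates, and name servers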


Web Reconnaissance

As a start to the recon phase, the tester can use a search engine such as Google to learn more about the target organization. In particular, it is important to conduct searches on the target organization’s name to gather the following information, which should be recorded in the tester’s results:

• Major businesses: What is the industry or industries associated with the target? Financial services? Government agency? Manufacturing?
• Major products or services: What does the target organization produce? What are the brand names of its products or services?
• Corporate officers and other VIPs: Who is most important in the target organization? Who are its leaders? Who is associated with its technical infrastructure?
• Physical locations: Where are the major facilities of the target organization?
• Recent press releases: What has the target enterprise told the public lately about itself? What does it consider important from an image and marketing perspective?

Most organizations have job requisition information available on the Internet as they look to hire new staff. These job postings often contain detailed information about the technical environment of the enterprise. In addition to searching the target site itself, you should look for open jobs on various job-hunting sites, like Yahoo’s Hotjobs.com and Monster.com. Both of these sites let you search based on categories of jobs.

Other helpful areas to search are social networking sites. People put a significant amount of information about themselves on these sites, often including where they work. That employer information is exactly what we are looking for. By searching within a social networking site for people who work for the target enterprise, we can then focus in on their background and skill set.

Metadata

As organizations create documents, the software they use embeds an enormous amount of information in the document files. A good deal of metadata is also included in the file. Much of this metadata is associated with the formatting and display of the other data in the file. Besides this formatting metadata, many file creation and editing tools include additional metadata entries that can be very useful to penetration testers during the reconnaissance phase, such as:

• User names: Penetration testers often need user names for exploitation and password-guessing attacks.
• File system paths: Knowing the full path of the original file when it was created can reveal useful tidbits about the target organization.
• E-mail addresses: This data can be useful if the penetration test scope includes spear phishing tests.
• Client-side software in use: Given that client-side exploitation is such a common attack vector, it can be helpful for penetration testers to know which client-side programs are in use.

Almost every document type has some form of metadata, but some are richer in metadata than others. The following types of documents, generated and used by most enterprises, are of particular interest to penetration testers:

• pdf files: These files are associated with Acrobat Reader and a variety of other pdf creation and editing tools.
• doc/docx, xls/xlsx, and ppt/pptx files: These files are associated with the Microsoft Office suite, but are also used by several other related tools.
• jpg and jpeg: These image files often contain a significant amount of metadata, including data about the camera used to take a picture, the file system of the machine where the image was edited, and details about the image-editing software.


• html and htm: These file types contain web pages, and may at first seem uninteresting. However, their comments and hidden form elements could contain metadata that is very useful to a penetration tester. Additionally, scripts embedded in the HTML may reveal sensitive information or undocumented features of a web application.
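A hedged sketch of pulling metadata from collected documents. The strings utility is nearly universal on *nix systems; exiftool is a common third-party tool that may not be installed everywhere. The file names are placeholders:

$ strings brochure.pdf | grep -i -e author -e creator   # crude but dependency-free
$ exiftool annual-report.docx                           # rich metadata, if exiftool is available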

DNS

The last elements of the whois record include the Domain Name System (DNS) servers associated with the target organization, listed in the order of primary, secondary, and tertiary (if it exists) DNS servers. Name servers are focused on resolving domain names into IP addresses, but that isn’t their sole function. They also indicate which machines are mail servers for a given domain, among other useful information. DNS servers house a variety of different records, including (but not limited to):

• NS: Name Server record, which indicates the name servers associated with a given domain.
• A: Address record, which maps a domain name to an IP address.
• MX: Mail Exchange record, which identifies the mail servers for the given domain.
• CNAME: Canonical Name record, which indicates aliases and alternative names for a given host.
• SOA: Start of Authority record, which indicates that a server is authoritative for that DNS zone (set of records).
• PTR: Pointer record for inverse lookups. Also called a reverse record, it maps an IP address to a domain name.

Tools such as nslookup (available in both Windows and *nix) and dig can be used to query DNS servers to retrieve some or all of the available DNS record types. There are also websites, such as www.dnsstuff.com, that can be used to perform DNS lookups.

Search Engines

Another step is to use publicly accessible search engines to look for signs of vulnerabilities on systems. Google, Yahoo, and Microsoft’s Live Search all contain a great deal of information that could indicate the presence of vulnerabilities in systems associated with the target environment. By sending the appropriate queries to the search engines themselves, we may be able to identify vulnerable systems without actually sending any packets to those systems directly. Google has some of the most advanced search queries, including:

• site: The “site:” directive allows an attacker to search for pages on just a single site or domain, narrowing down and focusing the search. For example, typing “site:Hakin9.com” will return all the web pages associated with the Hakin9.com domain.
• link: The “link:” directive shows sites that link to a given web site. During recon, this directive can be used to find business partners, suppliers, and customers.
• inurl: The “inurl:” directive lets us search for specific terms to be included in the URL of a given site. This can be helpful in finding well-known vulnerable scripts on web servers. For example, searching for inurl:viewtopic.php will find sites with URLs that contain “viewtopic.php” in them.
• ext: or filetype: Both of these directives allow you to search for specific file types. You can use this to identify all files of a given type within a domain by combining it with the site: directive. If you wanted to find all Microsoft Word files on the Hakin9 website, for example, you could look for site:Hakin9.com filetype:doc.

There are many, many other things you can find using search engines such as Google. Things like system configurations, passwords, and confidential information can easily be found using the right directive and search strings. A list of useful Google searches for finding vulnerabilities can be found at the Google Hacking Database (Johnny.ihackstuff.com).
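To tie the DNS record types described earlier in this section to the command line, a short sketch using dig and nslookup (the domain is an example; substitute an in-scope target):

$ dig hakin9.com MX +short        # mail servers for the domain
$ dig hakin9.com NS +short        # authoritative name servers
$ nslookup -type=SOA hakin9.com   # start of authority record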

Key Questions

1. What are some key things to keep track of about a given target system when conducting a penetration test?
2. When conducting web reconnaissance, what are some things to take note of?


3. What sorts of key information can be found in file metadata?
4. What are some common types of files that can provide useful information in their metadata?
5. What is the purpose of querying a target network’s DNS server? What sort of information can you get from it?
6. What might you be able to find when using search engines to query publicly available information? How are advanced operators such as inurl: and ext: helpful in allowing you to get more relevant search results?

Exercises

1. Using the tool of your choice, perform whois queries on several well-known websites as well as smaller ones (local businesses, for example). What sort of information are you able to get?
2. Go to the website of a company of your choice and explore it. See what information you are able to obtain that could be useful to you if you were conducting a penetration test. Also check national job sites to see if you can learn anything about that company there.
3. Use a DNS lookup tool such as nslookup or the dnsstuff website to query a DNS server and see what information you can obtain. What did you get that could be useful when conducting a penetration test?
4. Use a search engine such as Google to map out a company’s website. Use directives such as inurl: and ext: to gather information about particular types of applications, files, or client apps in use by the company. Use the link: directive to see what information you can gather about the company from other websites.

Section 4: Intro to Linux

Outcome Statement

In this section I will demonstrate a fundamental understanding of the Linux operating system.

The default command terminal or shell (command line) in most Linux distros is BASH (Bourne Again SHell). In Bash, remember that it is simple to correct an error in typing a command using “CTRL-u”. Performing this action will abandon the entire line, removing any input from the buffer (and hence evidence such as the BASH history file).

Unlike Windows, Linux operating systems are case-sensitive: in Linux, the case you use matters.

Linux allows commands to be put to “sleep”, killed, sent into the background, and many more options. Some of these are discussed in Table 1, which lists keys that can be used in a shell along with their effects. In addition, there are a number of other simple techniques for controlling execution in a Linux terminal shell. These include sending a job to the foreground (fg) or background (bg or &), redirecting output (> or >>), or piping it to another command (|).

• CTRL-j: Line feed. “CTRL-J reset CTRL-J” can act as a reset command on some systems.
• CTRL-u: Remove the current line from the command buffer (and do not log it).
• CTRL-s: Stop (and restart) a process. This sequence may suspend a command (i.e. pause it). CTRL-s stops the system from sending any further data to the screen until a CTRL-q is pressed.
• CTRL-q: CTRL-s and CTRL-q control the flow of output to the terminal. CTRL-q resumes a terminal following the CTRL-s key.
• CTRL-z: Use CTRL-z to stop a job. The command “bg” (background) can then be used to put the program into the background (type “bg” on the command line).
• CTRL-c: Interrupt a program. This is used to “break” a program’s execution, abandoning the current command line and returning to the prompt. This signal allows the program to clean up before exiting.
• CTRL-d: This is an end-of-input response. Some commands (such as mail) require this. Use with caution, as this may log you out of some shells.
• CTRL-\: Kills a program (without cleaning up).
• CTRL-L: Clears the screen. This is the same as the command “clear”.
• TAB: Auto-completes the names of directories and files. Hit Tab for the shell to expand a file name to a unique name that matches what you’ve typed so far. If there are multiple items that match (i.e., there is nothing unique yet), you can hit Tab again to show the names of all files or directories in your current working directory that match what you’ve typed so far.
• CTRL-R: A history search on recent commands.
• HOME / END: Use the “HOME” key to go to the start of a command line and the “END” key to jump to the end.

Table 1: BASH shortcuts

Shell History

Bash, like many other shells, remembers your shell history, letting you hit the up and down arrows to access and edit recent commands, which you can re-run by simply hitting Enter. Once you’ve chosen a previous command, you can hit the left and right arrow keys to position your cursor to edit the command. Bash also supports tab auto-completion for the names of directories and files, as described under TAB in Table 1: Tab expands to a unique value, and Tab-Tab shows all items that match what you’ve typed so far if nothing is unique.

Basic UNIX commands

Many of these commands are not present on all UNIX or Linux systems. When connecting to a system for the first time, it is always a good idea to verify the operating system version. Next, use a command such as “which” to validate the existence and path of the commands that you intend to use. Where a command is not in your path, “which” will not return any information about it. This does not mean that the command is not installed on the system; it is just not in your path. Offline investigation into the system defaults for the particular operating system version (there are many good information sources on the Internet) is the first step in finding what should be on the system.

Remember that Linux systems are often customised by the system administrator, so expect far more variation in paths and commands than you would find on a Windows-based system. To this end, search tools, file listing tools and text editors will become extremely important in conducting an initial reconnaissance of the system.

Before anyone can master the finer arts of bypassing controls and running shell exploits, it is essential that the basics are not just understood, but mastered. There are a number of key areas and commands that are fundamental to Linux. A failure to master the basics can be the difference between achieving success and being that person found running ‘dir’ on a Solaris host when ‘ls’ was desired. Some of the most fundamental command areas are detailed below:


• Account management (useradd, passwd, su, sudo, login, who, whoami, export);
• File management (cd, pwd, ls, mount, mkdir, chmod, chgrp, cp, mv, del, find, locate, mcedit, cat, more/less, pg);
• Network management (nc, ifconfig, ifdown/ifup, ping, netstat, route, tcpdump, cat /dev/ip, cat /dev/tcp, ssh, ftp, tftp);
• Executing programs (PATH, which, ‘./’, ps, jobs, at, cron);
• File editing (vi, emacs, mail);
• Compression and Compilation (tar, gzip, gunzip, rpm, configure, make);
• Scripting and more (perl, C, shell scripts, sed/awk, grep, man, info, shutdown, kill, bg, etc.).

In conducting a reconnaissance of a *NIX system, it is good practice to ensure that you record the version of the system. It is also important to record the time and date that it is set to; a short sketch of this first-contact check follows.
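A minimal first-contact sketch, assuming a standard shell. The command names are standard, but availability and output will vary by distribution:

$ uname -a         # operating system, kernel version, architecture
$ date             # system time, date and time zone
$ which nc perl    # confirm the paths of tools you plan to use
$ echo $PATH       # see where the shell searches for commands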

The Essential Commands

The following list provides a quick introduction to the essential commands that you will need to navigate a Linux system.

ls

The ‘ls’ command is used to list the files in a directory. It is similar to the Microsoft ‘dir’ command, but far more powerful. As all hardware, memory, etc. are treated as files in Linux, any device can be accessed as a file – if you know where it is and have the correct permissions. The ‘ls’ command can be pointed at a single file, a group of files, the current directory or a specific directory elsewhere in the directory tree.

ls -l

This command lists file entries using the ‘long format’. This information is valuable: it includes the file permissions, the size of the file, the file owner and group, and the time when the file was last modified. The last modified time is important to note, as a change may quickly alert a vigilant system administrator.

ls -a

This option will list all files – even the hidden ones. In Linux, a file is “hidden” (similar to the Windows hidden file attribute) when its name starts with a “.”.

ls -r

The “r” flag instructs ‘ls’ to display its output in reverse order.

ls -t

The “t” flag instructs ‘ls’ to display its output in order of the file timestamps. This allows you to quickly find all files that have been changed in a selected period.

Used together, these options can help you find all of the files in a directory that have been changed within the time that you have been logged into a host. For example, the command combination ‘ls -altr | pg’ will output all of the files in the current directory in order of timestamp, starting with the most recently altered or added files and working back to the oldest. Further, by piping the output of ‘ls’ to the ‘pg’ (page) command, you can see the output a single screen (page) at a time (rather than having it scroll past faster than you can read it).
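To make the ‘long format’ fields concrete, here is a sketch of a single line of ‘ls -l’ output. The file name and values are invented for illustration:

$ ls -l notes.txt
-rw-r--r-- 1 craig users 1024 Dec 18 15:14 notes.txt
# type+permissions, link count, owner, group, size, last-modified time, name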

File Commands

The following is a quick introduction to a number of Linux commands that are used to display or manipulate files.


more – less

The ‘more’ command displays a file one page at a time. It is similar to the ‘pg’ and ‘less’ commands, with the added benefit that it is available on practically any Linux system. The ‘pg’ and ‘less’ commands may be more powerful, but they are not universally available. Hitting the space bar will scroll to the next page, and hitting ‘q’ will quit the command. The option ‘/pattern’ is available on most Linux systems and provides the ability to search for a defined pattern within the file.

emacs – vi

Both ‘emacs’ and ‘vi’ are text editors. Many people have a clear preference for one or the other of these editors, with ‘emacs’ being far more powerful than ‘vi’. Both of these commands are used to create or edit a file, and it is essential to become adept in the use of at least one of them. As ‘vi’ is available on nearly every Linux system ever built – even most firewalls – it is essential to understand how to use it. Although ‘emacs’ is more powerful in the options it provides, it is commonly removed from many secured systems (such as IDS, gateways and firewalls). With knowledge of both of these commands, you are unlikely to be caught out without access to a text editor when you need one.

mv

The ‘mv’ command simply moves a file. It either renames the file within the same directory (it does not create a copy) or moves it into a different directory.

cp

The ‘cp’ (copy) command copies a file. It is similar to ‘mv’, with the difference that the original file remains unchanged (other than its access timestamp, which is updated).

rm

The ‘rm’ command removes or deletes a file, similar in effect to the Windows ‘del’ command. The ‘-i’ option asks for confirmation: ‘rm -i’ will require you to confirm that you actually wish to delete the file before it is removed. Many system administrators alter their ‘.cshrc’ file so that the default behaviour of ‘rm’ is to ask for confirmation. This can be problematic if you are creating scripts based on the user’s profile.

diff

The ‘diff’ command compares two files and displays the differences. This is used to find changes to a file.

wc

The ‘wc’ (word count) command displays the number of lines, words, and characters in a file.

chmod options

The ‘chmod’ command (covered in far greater detail later in the paper) is used to change the permissions on a Linux file. These permissions include the ability to read, write, and execute a file. Without the correct permissions, you will not be able to view, modify or execute a file.

date

The “date” command is used to either display (to STDOUT) or set the system date and time. Entering the command “date” by itself will list the date and time of the system it is run on. An example of the output of the “date” command is listed below:


Thu Dec 18 15:14:21 AEST 2008

The command date -s "12/22/2008 17:23:59" would be used to set the system’s date and time to the value given in the command.

The date command tells you a great deal about a system. From this one command, you can gain an understanding of the level of care that is applied to the host, its location and other information. If the clock skew (the distance from the real time) is large, then the system is likely to receive less than adequate care. In addition, if the clock is inaccurate, it will be simpler to hide an attack in the system logs. The date command will also return the time zone that is configured on the system. You will not always know where a host is located, and this information can aid you in determining where the system resides. It is generally necessary to have access to a privileged account (root) in order to set the system date and time. This will be covered in more detail later in the paper.

uname

The “uname” command gives a wealth of information to anybody who does not have knowledge of a system. This command is similar to the “ver” command supplied with Microsoft Windows systems (and DOS). This single command provides intelligence on the following:

• the Operating System (O/S) running on the host;
• the O/S kernel name and version (which can offer information as to the patching practices associated with the host);
• the host’s processor type (e.g. i386, i686, sparc, mips) and the hardware platform (e.g. Sun-Fire-280R); and
• the O/S or kernel release level.

The following example displays the wealth of information returned by this command:

$ uname -a
Linux linux-09l5 2.6.25.16-0.1-pae #1 SMP 2008-08-21 00:34:25 +0200 i686 i686 i386 GNU/Linux

In some UNIX variants (including AT&T UNIX System V Release 3.0 derivatives such as Solaris), the “setname” command is available and can be used to modify the values that are returned by “uname”. As such, it is not possible to trust all of the information that these initial reconnaissance commands return; but it is a good start.

which

The command ‘which’ is used to find the location (path) of a program file if it exists in the user’s path. If the command being sought is not in the user’s path but is on the system, ‘which’ will not return anything. The ‘which’ command prints the full path of the executable (if found) to stdout. This is done by searching through the directories listed in the PATH environment variable belonging to the user. Note that this process updates the access times of the files and directories searched.
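A quick sketch of ‘which’ in use. The tool names are arbitrary examples, and paths will vary from system to system:

$ which nmap
/usr/bin/nmap
$ which no-such-tool
$                     # no output: the command is not in the current PATH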

Returning to one's home directory is as simple as entering the 'cd' command without any argument or options. The command 'cd ..' moves you one level higher in the directory hierarchy, towards the root or '/' directory.

pwd
Displays the current directory that you are in.

ff
The 'ff' command is used to find files on a Linux host. The 'ff -p' option has the benefit of not needing the full name of the file being searched for. This command is similar to the 'find' command, which is discussed in detail later.

grep
The 'grep' command extracts, displays or simply finds strings within a file. The command performs searches based on regular expressions. There are a number of related 'grep' programs, including egrep and fgrep.
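As a quick illustration, these commands combine naturally (the directory and search string here are only examples):

$ mkdir /tmp/audit             # create a new working directory
$ cd /tmp/audit                # change into it
$ pwd                          # confirm the current location
/tmp/audit
$ which ls                     # locate the 'ls' binary via the user's PATH
/bin/ls
$ grep -i "root" /etc/passwd   # case-insensitive search for a string within a file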

Finding out about other users on the system
These commands are used to find out about other users on a Linux host. When testing the security of a system covertly (such as when engaged in a penetration test) it is best to stop running commands while the system administrator is watching.

w
The 'w' command displays any user logged into the host and their activity. This is used to determine whether a user is idle or actively monitoring the system.

who
The 'who' command is used to find which users are logged into the host, as well as to display their source address and how they are accessing the host. The command will show whether a user is logged into a local tty (more on this later) or is connecting over a remote network connection.

finger
The 'finger' command is rarely used these days (but does come up from time to time on legacy and poorly configured systems). The command provides copious amounts of data about the user being "fingered". This information includes the last time that user read their mail and their login details.

last -1
The 'last' command can be used to display the last user to have logged on and off the host and their location (remote or local tty). With no options, the command displays a log of all recorded logins and log-offs; the '-1' option limits the output to the most recent entry. When a username is provided as an option, the command displays all of that user's recorded logins. This is used when profiling a system administrator to discover the usual times that person will be logged in and monitoring a system.

whoami
This command displays the username that is currently logged into the shell or terminal session.

passwd
The 'passwd' command is used to change your own password (no options) or that of another user (if you have permission to do so).
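A short illustrative session using these commands follows; the username is an example and the output will differ from system to system:

$ w           # who is logged in and what they are doing
$ who         # logged-in users, their ttys and source addresses
$ last -1     # the most recent recorded login
$ last jane   # every recorded login for the user 'jane'
$ whoami      # the account this shell is running as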

kill PID
This command "kills" any process whose PID (process ID) is given as an option. The 'ps' command (detailed later) is used to find the PID of a process. This command can be used to stop a monitoring or other security process when testing a system. The 'root' user can stop any process, but by default other users on a host can only stop their own (or their group's) processes.

du
The 'du' command displays the disk usage (that is, the space used on the drive) associated with the files and directories listed in the command option.

df
The 'df' command is used to display the amount of free and used disk space on the system. This command displays this information for each mounted volume of the host.
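For example, a typical sequence might look like the following (the PID shown is hypothetical, and 'ps' is detailed later in the paper):

$ ps -ef | grep syslogd   # find the PID of a target process
$ kill 1234               # stop the process with PID 1234
$ du -sh /var/log         # summarise the disk usage of a directory
$ df -h                   # free and used space per mounted volume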

Authentication and Validation
There are a variety of ways in which a user can authenticate in UNIX. The two primary distinctions involve authentication to the operating system as against authentication to an application. In the case of an application such as a window manager (e.g. X-Window), authentication to the application is in fact authentication to the operating system itself. Additionally, authentication may be divided into local and networked authentication. In either case, the same applications may provide access to either the local or a remote system. For instance, X-Window may be used both as a local window manager and as a means of accessing a remote UNIX system. Likewise, network access tools such as SSH provide the capability of connecting to a remote host but may also connect to the local machine through either its advertised IP address or the localhost (127.0.0.1) address.

The UNIX authentication scheme is based on the /etc/passwd file. PAM (pluggable authentication modules) has extended this functionality and allowed for the integration of many other authentication schemes. PAM was first proposed by Sun Microsystems in 1995 and was integrated into Red Hat Linux the following year. Subsequently, PAM has become the mainstay authentication scheme for Linux and many UNIX varieties. PAM has been standardized as a component of the X/Open UNIX standardization process, resulting in the X/Open Single Sign-on (XSSO) standard. From the assessor's perspective, however, PAM necessitates a recovery mechanism integrated into the operating system in case a difficulty develops in the linker or shared libraries. The assessor also needs to come to an understanding of the complete authentication and authorization methodology deployed on the system. PAM allows for single sign-on across multiple servers. Additionally, there are a large number of plug-ins to PAM that vary in their strength. It is important to assess the overall level of security provided by these and remember that the system is only as secure as the weakest link.

The fallback authentication method for any UNIX system lies with the /etc/passwd (password) file. In modern UNIX systems this will be coupled with a shadow file. The password file contains information about the user: the user ID (UID), the group ID (GID), a descriptor field (generally the user's name), the user's home directory and the user's default shell.

The fields of the file are:

User : Password : UID : GID : Name : Home Directory : Shell

root:x:0:0:root:/root:/bin/csh
bin:x:1:1:bin:/bin:/bin/sh
daemon:x:2:2:daemon:/sbin:/bin/noshell
adm:x:3:4:adm:/var/adm:/bin/noshell
lp:x:8:6:lp:/var/spool/lpd:/bin/noshell
cwright:x:500:50:wheel:/usr/home/csw:/bin/sync

Figure 3.1 The /etc/passwd File

The user ID and group ID give the system the information needed to match access requirements. The home directory in the password file is the default directory that a user will be placed in on an interactive login. The shell directive sets the initial shell assigned to the user on login. In many cases a user will be able to change directories or initiate an alternative shell, but this at least sets the initial environment. It is important to remember that the password file is generally world readable. In order to correlate user IDs to user names in directory listings and process listings, the system requires that the password file be readable (at least in read-only mode) by all authenticated users.

The password field of the /etc/passwd file has a historical origin. Before the password and shadow files were split, password hashes were stored in this file. To maintain compatibility, the same format has been kept. In modern systems where the password and shadow files are split, an "x" indicates that the system has stored the password hashes in an alternative file. If the field is empty instead of containing an "x", the account has no password.

The default shell may be a standard interactive shell, a custom script or application designed to limit the functionality of the user, or even a false shell designed to restrict use and stop interactive logins. False shells are generally used for service accounts. This allows the account to log in (such as in the case of "lp" for print services) and complete the task it is assigned. Additionally, users may be configured to run an application. A custom script could be configured to start the application, giving the user limited access to the system, and to then log the user off when they exit the application. It is important for the assessor to check that breakpoints cannot be set allowing an interactive shell. Further, in the case of application access, it is also important to check that the application does not allow the user to spawn an interactive shell if this is not desired. Either of these flaws may be exploited to gain deeper access to a system.

As was mentioned above, the majority of modern Linux systems deploy a shadow file. This file is associated with the password file but, unlike the password file, should not be accessible (even to read) by the majority of users on the system. The format of this file is:

User : Password_Hash : Last_Changed : Password Policy

This allows the system to match the user and other information in the shadow file to the password file. The password entry is in actuality a password hash. The reason that this should be protected comes back to the reason the file first came into existence. Early versions of UNIX had no shadow file, and since the password file was world readable, a common attack was to copy the password file and use a dictionary to "crack" the password hashes. By splitting the password and shadow files, the password hash is no longer available to all users, making it more difficult to attack the system. A given password hash function always creates the same number of characters (the length may vary from system to system based on the algorithm deployed, such as MD5 or DES). Linux systems are characteristically configured to allow a minimum of zero days between password changes and a maximum of 99,999 days between changes; in effect, this means that the password policies are ineffective. The fields that exist in the shadow file are detailed below:
• The username,
• The password hash,
• The number of days since 01 Jan 1970 that the password was last changed,
• The number of days that must pass before the password can be changed,
• The number of days after which the password must be changed,
• The number of days before expiration that the user is warned,
• The number of days after expiration that the account is disabled,
• The number of days since 01 Jan 1970 that the account has been disabled.
Since the hash function always creates a password hash of the same length, it is possible to restrict logins by changing the password hash field in the shadow file. For instance, setting the password hash field to a string such as "No_login" creates an account that cannot be logged into interactively: as this string is shorter than a genuine password hash, no password could ever hash to it. In this instance we have created an account that is not formally disabled but will not allow interactive logins.
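As an illustration, a typical shadow entry is shown below. The hash and day counts are invented for this example; the fields are, in order: username, password hash (the "$1$" prefix denotes MD5), days since 01 Jan 1970 of the last change, minimum age, maximum age, warning days, and the (empty) inactivity and expiry fields:

# grep cwright /etc/shadow
cwright:$1$Tb8vJd$0qXm3tEw9yPzR1uVbGxQh/:14200:0:99999:7:::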

Many systems also support complex password policies. This information is generally stored in the "password policy" section of the shadow file. The password policy generally consists of the minimum password age, maximum password age, expiration warning timer, post-expiration disable timer, and a count of how many days an account has been disabled. Most system administrators do not know how to interpret the shadow file. As an auditor, knowledge of this information will be valuable. Not only will it allow you to validate password policy information, but it may also help in displaying a level of technical knowledge.

When assessing access rights, it is important to look at both how the user logs in and where they log in from. Always consider the question of whether users should be able to log in to the root account directly. Should they be able to do this across the network? Should they authenticate to the system first and then re-authenticate as root (using a tool such as "su" or "sudo")? When auditing the system, these are some of the questions that you need to consider. Many UNIX systems control this type of access using the "/etc/securetty" file. This file includes an inventory of all of the ttys used by the system. When auditing the system, first collate a list of all locations that would be considered secure enough to sanction root logins from those points. When testing the system, verify that only terminals that are physically connected to the server can log into the system as root. Generally, this means that there is either a serial connection to a secure management server or, more likely, that connections are allowed only from the root console itself. It is also important to note that many services such as SSH have their own configuration files which allow or restrict authentication by the root user. It is important to check not only the "/etc/securetty" file but also any other related configuration files associated with individual applications.

Side note: TTY stands for teletype. Back in the early days of UNIX, one of the standard ways of accessing a terminal was via the teletype service. Although this is one of the many technologies that have faded into obscurity, UNIX was first created in the 1960s and 70s, and many of the terms have come down from those long-distant days.
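Two quick checks an auditor might script for this follow; the paths are the common defaults and may differ between systems:

# cat /etc/securetty                                # terminals from which root may log in directly
# grep -i "^PermitRootLogin" /etc/ssh/sshd_config   # SSH's own root login policy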

Usernames, UIDs, the Superuser
Root is almost always connected with the global privilege level. In some extraordinary cases (such as special UNIXes running Mandatory Access Controls) this is not true, but these are rare. The super-user or "root" account (designated universally as UID "0") includes the capacity to do practically anything on a UNIX system. RBAC (role-based access control) can be implemented to provide for the delegation of administrative tasks (tools such as "sudo", short for superuser do, also provide this capability). RBAC provides the ability to create roles. Roles, if configured correctly, greatly limit the need to use the root user privilege. RBAC limits both the use of the "su" command and the number of users who have access to the root account. Tools such as sudo successfully provide similar types of control, but RBAC is more granular, allowing for a far greater number of roles on any individual server. It will come down to the individual situation within any organization as to which particular solution is best.
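As a sketch of the sudo approach, a single line in /etc/sudoers (always edited with 'visudo') can delegate one administrative task without sharing the root password. The group name and command below are illustrative only:

# visudo
%operators ALL=(root) /sbin/shutdown

Members of the "operators" group may now run only /sbin/shutdown as root, which is easy to audit; an RBAC role would express the same delegation with finer granularity.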

File System Access Control
UNIX file level access controls are both simple and complex. Granting permissions to individual users or to small groups is simple. Difficulties arise where a system has to provide access to a large number of users or groups; in this situation the number of groups can grow rapidly. UNIX file permissions are defined for:
• Owner
• Group
• World
The owner relates to an individual user. Restrictions on the owner associate file access with an individual. Group access provides the ability to set access restrictions across user groups. UNIX provides a group file (usually "/etc/group") that contains a list of group memberships. Alternative applications have been developed for larger systems due to the difficulties associated with maintaining large numbers of group associations in a flat file database. The world designation is in effect equivalent to the Windows notion of Everyone. UNIX has three main permissions: read, write and execute. In addition there are a number of special permissions that we will discuss below. The read permission provides the capability to read a file or list the contents of a directory. The write permission provides the capability to edit a file, or add or delete a directory entry. The execute permission provides the capability to execute or run an executable file.
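For example, to grant the owner read and write access, the group read-only access, and the world nothing (the file name and listing output are illustrative):

$ chmod 640 report.txt
$ ls -l report.txt
-rw-r----- 1 cwright wheel 1024 Dec 18 15:14 report.txt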

UNIX also provides a special capability through the setting of a "sticky bit". The sticky bit protects the files within a public directory that users are required to write to (e.g. the "/tmp" directory). This protection is provided by stopping users from deleting files that belong to other users and were created in this public directory. In directories where the sticky bit has been set, only the owner of the file, the owner of the directory, or the root user has permission to delete a file.
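The /tmp directory on most systems demonstrates this; note the trailing "t" in the mode (the listing output is typical rather than exact, and the second path is an example):

$ ls -ld /tmp
drwxrwxrwt 12 root root 4096 Dec 18 15:14 /tmp
$ chmod +t /shared/upload   # set the sticky bit on another public directory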

Figure 3.2 Unix File Permissions

The Unix file permission characters are: "r, w, x, t, s, S". The following table shows the octal value for each combination of "r" (read), "w" (write) and "x" (execute):

0  ---  no permissions
1  --x  execute
2  -w-  write
3  -wx  write and execute
4  r--  read
5  r-x  read and execute
6  rw-  read and write
7  rwx  read, write and execute

The first character listed when using symbolic notation to display the file attributes (such as in the output of the "ls -l" command) indicates the file type:

- denotes a regular file
b denotes a block special file
c denotes a character special file
d denotes a directory
l denotes a symbolic link
p denotes a named pipe
s denotes a domain socket

The three additional permissions mentioned above are indicated by changing one of the three "execute" attributes (the execute attribute for user, group or world). The following table details the special setuid, setgid and sticky permissions. Note the difference between a file where the special permission is set on an executable file (lower case) and on a non-executable file (upper case):

Permission             Class   Executable files   Non-executable files
Set User ID (setuid)   User    s                  S
Set Group ID (setgid)  Group   s                  S
Sticky bit             World   t                  T

Figure 3.3 Unix File Permissions

The following examples provide an insight into symbolic notation:

-rwxr-xr-- This permission is associated with a regular file whose user class (owner) has full permissions to read, write and run the file. The group has permission to read and execute the file, and the world (everyone on the system) is allowed only to read the file.

crw-r--r-- This symbolic notation is associated with a character special file whose user (owner) class has both the read and write permissions. The other classes (group and world) only have the read permission.

dr-x------ This symbolic notation is associated with a directory whose user (owner) class has read and execute permissions. The group and world classes have no permissions.

User level access
The UNIX file system commonly distinguishes three classifications of users:
• Root (also called the super-user),
• Users with some privilege level, and
• All other users.
The previous section on access controls showed how UNIX privileges and access to files may be granted with access control lists (ACLs). The simplicity of the UNIX privilege system can make it extremely difficult to configure privileges for complex requirements; conversely, it also makes them relatively simple to audit. The UNIX directory command "ls -al" supplies the means to list all files and their attributes. The biggest advantage for an auditor is the capability to use scripting to capture the same information without having to actually visit the host. A baseline audit process may be created using tailored scripts that the audit team can save to a CD or DVD together with statically linked binaries. Each time there is a requirement for an audit, the same process can be run. The benefits of this method are twofold. First, subsequent audits require less effort. Next, the results of the audit can be compared over time. The initial audit can serve as a baseline, and the results compared to future audits both to verify the integrity of the system and to monitor improvements. A further benefit of this method is that a comparison may be run between the tools on the system and the tools on the disk. Generally, it would be expected that no variation would result from the execution of either version of the tools. In the event that a Trojan or rootkit found its way onto the server, the addition of a simple "diff" command would be invaluable. If the diff command returned no output, it would be likely that no Trojan was on the system (excepting kernel and lower-level software). If, on the other hand, there was a variation in the results, one would instantly know that something was wrong with the system. The primary benefit of any audit control that may be scripted is that it may also be automated. The creation of such a script, with an associated predetermined configuration file, saves the auditors valuable time, allowing them to cover a wider range of systems and provide a more effective service. The selection of what to audit for on a file system will vary from site to site. There are a number of common configuration files associated with each

version of UNIX and also a number of files and directories common to any organization. The integration of best practice tools such as those provided by the Centre for Internet Security, SANS, NIST and the US Department of Defense (see the Appendixes for further details) provides a suitable baseline for the creation of an individual system audit checklist.

Special permissions may be set for a file or directory as a whole, not by class:

The set user ID, setuid, or SUID permission. When a file with this permission set is executed, the resulting process assumes the effective user ID of the file's user class (owner).

The set group ID, setgid, or SGID permission. When a file with this permission set is executed, the resulting process assumes the group ID given to the group class. When setgid is applied to a directory, new files and directories created under that directory inherit the group of that directory. The default behaviour is to use the primary group of the effective user when setting the group of new files and directories.

The sticky permission. The classic behaviour of the sticky bit on executable files allows the kernel to preserve the resulting process image beyond termination. When set on a directory, the sticky permission stops users from renaming, moving or deleting contained files owned by users other than themselves, even if they have write permission to the directory. Only the directory owner and superuser are exempt from this.

Keep an eye on your prompt. If it is a "$", you are not root. If it is a "#", you are root. To create a non-root account on the system, as root, type the following:

# useradd -d /home/fred fred

Changing passwords
It is necessary to set a password for a new account before it can be used. Set the password for the user account "jane" by typing:

# passwd jane
[type account password here]
[retype account password to verify]

If jane wanted to change her own password, jane would type (from the jane account):

$ passwd

File Compression and Concatenation
It is simpler to move files onto or off of a system as a group, and it is quicker (with a lower footprint) to copy a smaller file. This is where compression and concatenation come into effect. There are many compression utilities available, but again I have stuck to the simplest and most commonly available.

tar
The 'tar' command creates a "tarball". This is a concatenation of many files into a single file (the format was originally used for tape archives and backups). The command is also used to extract files. The '-c' option creates a new archive, whereas the '-x' option extracts (that is, restores) files. The 'tar' command can be used on any files or directories, as well as against an entire directory subtree. The command options need not be preceded by the '-' character (although this is good practice). The long-style options (those that use the '--' syntax) still require full definition.

Tarballs can be added to (using the '-A' option) or simply listed (the '-t' option). This command is far more complex than it first seems, and it is essential that time be spent reading and comprehending the full extent of the command ('man tar' is a start).

gzip
This command is used to compress a file or a number of files. This reduces the time to copy a group of files over the network and makes a tarball less obvious. Other tools (such as 'compress') also exist but are less commonly used. The 'gzip' algorithm has one of the highest compression rates of any of the compression commands used in Linux. By default, the 'gzip' command creates compressed files with the extension '.gz'. The extension '.tgz' generally refers to a compressed tarball.

gunzip
The 'gunzip' command uncompresses a file that has been compressed using the 'gzip' command (above).

gzcat
The 'gzcat' command is far less common than the 'gzip' command. Although it may not be found on a target system, you should become familiar with this command as it allows the user to view the contents of a file that has been compressed using 'gzip' without decompressing it. In effect, this command is analogous to 'gunzip -c', with a simple ability to redirect or print the output (for instance, the command 'gzcat file.gz | lpr' will print the contents of the compressed file 'file.gz').

Linux Virtual Terminal Control
If you just have a command-line interface, you can still switch between several different terminals. The Alt-Function keys can be used to change between six virtual terminals:
• ALT-F1 Switch to Terminal 1
• ALT-F2 Switch to Terminal 2
• ...
• ALT-F6 Switch to Terminal 6

cat — display the contents of a file
head — output the first part of file(s)
tail — output the last part of file(s)
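Putting the compression and archiving commands above together, a typical sequence for bundling files for transfer might be (all file names are illustrative):

$ tar -cf etc-backup.tar /etc/passwd /etc/group   # concatenate files into a tarball
$ gzip etc-backup.tar                             # compress it, producing etc-backup.tar.gz
$ gzcat etc-backup.tar.gz | tar -tf -             # list the contents without decompressing (where gzcat exists)
$ gunzip etc-backup.tar.gz                        # restore the tarball
$ tar -xf etc-backup.tar                          # extract the files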

Find
The Linux "find" command is probably one of the system security tester's best friends on any Linux system. This command allows the system security tester to process a set of files and/or directories in a file subtree. In particular, the command has the capability to search based on the following parameters:
• where to search (which pathname and the subtree)
• what category of file to search for (use "-type" to select directories, data files, links)
• how to process the files (use "-exec" to run a process against a selected file)
• the name of the file(s) (the "-name" parameter)
• perform logical operations on selections (the "-o" and "-a" parameters)

One of the key problems associated with the "find" command is that it can be difficult to use. Many experienced professionals with years of hands-on experience on Linux systems still find this command to be tricky. Adding to this confusion are the differences between Linux operating systems. The find command provides a complex subtree traversal capability. This includes the ability to exclude directory tree branches from the traversal and to select files and directories with regular expressions. As such, the specific types of file system searched with this command may be selected.

The find utility is designed for the purpose of searching files using directory information. This is in effect also the purpose of the "ls" command, but find goes much further. This is where the difficulty comes into play. Find is not a typical Linux command with a large number of parameters; it is rather a miniature language in its own right. The first option in find consists of setting the starting point or subtrees under which the find process will search. Unlike many commands, find allows multiple starting points to be set, reading each initial option before the first "-" character. It is thus one command that may be used to search multiple directories in a single search. The paper "Advanced techniques for using the Linux find command" by B. Zimmerly provides an ideal introduction to the more advanced features of this command, and it is highly recommended that any system security tester become familiar with it. This section of the chapter is based on much of his work.

The complete language of find is extremely detailed, consisting of numerous separate predicates and options. GNU find is a superset of the POSIX version and contains an even more detailed language structure. This difference will only be used within complex scripts, as it is highly unlikely that this level of complexity would be used effectively in interactive use:

-name True if the pattern matches the current file name. Simple (shell) regex may be used. A backslash (\) is used as an escape character within the pattern, and the pattern should be escaped or quoted. If you need to include parts of the path in the pattern, in GNU find you should use the predicate "-wholename".

-atime, -ctime, -mtime Search on the file's last access time, last status-change time and last modification time respectively, measured in days or minutes. The time interval is given as the parameter; these values are either positive or negative integers.

-fstype type True if the filesystem to which the file belongs is of type "type". For example, on Solaris mounted local filesystems have type ufs (Solaris 10 added zfs). For AIX the local filesystem is jfs or jfs2 (journalled file system). If you want to traverse NFS filesystems you can use nfs (network file system). If you want to avoid traversing network and special filesystems you should use the predicate "-local" and, in certain circumstances, "-mount".

-local This option is true where the file system type is not a remote file system type.

-mount This option restricts the search to the file system containing the directory specified. The option does not list mount points to other file systems.

-newer/-anewer/-cnewer baseline The time of modification, access time or creation time is compared with the same timestamp in the file used as a baseline.

-perm permissions Locates files with certain permission settings. This is an important option when searching for world-writable files or SUID files.

-regex regex The GNU version of find allows for file name matches using regular expressions. This is a match on the whole pathname, not just the filename. The "-iregex" option provides the means to ignore case.

-user This option locates files that have the specified ownership. The option "-nouser" locates files without ownership, i.e. where no user in "/etc/passwd" matches the file's numeric user ID (UID). Files are often created in this state when extracted from a tar archive.

-group This option locates files that are owned by the specified group. The option "-nogroup" locates files for which no group matches the file's numeric group ID (GID).

-xattr This is a logical function that returns true if the file has extended attributes.

-xdev Same as the "-mount" parameter. This option prevents the find command from traversing a file system different from the one specified by the Path parameter.

-size This parameter is used to search for files with a specified size. The "-size" attribute allows the creation of a search specifying how large (or small) the files should be to match. You can specify the size in kilobytes, and optionally use + or - to specify sizes greater than or less than the stated argument. For instance:
• find /usr/home -name "*.txt" -size 4096k
• find /export/home -name "*.html" -size +100k
• find /usr/home -name "*.gif" -size -100k

-ls Lists the current file in "ls -dils" format on standard output.

-type Locates a certain type of file. The most typical options for -type are:
• d A Directory
• f A File
• l A Link

Logical Operations
Searches using "find" may be created using multiple logical conditions connected using logical operations ("AND", "OR" etc.). By default, options are concatenated using AND. In order to have multiple search options connected using a logical "OR", the expression is generally contained in escaped brackets to ensure the proper order of evaluation. For instance:

\( -perm -2000 -o -perm -4000 \)

The symbol "!" is used to negate a condition (it means logical NOT). "NOT" should be specified with a backslash before the exclamation point ( \! ). For instance:

find . \! -name "*.tgz" -exec gzip {} \;

The "\( expression \)" format is used in cases where there is a complex condition. For instance:

find / -type f \( -perm -2000 -o -perm -4000 \) -exec /mnt/cdrom/bin/ls -al {} \;

Output Options
The find command can also perform a number of actions on the files or directories that are returned. Some possibilities are detailed below:

-print The "print" option displays the names of the files on standard output. The output can also be piped to a script for post-processing. This is the default action.

-exec The "exec" option executes the specified command. This option is most appropriate for executing moderately simple commands. Find can execute one or more commands for each file it returns using the "-exec" parameter. Note that one cannot simply enter the command as-is: the string "{}" is replaced by the current file name, and the command must be terminated with an escaped or quoted semicolon. For instance:

find . -type d -exec ls -lad {} \;
find . -type f -exec chmod 750 {} ';'
find . -name "*rc.conf" -exec chmod o+r '{}' \;
find . -name core -ctime +7 -exec /bin/rm -f {} \;

find /tmp -exec grep "search_string" '{}' /dev/null \; -print

An alternative to the "-exec" parameter is to pipe the output into the "xargs" command. This section has only just touched on find, and it is recommended that the system security tester investigate this command further. A commonly overlooked aspect of the "find" command is locating files that have been modified recently. The command:

find / -mtime -7 -print

displays all files under the '/' directory that were modified within the last seven days. The command:

find / -atime -7 -print

does the same for last access time. Whenever a file is used, its file times change: each change to a file updates the modified time, and each time a file is executed or read, the last-accessed time is updated. These times (the last modified and accessed times) can themselves be updated using the touch command.

A Summary of the find command
Effective use of the find command can make any security assessment much simpler. Some key points to consider when searching for files are detailed below:

• Consider where to search and what subtrees will be used in the command, remembering that multiple paths may be selected:

find /tmp /usr /bin /sbin /opt -name sar

• The find command allows for the ability to match a variety of criteria:

-name         search using the name of the file(s); this can be a simple regex
-type         what type of file to search for (d -- directories, f -- files, l -- links)
-fstype type  allows for the capability to search a specific filesystem type
-mtime x      file was modified "x" days ago
-atime x      file was accessed "x" days ago
-ctime x      file's status was changed "x" days ago
-size x       file is "x" 512-byte blocks big
-user user    the file's owner is "user"
-group group  the file's group owner is "group"
-perm p       the file's access mode is "p" (as either an integer or a symbolic expression)

• Think about what you will actually use the command for, and consider the options available to either display the output or send it to other commands for further processing:

-print        display pathname (default)
-exec         allows for the capability to process listed files ( {} expands to the current found file )

• Combine matching criteria (predicates) into complex expressions using the logical operations -o and -a (the default binding).

What is running on the system?

lsof
The "lsof" command allows the system security tester to list all open files, where "an open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, or a stream or network file". lsof is available from ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/lsof.tar.gz

ps
The "ps" command reports a snapshot of the current processes running on a Linux host. Some examples from the "ps" man page of one UNIX system are listed below.
To see every process on the system using standard syntax:
• ps -e
• ps -ef
• ps -eF
• ps -ely
To see every process on the system using BSD syntax:
• ps ax
• ps axu
To print a process tree:
• ps -ejH
• ps axjf
To get info about threads:
• ps -eLf
• ps axms
To get security info:
• ps -eo euser,ruser,suser,fuser,f,comm,label
• ps axZ
• ps -eM
To see every process running as root (real & effective ID) in user format:
• ps -U root -u root u
To see every process with a user-defined format:
• ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
• ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm

• ps -eo pid,tt,user,fname,tmout,f,wchan
To print only the process IDs of syslogd:
• ps -C syslogd -o pid=
To print only the name of PID 42:
• ps -p 42 -o comm=

top
The "top" command is distributed with many varieties of Linux and is also available from http://www.unixtop.org/. The top command provides continual reports about the state of the system, including a list of the top CPU-using processes. This command gives much of the information found in the Microsoft Windows Task Manager. The main functions of the program, as stated by its developers, are to:
• provide an accurate snapshot of the system and process state,
• not be one of the top processes itself, and
• be as portable as possible.

netstat
The "netstat" command lists all active connections as well as the ports where processes are listening for connections. The command "netstat -p -a --inet" (or the equivalent on other UNIXes) will print a listing of this information. Not all UNIX versions support the "-p" option for netstat; in those cases other tools may be used.

Installing and updating tools
RPM (Redhat Package Manager) is a commonly used tool designed to install and update packages on some Linux systems. To install a package, use the "-i" option; to uninstall, use "-e". At times, you will need to compile your own binaries from source code. To do this, you generally need to configure the build using the "./configure" script and then run the command "make".

Network Settings
Network interface settings can be displayed, updated and changed using the ifconfig command:

$ ifconfig

You will see your IP address, netmask, MAC address, and various other useful items. If you have one Ethernet card, you will see two interfaces: the local loopback interface with the address 127.0.0.1 and your Ethernet interface, called "eth0".

Shutting down the system
When you are done with Linux, the system should be shut down gracefully. You can do this from the GUI, but it is usually done from the command prompt. As root (you may need to su!), to gracefully shut down your system, type:

# shutdown -h now

The "-h" flag means "halt" the system; "now" means do it right away. You can also schedule the system to shut down at a later time using this command. You can use the "shutdown" or "reboot" command to reboot the machine. To reboot, just type:

# reboot

Section 5: Scanning Goals and Techniques

Outcome Statement:

Core Topics:
• Introduction
• Scanning Goals and Types
• Overall Scanning Tips
• Sniffing with tcpdump
• Scapy Packet Manipulation
• Scapy/tcpdump Exercise

Introduction
Now that you have finished the reconnaissance phase, it is time to begin actively engaging the targets identified in the scope outlined in your rules of engagement. The scanning phase is the next step, and it is critical to the success of your penetration test. A successful scan of the target organization will identify the hosts and services that can be used to gain access to the target. Scanning large networks is not without its challenges. Understanding how the protocols and scanners work enables penetration testers to make the most of the time they have within their rules of engagement. Utilities such as tcpdump and scapy are powerful tools in the hands of the trained penetration tester.

Scanning Goals and Types
During the scanning phase the penetration tester hopes to gain a better understanding of the target environment by sending traffic to it and measuring the responses. The penetration tester wants to understand:
• Network address ranges
• Firewalls
• IP addresses of live nodes
• Open ports and services
• Operating systems of target systems
• Vulnerabilities on target systems
To collect this information, penetration testers can use a variety of different scans and techniques. This includes Network Sweeps, where the penetration tester attempts to identify live hosts on the target network. It includes Network Tracing, where the tester attempts to identify the network topology of the target organization, such as the existence of firewalls, routers and subnets and how they are connected. Port Scanning allows the tester to identify accessible TCP and UDP ports. Any open ports discovered can be further probed to identify the running operating system with Operating System Fingerprinting. Next, the tester can use Version Scanning to identify the versions of protocols and services in use on the target systems. Finally, the tester uses Vulnerability Scanning to identify unpatched and misconfigured systems in the target environment.

Overall Scanning Tips
This section outlines several tips and techniques to make your scans more effective. One significant issue is the amount of time required to complete the scans. Internal network scans, and even some large external network scans, can take significantly more time to complete than the typical rules of engagement will permit. There are several ways to handle large scans. They include, but are not limited to, the following:

• Scan target networks by IP address and not by host names. This avoids problems that can be introduced by round-robin DNS.
• Scan a representative sample subset of target machines rather than all of them. Downside – "representative samples" are often not a true reflection of the actual environment, and vulnerabilities can easily be overlooked.
• Scan target ports for the most used services rather than all 65,536 ports. Downside – services configured to use non-standard ports, and less common services on their default ports, will be overlooked.
• Review the target environment's firewall rule set and only scan services that you know are open. Downside – rogue hosts and connections are often not reflected in the firewall rules, and misconfigurations and bugs in firewalls are overlooked using this method.
• Scan faster by reconfiguring firewall rules to enable faster scanning. Having the firewall send resets to the scanning machine, rather than forcing it to time out before moving on, will significantly speed up your scans. Downside – it requires modifications to the target environment.
• Scan faster by using faster network scanners such as ScanRand.

Sniffing with tcpdump
When scanning the target environment, the tester should capture all of the packets sent from the tester to the customer. tcpdump is ideal for capturing packets as we scan because it is fast, can display the packets to the screen or save them to disk, and has a flexible network filtering syntax. tcpdump supports the Berkeley Packet Filter (BPF) syntax for defining capture filters. BPF filter primitives include the following:
• Protocol primitives – ether, ip, ip6, arp, rarp, tcp, and udp.
• Type primitives – host, net, port, and portrange.
• Address primitives – src, dst.
Groups of primitives can be combined together with logical "and" and "or" statements. By combining these primitives, the penetration tester can narrow the scope of packets displayed on screen to only those of interest. Here are some examples of BPF filters and the packets that they display.

Display the hex and ASCII contents of traffic to port 21 with a destination IP address of 192.168.1.1:

tcpdump -nnX "tcp port 21 and dst 192.168.1.1"

Display all UDP packets to or from the IP address 192.168.1.1:

tcpdump -nn "udp and host 192.168.1.1"

Scapy Packet Manipulation
Scapy is a very powerful packet crafting, manipulation and analysis tool. Scapy is a set of Python modules that allows the user to create scapy-enabled Python scripts or to run scapy in "interactive" mode. Once you are in interactive mode, scapy provides several functions for getting more details about its capabilities. The ls() command can be used to get more information about scapy's supported protocols. If you run the ls() command with nothing in the parentheses, scapy will list all of its protocols. Putting one of these protocols inside the parentheses will list the fields associated with that protocol. Scapy also supports the lsc() command, which lists scapy's functions. You can get more information about an available function by passing its name to the help() function. For example, help(fuzz) will give you more information on the fuzz() function.

Crafting packets with scapy
Perhaps scapy's most common use is to craft packets. Packets are created by calling the method associated with a specific protocol and passing the fields of that protocol as parameters. Note that source and destination IP addresses are fields of the IP layer, not the TCP layer, so a bare TCP layer carries only the port fields. For example:

>>> newpacket=TCP(dport=80)

This will create a packet containing a TCP layer with a destination port of 80. Since they have not been explicitly defined, the IP and Ether (Ethernet) layers will be populated with the defaults associated with those protocols. You can explicitly define each layer in the protocol stack and add protocols together with the "/" character. The new layers can be the results of other scapy objects or other methods. For example, we can combine our existing "newpacket" variable with another layer like this:

>>> completepacket=IP(dst="192.168.1.2")/newpacket

Or we can explicitly define each layer using scapy protocol methods:

>>> Newudppacket=IP(dst="192.168.1.2")/UDP(dport=1000)/"THIS IS MY UDP PAYLOAD"

This will create a "Newudppacket" object containing a UDP packet to a destination IP address of 192.168.1.2 and a destination port of 1000, with a payload of "THIS IS MY UDP PAYLOAD". Here is another example:

>>> NewTCPPacket=Ether(src="ff:ff:ff:ff:ff:ff")/IP(dst="www.target.tgt")/TCP(dport=80)/"GET / HTTP/1.0\r\n\r\n"

This will create a "NewTCPPacket" object containing a TCP packet to port 80 on the destination host "www.target.tgt" with a payload of "GET / HTTP/1.0\r\n\r\n". This packet could be transmitted to a web server to request the default page of www.target.tgt after the TCP handshake has been completed.

Examining packets with scapy
Once you have created your packet, you have several ways to view the current values of its fields. You can simply enter the name of your new packet:

>>> NewTCPPacket

You can use the .summary() method to look at the most important fields in the packet:

>>> NewTCPPacket.summary()

You can see more detail with the show() method:

>>> NewTCPPacket.show()

You can also use the ls() command to look at all of the properties of the packet:

>>> ls(NewTCPPacket)

Your scapy packet objects are constructed as arrays, with each protocol making up another dimension of the array. The arrays can be addressed numerically or by the name of the protocol. Within each protocol, the fields are addressed by their field names. The inventory of field names is given by the ls() command. Sometimes fields have the same name on multiple levels; in those cases you must be explicit about which field you are addressing. For example, to access the destination MAC address field you can use either the layer number or the layer name, as follows:

>>> packet[0].dst

and

>>> packet[Ether].dst

will both return the destination Ethernet address. The destination IP address also has the field name .dst, so you would pull that information with either >>> packet[1].dst or packet[IP].dst. Another way to step into an encapsulated protocol is with the .payload attribute. .payload always refers to the protocol embedded in the current level. So, if our packet contains a TCP packet inside an IP packet inside an Ethernet packet, we can access the IP layer like this:

>>> packet.payload

We can access the TCP layer like this:

>>> packet.payload.payload

And we can access the destination port in the TCP layer like this:

>>> packet.payload.payload.dport

Addressing your packets
The destination and source IP addresses can be defined as a specific IP address, a network range, an unresolved DNS hostname, or a list of any of the above. For example:

>>> Packet=IP(dst="192.168.1.1")
>>> Packet=IP(dst="10.1.1.0/24")
>>> Packet=IP(dst="www.target.tgt")
>>> Packet=IP(dst=["www.target.tgt", "192.168.1.1", "10.1.1.0/24"])

Lists, which are encoded in square brackets and separated by commas, and ranges, which are two numbers in parentheses separated by a comma, can also be used when defining other fields such as TCP and UDP ports. For example:

>>> Packet=IP()/TCP(dport=(1024, 65535))
>>> Packet=IP()/TCP(dport=[22,80,443])
>>> Packet=TCP(dport=[22,80,443,(1024, 65535)])

Scapy will expand address and port ranges and resolve hostnames when they are used. If you want to see how they are going to be expanded, you can force scapy to expand them by using Python's "list comprehension". List comprehension has the general format [ expression for variable in input-list conditional-statement ]. We can expand our list of ranges by using a list comprehension with no conditional statement. For example:

>>> [ p for p in Packet ]

would resolve and expand all of the ranges and lists of destination addresses and ports into individual packets.

Sending your crafted packets
Once you have your packets crafted, you can transmit them with one of scapy's packet-sending functions:

send() – transmits packets at the IP layer (layer 3) and does not look for any response
sendp() – transmits packets at the Ethernet layer (layer 2) and does not look for any response
sr() – sends the packets at the IP layer and waits for responses
srp() – sends the packets at the Ethernet layer and waits for responses
sr1() – sends the packets at the IP layer and only captures the first response
srp1() – sends the packets at the Ethernet layer and only captures the first response

Most of these packet-sending functions support multiple options for packet transmission. A list of the options can be obtained with the ls() command, such as >>> ls(sr) or >>> ls(send). These options include the following:

retry: if positive, how many times to resend unanswered packets; if negative, how many times to retry when no more packets are answered
timeout: how much time to wait after the last packet has been sent
verbose: set the verbosity level
multi: whether to accept multiple answers for the same stimulus

filter: provide a BPF filter

Handling responses to sent packets
Responses to packet transmissions are returned in a Python data structure known as a tuple. A tuple is a fancy name for an array of fixed dimension. In the case of packet responses, two arrays are returned: the first array contains ANSWERED packets and the second array contains UNANSWERED packets. We can capture the data structures like this:

>>> answered, unanswered = sr(mypacket, timeout=60)

"answered" will contain an array of packets that were responses to "mypacket" being transmitted. "unanswered" will contain the portions of mypacket that were transmitted but did not receive any response. Any individual entry in the "answered" array can be further broken down into "transmitted packets" and "received packets". For example:

>>> transmitted, received = answered[0]

"transmitted" will contain what was sent to the remote host and "received" will contain the response.

Scapy Loops
To transmit packets repeatedly you can use scapy loops or normal Python loops. The srloop() function can be used to send scapy packet objects repeatedly to a host. srloop() will expand lists and ranges and resolve host names as it transmits the packets; as a result, srloop() can be used to do network and port scanning. Python for loops can also be used to repeatedly transmit packets. For example:

>>> for x in packets:
...     send(x)

Putting it all together as a Port Scanner
Now that you have an understanding of Python and scapy constructs, you can put them together to conduct various scans against in-scope targets. Here are some examples.

Conduct a network sweep for HTTP servers:

>>> packet=IP(dst="10.10.10.0/24")/TCP(dport=80,flags="S")
>>> ans,unans=sr(packet)

Conduct a port scan on a specific host:

>>> packet=IP(dst="10.10.10.50")/TCP(dport=(1,1024),flags="S")
>>> ans,unans=sr(packet)

Inspecting the "ans" variable will reveal how hosts responded to your SYN scans.

Reading and sniffing packets
Scapy can also sniff network packets off the wire, and read and write packet captures.

>>> Packets = sniff(iface="eth0", count=10000, filter="tcp port 80")

sniff() accepts the following options:

count: number of packets to capture; 0 means infinity
store: whether to store sniffed packets or discard them

prn: function to apply to each packet; if something is returned, it is displayed. Ex: prn = lambda x: x.summary()
lfilter: Python function applied to each packet to determine whether further action should be taken. Ex: lfilter = lambda x: x.haslayer(Padding)
offline: pcap file to read packets from, instead of sniffing them
timeout: stop sniffing after a given time (default: None)
L2socket: use the provided L2socket
opened_socket: provide an object ready to use .recv() on

>>> Packets = rdpcap("name of the packet capture.pcap")

rdpcap() reads in a pcap file; it accepts the name of the packet capture and, optionally, "count=" followed by the number of packets to read.

>>> wrpcap("name of the packet capture.pcap", packets)

wrpcap() writes a pcap file; it accepts the name of the file to create and the packets to write.

>>> wireshark(packets)

wireshark() launches Wireshark to display "packets".

Fuzzing
Scapy can also be used to fuzz network protocols. The fuzz() function will set any fields that are not explicitly defined to random values. To use the function, pass a packet as a parameter and fuzz() produces a modified packet. For example:

>>> Fuzzedpacket = fuzz(IP(dst="192.168.1.1"))

will produce a "Fuzzedpacket" object containing a packet with random data in the fields other than the destination IP address. Because the destination IP address is explicitly defined, fuzz() will not attempt to put random data in that field.

Scapy in Python Scripts
Up to this point we have used scapy interactively. Since scapy is a set of Python modules, you can use it in any Python script by importing the libraries into your script. Here is a sample Python script that uses scapy functions:

#!/usr/bin/python
from scapy.all import *
packet=IP(dst="10.10.10.50")/TCP(dport=(1,1024),flags="S")
ans,unans=sr(packet, timeout=600)

Scapy/tcpdump Exercise
Configure tcpdump to display all packets that include both your machine's IP address and the IP address of a host we are going to send packets to. Then scan the host with scapy using the default options. What are scapy's default options? Configure tcpdump to display the payload of ICMP packets in ASCII format. Use scapy to send ICMP Echo requests once per second with a payload that says "GPENSTUDYGUIDE" and capture the response. Observe the response in tcpdump and in scapy.

Use scapy to craft a LAND attack. Configure tcpdump to capture all traffic to and from the destination host and transmit the packet from scapy. Configure tcpdump to capture all traffic to and from a target subnet range. Use scapy to do an ICMP network sweep of the target network.

Key Questions 1. Besides network address ranges and firewalls, name 3 things you are trying to identify in the target environment during the scanning phase? 2. What are some tips for dealing with large network scans? 3. How does the firewall sending RESETS when a port is closed affect the speed of a scan? 4. Why should a penetration tester use TCPDUMP to monitor transmitted packets rather than WIRESHARK or other more complex sniffers? 5. What is the difference between the ls() and lsc() functions in scapy? 6. What scapy function is used to randomize elements of a packet that are not specifically defined? 7. What function can be used to send one packet at Layer 2 and capture one response? 8. What functions are used to read and write pcap files to and from disk? 9. What options can be passed to the sniff() function to capture DNS requests on interface eth0?

Section 6: Network Tracing, Scanning, and Nmap

Outcome Statement
In this section I will demonstrate an understanding of the fundamental concepts of network tracing, scanning, and the general use of Nmap.

Core Topics:
Network Tracing Using Traceroute
• Traceroute (Unix/Linux)
• Tracert (Windows)
Port Scanning Fundamentals
• TCP Port Scanning
• UDP Port Scanning
Nmap Port Scanning
• Nmap OS Fingerprinting
• Nmap Version Scanning

Network Tracing using Traceroute
Network tracing is performed in order to discover the paths that IP traffic takes as it traverses the network on its way to its destination. This is done in penetration testing in order to develop a network diagram of the target environment.
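For example, on a Unix/Linux host (the target hostname is a placeholder):

$ traceroute www.target.tgt

and on Windows:

C:\> tracert www.target.tgt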

About the Author

Craig S. Wright GSE GSM LLM MStat
Craig Wright (Charles Sturt University) is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and e-commerce law and a Masters degree in mathematical statistics from Newcastle, and he is working on his fourth IT-focused Masters degree (Masters in System Development) at Charles Sturt University, where he lectures on a Masters degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Managing Editor: Katarzyna Zwierowicz [email protected]
Associate Editors: Ewa Duranc, Patrycja Przybyłowicz, Ewa Dudzic
Betatesters & Proofreaders: Jeff Smith, Cleiton Alves, Hani Ragab, Karol Sitec, Dalibor Filipovic, Eric Geissinger, Amit Chugh, Ricardo Puga, Dan Dieterle, Gregory Chrysanthou, Abhiraj, Harish Chaudhary, Abhishek Kar, Gareth Watters, Eric De La Cruz Lugo, Barry Grumbine, Wayne Kearns, Steven Wierckx, Jakub Walczak
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic [email protected]
Art Director: Ireneusz Pogroszewski [email protected]


DTP: Ireneusz Pogroszewski
Production Director: Andrzej Kuca [email protected]

Publisher: Hakin9 Media Sp z o.o. SK
ul. Posępu 17A, 02-676 Warszawa
phone: 0048224273717
[email protected]
www.pentestmag.com

Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage. All trade marks presented in the magazine were used only for informative purposes. All rights to trade marks presented in the magazine are reserved by the companies which own them.

DISCLAIMER! The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.

