OWASP Penetration Testing Cookbook


This article is written in Chinese and English, and the content is exactly the same. To view the Chinese version, please scroll down.

Penetration Testing

Through accumulated testing experience, I have gradually built my own systematic testing methodology. This article is long: it is a methodology I distilled from the OWASP Web Security Testing Guide and nearly a year of penetration testing experience. It contains few technical details and focuses instead on sharing ideas about penetration testing in industry. In my view, beyond the necessary techniques, what matters more for a senior penetration tester is having their own test matrix. Unlike vulnerability research or web attacks, I believe the focus of penetration testing is improving test efficiency while ensuring high test coverage and verifying the security and independence of each component of the business system. I hope this helps colleagues who are new to penetration testing quickly establish their own testing framework.

Information collecting

How complete a test is depends on how complete the collected information is. This lesson sinks in deeper with every test. In my view, it is not enough to collect complete information; you also need the ability to integrate that information into a coherent picture of the system. During testing, every information point related to the functions under test must be covered.

This is less necessary when testing a business system, since you will receive an asset report listing all the objects to be covered. But in red-blue confrontation or hacking challenges, you need to focus on collecting things like code leaks and user names; sometimes you can also search for important documents, login interfaces, and so on. Use Google dork operators such as 'site:', 'intext:', and 'inurl:' to perform precise searches.

Web server fingerprint

Identifying the framework a website uses narrows the scope of our attack. Some web middleware may have directly exploitable vulnerabilities due to old versions. In addition, obtaining web server information and understanding its characteristics helps in the later stages of penetration, for example when testing file upload bypasses.

The most direct way to fingerprint a web server is to look at the 'Server' field in the response headers. However, most vendors try to hide the web server banner for security reasons. In that case, several techniques can help:

  1. HTTP header field ordering. Different web servers order response headers differently.
  2. Request a non-existent page or trigger an error, and observe the response.
  3. Identify via scanners, online tools, etc.
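The header-order heuristic above can be sketched as a small scoring function. The signature orders below are illustrative placeholders, not a real fingerprint database:

```python
# Sketch of the header-order heuristic: score each known server by how
# much of its typical response-header order is preserved in the target.
# The orders listed here are illustrative examples only.
KNOWN_ORDERS = {
    "Apache": ["Date", "Server", "Last-Modified", "ETag",
               "Accept-Ranges", "Content-Length"],
    "IIS": ["Content-Length", "Content-Type", "Last-Modified",
            "Accept-Ranges", "ETag", "Server"],
}

def guess_server(header_names):
    """Return the best-matching server name for an observed header order."""
    best, best_score = "unknown", 0
    for server, order in KNOWN_ORDERS.items():
        # positions of the observed headers inside this server's typical order
        idx = [order.index(h) for h in header_names if h in order]
        # count adjacent pairs that appear in the expected relative order
        score = sum(1 for a, b in zip(idx, idx[1:]) if a < b)
        if score > best_score:
            best, best_score = server, score
    return best
```

A real fingerprint database (as used by tools like httprint) would carry many more signals, but the scoring idea is the same.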

Web server metafile

robots.txt lists directories that are forbidden to crawlers. The testing guide only covers checking whether robots.txt exists, but I think there is more to discover here, and leaks of other sensitive files can be just as deadly. Packaged front-end source code left downloadable has happened many times, and downloading the code directly for audit is like catching the application without its clothes on. You can also look for cookie generation rules in front-end JS code; comments may contain hard-coded passwords; internal IPs, email addresses, accounts, and other information written into the code during the testing phase may also remain. These issues mostly exist in small companies/small projects; large-scale web applications generally do not have them. In addition, sitemap.xml, .DS_Store, crossdomain.xml, and other files can also expose sensitive information.

In penetration testing, these tasks are generally handed to a scanner. For websites that cannot be scanned, you can find a fuzz dictionary and test them specifically.
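A minimal version of such a special-purpose probe might look like this. The path list is a tiny illustrative fuzz dictionary, and the fetch callback is injectable so the logic can be exercised without a live target:

```python
import urllib.request

# Tiny illustrative fuzz dictionary; swap in a real wordlist in practice.
META_PATHS = ["robots.txt", ".DS_Store", "sitemap.xml",
              "crossdomain.xml", ".git/HEAD"]

def probe_metafiles(base_url, fetch=None):
    """Return the metafile paths the server answers with HTTP 200.

    `fetch(url) -> status_code or None` is injectable for offline testing.
    """
    def default_fetch(url):
        try:
            return urllib.request.urlopen(url, timeout=5).status
        except Exception:
            return None  # connection errors / 4xx / 5xx all count as "absent"
    fetch = fetch or default_fetch
    return [p for p in META_PATHS
            if fetch(f"{base_url.rstrip('/')}/{p}") == 200]
```

Only probe targets you are authorized to test; even a small wordlist generates noticeable request noise.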

Enumerate web server applications

Explore all the applications running on the web server as thoroughly as possible. Sometimes the same IP address is mapped to different web applications, and different domain names may also map to different web applications. I remember a Hack The Box machine that used virtual hosting: you had to edit the hosts file and bind another domain name to the IP to reach the vulnerable web application.

Under the same IP, requests may also be routed to different web applications according to the URL.

For this kind of hidden web application, if you cannot browse the directory, you can only hope a crawler/scanner finds it, or search Google with site:www.example.com.

For web servers with multiple open ports, use tools such as Nmap to scan all ports, in case the vendor has placed sensitive entry points on high ports, which are usually hard to find.
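A bare-bones TCP sweep in the spirit of Nmap (for quick checks only; Nmap remains the right tool for real scans) could look like this:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Usage might be `scan_ports("10.10.10.1", range(1, 65536))`, though a sequential sweep of all 65535 ports is slow; Nmap's SYN scan and parallelism are why you would use it instead.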

For web servers that use virtual hosts, you can find these hidden web applications by querying DNS records or doing reverse IP lookups; online tools are sufficient.

Identify application entry

According to this section of the testing guide, use Burp to intercept requests and test the parameters passed into the application. In practice, some parameters are framework parameters that appear in almost every request; their names may be symbols or various abbreviations. Test these frequently occurring parameters first. This saves a great deal of time over the whole test, clarifies what these parameters mean, and avoids the situation where, faced with many parameters, you do not know which one to test.

Map the execution path of the application

Facing a huge web application, it is difficult to achieve full coverage of the code base; you can only try to exercise as much code as possible. According to the testing guide, the ways to improve code test coverage fall into three categories: path, data flow, and race conditions.

From a black-box perspective, all we can see is the path in the URL. When you first get the application, it is best to open an Excel sheet or a structured notepad to record the paths that are prone to problems. Yesterday I discovered an unauthorized-access issue on an API interface during testing; I suspect it is a common pattern. But because I did not record the path at the start, it was painful to find it again. Another point: when recording a path/API, be sure to note the entry link, otherwise when a vulnerability is finally found you will have completely forgotten how you got there.

Identify web application framework

Just as with identifying the web server, once you know the application framework you can start from a library of known vulnerabilities and go through them first. Because of special business requirements, some issues cannot be fixed following the framework vendor's repair advice and are instead patched by the web application vendor itself; omissions are inevitable. If something feels wrong during the test, try to bypass the fix.

As for identifying web application frameworks: one way is experience. A seasoned tester who has tested many frameworks can often tell from a page or an error message. Sometimes vendors deliberately hide the framework's characteristics; in that case, online tools can help with identification.


  • Netcraft. Online tool that identifies basic information about the web server and page: "What's that site running?"

  • Nmap. Scan ports to check services and versions; consult the manual for the parameters as you use it.

  • Burp Suite. Nothing more to say; in 2.0, chaining it with sqlmap and Xray is very convenient.

  • WhatWeb. Identifies web applications: "WhatWeb - Next generation web scanner".

  • BlindElephant. Identifies versions by comparing checksums of static files that differ between releases, so accuracy is very high.

Configuration management test

For penetration attacks such as SRC hunting, configuration testing is rarely involved. But at work, the security of the configuration must be verified before the business goes live. Here we test, from the vendor's perspective, whether the configuration of the product to be launched is secure. A very important security principle is involved: security must not rest on a single layer. For example, suppose my web application has a SQL injection vulnerability, but the front end has WAF protection, so the attacker apparently cannot harm the application. In fact this is unsafe. Like a series circuit, where one broken bulb leaves the whole string dark, a single failed layer exposes everything behind it. The same idea carries from the web application level to the web server/network configuration level: you cannot rely solely on web application security to protect the server; the server itself must be configured correctly to maximize security.

Network and Infrastructure

First identify all components and make sure that neither the components nor the systems used to manage them have known vulnerabilities. Strictly control access to these components and maintain a list of the ports each application requires.

Testing the server itself is difficult, and automated tools or scripts are generally used. Be cautious with tools: they may cause server downtime/denial of service. The same applies to web testing. Automated tools produce both false negatives and false positives when fingerprinting the web server. False negatives occur because some administrators delete or obfuscate the version information to hide server details, so the tool cannot correctly detect the component version. False positives occur because the administrator has patched known vulnerabilities without updating the web server's reported version.

As testers, we usually scan the host with a scanner. The findings are mostly weak cipher suites, support for old protocol versions, and so on; false-negative vulnerabilities are hard to detect. Here again the idea that security must not rest on one layer applies: developers and operations staff should follow secure configuration rules for servers, operations staff should patch newly disclosed vulnerabilities promptly, and nobody should rely solely on penetration testing to find vulnerabilities.

Application platform configuration

Web applications may have leftover demo and test pages, or conveniences configured for the test environment, including but not limited to simultaneous logins with the same account and universal passwords/verification codes.

Black-box testers have no configuration guide, so testing is somewhat blind. Based on the summary in the OWASP testing guide and my own testing, I have summarized the following:

  • Send malformed parameters such as negative values and characters to check whether Debug mode is disabled.

  • Continue sending malformed parameters and request non-existent files, and confirm the returned pages contain no error details.

  • Check the logs: verify that all create, delete, and update operations are logged, and that the separation of the three roles (admin, auditor, user) is observed.

  • Middleware configuration files and website configuration files must not be accessible.

  • Look through all the odd functions in the admin panel.


The importance of log files is self-evident. The recording, management, and storage of logs in web applications must all be tested.

First, log files must not reveal sensitive information. Related to this is the encryption of that information; whether the encryption algorithm is sound must also be considered here.

Is the log accessible only to log auditors? Is the log undeletable? Who audits the audit log's own records? There is much to think about around the separation of the three roles.

Is the log stored on a log server? Is a maximum storage limit set for the logs, and what happens when it is reached?

Sensitive Documents

Incomplete cleanup of sensitive files when the application goes live after development, or after an update, causes sensitive file leaks: server configuration files, or even source code. When testing, rely on experience, on scanner output, on occasional finds in comments (though scanners now also scan comments for sensitive content), and on Google hacking. Sometimes, although sensitive files have been deleted, their former locations remain in the Google hacking database, from which similar files can be inferred.

HTTP method test

Other HTTP methods may cause security issues for web applications:

  • PUT allows attackers to upload files to the server; the classic example is the IIS PUT vulnerability.
  • DELETE allows an attacker to delete files on the server.
  • CONNECT allows an attacker to use the web server as a proxy.
  • TRACE was initially considered harmless, but was later found usable for Cross-Site Tracing (XST).

When testing, send an OPTIONS request to see which methods are supported, or try them one by one. Note that some frameworks allow HEAD in place of GET, which can bypass role-based access control. GET and POST should also both be tested for access-control bypasses.
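As a sketch, the Allow header returned by an OPTIONS request can be parsed and checked against the dangerous verbs listed above; the header string in the usage note is a hypothetical example:

```python
# Verbs flagged as risky per the list above.
DANGEROUS = {"PUT", "DELETE", "CONNECT", "TRACE"}

def risky_methods(allow_header):
    """Parse an OPTIONS `Allow:` header value and flag dangerous verbs."""
    methods = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return sorted(methods & DANGEROUS)
```

For example, `risky_methods("GET, POST, PUT, TRACE")` flags PUT and TRACE. Remember that a server may accept methods it does not advertise, so trying each verb directly remains the authoritative check.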

At present most vendors enforce HTTPS; test whether plain HTTP can still be used.

Identity Management Test

Identity management here does not refer specifically to rights management, but to the management-related tests involved in the user registration process.


Registration functions of well-known web applications can be roughly divided into several types:

  • Back-office registration; the registration interface is not public
  • Invitation-code registration, such as Hack The Box
  • Mobile number/email registration
  • Account-plus-password registration

For penetration testing, the first type, back-office registration, is not within our testing scope, or at least not the scope discussed in this section. The weak passwords and leftover test credentials involved in that kind of registration can be covered in the later authentication tests.

With invitation-code registration, the main issue is the security of the invitation code. Can it be guessed? Can it be reused? What is the process for obtaining it? If the invitation code carries the permissions the registered user will have, the risks are obvious; if it only verifies that the user is eligible to register and does not determine user permissions, its security should be discussed together with the two types below.

Registration by mobile phone number, email, or even social platform is the most popular approach today. Unlike the previous two, this kind of registration is open to anyone. Testing mainly focuses on the following:

  • Can the same user/identity register multiple times?
  • Can users with multiple permission levels be registered?
  • Is the entered email address/mobile number actually verified?

As for the last type, account-plus-password registration, it is similar to the third, except that accounts are either issued by the system or entered by the user. System-issued accounts must be random and unpredictable (depending on the situation), and user-entered accounts must be checked for duplication and conformance to a unified format.

Account enumeration

Usually, when testing for horizontal privilege escalation, we collect other users' identity identifiers, such as UIDs. For web applications, these sensitive parameters used to identify users must, in principle, be hard to enumerate. For example, WeChat user IDs are complex and hard to enumerate; those interested can look into it.

As a tester, you can try the following methods to collect user IDs:

  • Web application responses. When sending an HTTP request, change the UID and observe whether the response is consistent, or whether it reports that the user does not exist. Differences may also show up as errors or as 404/403/200 status codes.
  • Collection from URIs. Some web applications use routing, so IDs are visible in the URI. Look for places where other users appear, such as friend lists, follow/fan lists, and comment sections, and collect from there.
  • Rule inference. Register multiple accounts to infer how usernames are generated; timestamps and registration information are usually involved.
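The first bullet, diffing the responses for an existing versus a non-existing user, can be sketched as a comparison of status, length, and body:

```python
def enumeration_signals(resp_known, resp_unknown):
    """Compare responses for an existing and a non-existing user.

    Each response is a (status_code, body) pair. Any observable
    difference is a potential user-enumeration oracle.
    """
    status_a, body_a = resp_known
    status_b, body_b = resp_unknown
    signals = []
    if status_a != status_b:
        signals.append("status")
    if len(body_a) != len(body_b):
        signals.append("length")
    if body_a != body_b:
        signals.append("body")
    return signals
```

In practice response timing is a fourth signal worth measuring; it is omitted here to keep the sketch deterministic.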

Authentication test

Authentication means confirming that the identity of a person or entity is genuine and credible. In network security, authentication is the process of verifying the digital identity of the communication initiator; login is the simplest example. For security personnel, testing authentication means understanding the authentication scheme and testing whether vulnerabilities or strategies can be used to bypass it.

Transmission test

Today, every web application launched by a large vendor should enforce HTTPS. In black-box testing we mainly focus on the data transmission process: for example, when the username and password are entered on the login page, are they forced to be sent over HTTPS in a POST request?

First, the request must use HTTPS, to prevent the MITM (man-in-the-middle) attacks so often seen in textbooks. Nobody wants to sit in a coffee shop, be ARP-spoofed by the hacker at the next table, visit an HTTP website, and have all their traffic laid bare on the hacker's machine. This precaution really should not be underestimated: once you leave home or the office, nobody pins the gateway to a specific address, and ARP spoofing is always hard to prevent.

Second, why use POST instead of GET? Although TLS sits below the application layer and encrypts the data in a GET request too, GET URLs are routinely written to log files, and access logs are generally stored in plain text, which increases the risk of sensitive information leaking.

Finally, verify that the referring page is also served over HTTPS; otherwise an SSL-stripping attack is possible.

Account password test

First is the password strength test. A typical web application's registration function requires more than 8 characters combining digits + letters + at least one uppercase letter + special characters. For black-box testing, check the password rules wherever the account password is created.
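A minimal policy check for the rule sketched above (assumed policy: more than 8 characters, with digits, lower- and uppercase letters, and special characters) might look like:

```python
import re

# Assumed policy rules, matching the example requirement in the text.
RULES = {
    "length": lambda p: len(p) > 8,
    "digit": lambda p: re.search(r"\d", p) is not None,
    "lower": lambda p: re.search(r"[a-z]", p) is not None,
    "upper": lambda p: re.search(r"[A-Z]", p) is not None,
    "special": lambda p: re.search(r"[^A-Za-z0-9]", p) is not None,
}

def failed_rules(password):
    """Return the names of the policy rules the password violates."""
    return [name for name, check in RULES.items() if not check(password)]
```

When testing, submit a password that violates exactly one rule at a time; a registration form that accepts it while this check rejects it reveals a gap between the stated and enforced policy.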

Next is weak-password testing. During development, developers often log in with an easy-to-type password, and some web applications that require a captcha at login also leave a universal captcha for convenience. Our test aims to find these possible weak credentials. In addition: do new users created by the administrator get a default password? Is a password change forced on first login?

Another angle is locking: for example, how long is the account locked after too many wrong password attempts? The specific policy should depend on the sensitivity of the web application, but there is no doubt such a mechanism should exist to prevent passwords from being brute-forced.
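Probing for the lockout threshold can be sketched as below; the login call is injected as a callback so the logic can run against a stub rather than a real target:

```python
def find_lockout_threshold(attempt_login, max_tries=20):
    """Call `attempt_login(password)` with wrong passwords until the
    response indicates a lockout.

    Returns the attempt number at which the account locked, or None if
    no lockout was observed within `max_tries` attempts (a finding in
    itself). The callback is expected to return "locked" or "wrong";
    in a real test it would classify the HTTP response instead.
    """
    for i in range(1, max_tries + 1):
        if attempt_login(f"wrong-password-{i}") == "locked":
            return i
    return None
```

Only run this against a throwaway test account: the whole point of the mechanism is that it will lock the account you use.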

Authentication bypass

Here we only discuss bypassing the authentication function itself; issues such as Broken Access Control are not discussed here. The most common login-box bypass is the SQL universal-password injection. In ancient times there were also post-login display pages with no login check at all, reachable just by typing the URL into the browser. There are still applications that signal login state through a parameter with a name like "authorized"; some developers are clever enough to rename these fields or move them into the cookie, thinking they will be hard to discover. In practice, they break as soon as you poke them.

Authorization test

This "authorization test" does not mean that your test is authorized; it means testing authorization itself. I actually think "what-you-can-do test" would be a better name, but the OWASP guide uses this one, so I have kept it.

Directory traversal/file inclusion

To find such test points during black-box testing, the main concerns are:

  • Whether there are request parameters that perform file operations
  • Whether there are unusual file extensions
  • Whether there are interesting variable names
  • Whether the web application can be shown to generate page content dynamically from cookies

In CTF challenges we mostly see directory traversal, or the file inclusion needed for the final RCE. In real testing these are easy wins, because simple vulnerabilities rarely survive long on the public network. So while watching URLs, also watch API interfaces: front-end/back-end interaction now commonly uses JSON, XML, and so on, and parameters in this data may also be file identifiers. Paying extra attention to these places sometimes brings unexpected gains. This resembles the last bullet above about cookies driving dynamic page generation; essentially, a file identifier is written into the cookie.

Of course, there are techniques involved: you need to be familiar with the web environment and construct different payloads for different servers/middleware. There is another interesting vulnerability here called relative path overwrite (RPO); my classmate Tianshi wrote an article explaining it very clearly.

Exploration of RPO attack methods-FreeBuf network security industry portal

A WeChat mini-program I tested a few days ago stored uploaded pictures on a dedicated file server and returned the storage address in the response. Following up, I found a directory traversal. Directory traversal is mostly about carefulness rather than technique: look closely enough and you will find it.

Authorization bypass

Can functions still be used after the account logs out? If simultaneous logins are supported, can one session keep using a function after the other has logged out? Are you redirected when denied access to a high-privilege page? Does the front end leak information? Can the API be used without authorization?

On the last point, API problems are in fact plentiful, and fixing them requires good development practice. Two weeks ago I tested a system and reported an unauthorized-API issue in the first round. On retest, only the reported API had been fixed; other APIs were still unauthorized. When such problems are found, it is best to audit the APIs together with the developers and apply one unified set of authentication logic.

There are many ways to bypass permission-checking logic: the most basic is replacing IDs or parameters, and you can also switch the HTTP method from POST to GET.

There are also logic problems. For example, a change-password/registration flow is divided into pages 1 to 5, but you can jump straight to page 4 by entering it in the address bar, without going through page 3.

Permission escalation

Under the principle of three-role separation, a business system is divided into administrators, auditors, and users; unauthenticated users can also be counted as a role. When testing, distinguish all the roles in the given business system and sort out the functions each can use. For example, every company has a customer service hotline. These calls are not answered on someone's mobile phone; they go through a customer service telephone system and are distributed among the salespeople. Can you imagine what roles such a business system has? The administrator and log auditor from the three-role model must exist, and the user role splits in this scenario. First, the front-desk operators: they should not be able to view a user's personal data or information about the equipment the user is asking about; they only listen to the user's need and transfer the call to the right salesperson, so they should be able to view the extension numbers and details of all salespeople. A specific salesperson who takes a transferred call should be able to view the customer's information and the business equipment their own department is responsible for, but not other departments'.

Another example: how many roles should an SRC platform have? Again start from three-role separation, with users split into two kinds: enterprises and white hats. White hats can submit vulnerabilities to the corresponding companies but can only view the vulnerabilities they submitted themselves, plus some information about other white hats. Companies can view all vulnerabilities related to themselves, but not those of other companies. In addition there are unauthenticated users, who cannot view white-hat information but can browse part of the content, and so on.

And so on: the core idea is to clarify each system's roles and functional boundaries, and then test.

During testing, note that "not visible" does not mean "not accessible without authorization". You may be able to reach a page meant only for high-privilege users by entering its URL directly. High-privilege operations on high-privilege pages, such as refreshing, editing, and liking, may also leak information through unauthorized access, so be careful.

Session management test

The core component of any web application is the mechanism used to control and maintain the state of interaction between the website and its users. This mechanism is session management; intuitively speaking, cookies and sessions. In a penetration test, obtaining the cookie is equivalent to obtaining the account. High-security systems now avoid this: being able to log in by swapping in someone else's cookie is itself a vulnerability, a form of unauthorized session use. However, in the vast majority of small and medium public-network systems, logging in by replacing the cookie still works widely, so cookie security is very important.

Session framework bypass

Information such as user ID, permission flags, and tokens is stored in the cookie, usually encrypted by some algorithm. First, look for leaks of the cookie-generation logic in the front-end JS code; a scan will often find it. Failing that, you can try collecting a large number of cookies to find a pattern for a brute-force attack, but this method is strongly discouraged: the efficiency is far too low, and in day-to-day penetration testing I consider it completely unnecessary.

There is another line of thinking. Although every transmitted packet carries the cookie, cookie verification is oddly implemented in some places, and deleting the cookie may also lead to a bypass.

Cookies in packets must be transmitted encrypted; unencrypted, their security is close to zero. The special case is a random user ID of more than 8 digits stored in the cookie, with the back end authenticating by that ID.

Chrome has a plug-in called EditThisCookie (https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) for viewing cookie attributes. Taking CSDN as an example, the value of each key in the cookie can be modified at the top, and below you can check whether several key attributes of the cookie have been set. The cookie shown here is from a logged-out session.

The Secure attribute tells browsers to attach the cookie only when the request travels over HTTPS, which prevents the cookie from being transmitted in plain text. The HttpOnly attribute prevents XSS from reading the cookie: the client is not allowed to manipulate it through scripting languages such as JS. The Domain attribute should be set to the server that needs to receive the cookie; note that it should be scoped to a specific server, not an entire second- or third-level domain. The Expires attribute manages the cookie's lifetime and should be set to a reasonable range.
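Checking Secure/HttpOnly on a captured Set-Cookie header can be automated with the standard library; a minimal sketch:

```python
from http.cookies import SimpleCookie

def audit_cookie(set_cookie_header):
    """Return the security attributes missing from a Set-Cookie header value."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    missing = []
    for name, morsel in cookie.items():
        # Morsel flag attributes are falsy ("") when absent, True when set
        if not morsel["secure"]:
            missing.append(f"{name}: Secure")
        if not morsel["httponly"]:
            missing.append(f"{name}: HttpOnly")
    return missing
```

Domain scoping and Expires need target-specific judgment, so they are left to the tester rather than automated here.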

Session fixation test

When the application does not issue a new session cookie after the user successfully authenticates, an attacker may exploit a session fixation vulnerability: force the victim to log in using a cookie the attacker already knows. Since the cookie does not change on successful login, the victim's authenticated session is effectively leaked to the attacker. During testing, observe whether the cookie changes before and after a successful login.
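The before/after observation can be wrapped in a tiny check; the two callbacks are placeholders for whatever client drives the session:

```python
def check_session_fixation(get_session_cookie, do_login):
    """Return True if the session token survives authentication (vulnerable).

    `get_session_cookie()` returns the current session token and
    `do_login()` authenticates within the same session; both are
    injected so the check can run against any client or a stub.
    """
    before = get_session_cookie()
    do_login()
    # a safe application reissues the token on successful login
    return get_session_cookie() == before
```

This only covers the cookie-rotation symptom; whether an attacker can actually plant a cookie in the victim's browser (e.g. via a subdomain or header injection) is a separate question.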


CSRF vulnerabilities are also caused by improper session management. I will not explain the CSRF principle here; look it up if needed. In real hunting, focus on whether CSRF exists at core functions: change password, reset password, transfer, purchase, delete, add, and so on. This is a very tedious process. You cannot just check whether the packet carries a Referer field, because some applications are developed by teams, different programmers own different functional modules, and their levels of secure development differ: some programmers never check the Referer field transmitted in the packet.

Here is my experience. In a penetration test, whenever I start on a new function/module, I first try SQL injection/XSS. Programmers with a high level of secure development use parameterized SQL queries and front-end escaping against XSS. For a module written by such a programmer, I verify one or two potential CSRF points; if the Referer is validated, I assume the module has no CSRF. For modules that use weaker defenses, such as SQL keyword filtering or XSS blacklists, I test every point that could carry a CSRF vulnerability. This heuristic does not improve the accuracy of CSRF detection, but it greatly improves the efficiency of the whole penetration test.

CSRF exists in every GET/POST request, and JSON-format data can also carry CSRF. Here is another article I wrote: [CSRF Attack in JSON Context](https://ama666.cn/2021/02/08/JSON%E6%83%85%E6%99%AF%E4%B8%8B%E7%9A%84CSRF%E6%94%BB%E5%87%BB/#more). Use Burp Suite's integrated CSRF PoC Generator to quickly generate a test PoC.
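For cases where Burp is not at hand, a minimal auto-submitting POST form, the kind of PoC Burp's generator emits, can be built by hand; the URL and parameters in the usage example are hypothetical:

```python
import html

def csrf_poc(action_url, params):
    """Build a minimal auto-submitting HTML form for a POST CSRF test."""
    inputs = "\n".join(
        f'  <input type="hidden" name="{html.escape(k)}" value="{html.escape(v)}">'
        for k, v in params.items()
    )
    return (
        f'<form id="poc" action="{html.escape(action_url)}" method="POST">\n'
        f"{inputs}\n"
        "</form>\n"
        "<script>document.getElementById('poc').submit();</script>"
    )
```

Serve the generated page from a different origin while logged in as the victim, e.g. `csrf_poc("https://target.test/changepw", {"new_password": "pwned"})`; if the action succeeds, the endpoint lacks CSRF protection. JSON-body CSRF needs a different trick (form enctype abuse), as discussed in the article linked above.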

Session fixation test

When the application does not issue a new session cookie after a user authenticates successfully, it is vulnerable to session fixation: an attacker can force the victim to log in with a cookie value the attacker already knows. Because the cookie does not change on a successful login, the victim's authenticated session is effectively handed to the attacker. During testing, observe whether the cookie changes before and after a successful login.


CSRF vulnerabilities are also caused by improper session management. I will not repeat the principle of CSRF here; look it up if you are unfamiliar with it. In actual hunting, we should focus on whether CSRF exists at the core functions: changing a password, resetting a password, transferring money, purchasing, deleting, adding, and so on. This is a tedious process, and you cannot judge merely by whether a Referer field is present in the request, because many applications are developed by teams where different programmers own different modules and their secure-development skills vary. Some programmers never check the Referer field at all.

Here is my experience. During a penetration test, every time I start on a new function/module I first try SQL injection and XSS. Programmers with a relatively high level of secure development will use parameterized SQL queries and front-end escaping against XSS. For a module written by such a programmer, I verify one or two potential CSRF points; if the Referer is validated there, I assume the module has no CSRF. For modules that rely on weaker defenses such as SQL keyword filtering or XSS blacklists, I test every possible CSRF point as thoroughly as I can. This heuristic does not improve the accuracy of CSRF detection, but it greatly improves the efficiency of the overall test.

CSRF can exist in any GET/POST request, and JSON-format data can carry CSRF as well; see another article of mine, [CSRF Attack in JSON Context](https://ama666.cn/2021/02/08/JSON%E6%83%85%E6%99%AF%E4%B8%8B%E7%9A%84CSRF%E6%94%BB%E5%87%BB/#more). BurpSuite's integrated CSRF PoC Generator can quickly produce test PoCs.
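A core-function CSRF check ultimately needs a PoC page to open in a logged-in victim browser. Here is a minimal sketch of the kind of auto-submitting page such a generator produces; the endpoint URL and parameter names are hypothetical placeholders, not a real target.

```python
def make_csrf_poc(action_url, params):
    """Build a minimal auto-submitting CSRF proof-of-concept page,
    similar in spirit to what BurpSuite's CSRF PoC Generator emits."""
    fields = "\n".join(
        '    <input type="hidden" name="%s" value="%s">' % (k, v)
        for k, v in params.items()
    )
    return (
        '<html>\n'
        '  <body onload="document.forms[0].submit()">\n'
        '    <form action="%s" method="POST">\n%s\n    </form>\n'
        '  </body>\n'
        '</html>' % (action_url, fields)
    )

# Hypothetical endpoint and parameter names, for illustration only.
poc = make_csrf_poc("http://victim.example/password/change",
                    {"new_password": "attacker123"})
print(poc)
```

Host the generated page anywhere and open it while logged in to the target; if the state change goes through, the endpoint lacks CSRF protection.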

Input verification test

The first sentence I heard during the process of learning penetration testing was “All input is unsafe”. This sentence is very classic, and it is our mindset during the penetration process. SQL injection, XSS, file upload, RCE, etc. are all derived from user input. This section mainly summarizes how to quickly and accurately find vulnerabilities in the face of various input-receiving functions.


XSS is probably the most common vulnerability in penetration testing, and there is nothing unusual about the testing methods: if the front end escapes output, move on; if it filters, just fuzz it. What matters more, I think, is coverage: try XSS anywhere you get a chance to write HTML, not only the comment section but also the names of uploaded images/files and all sorts of unexpected places. The most peculiar XSS I have seen was in a scheduling bulletin board. On the bulletin-editing page, the edit request contained a key called class, which presumably meant category. I put an XSS payload in its value; after sending it, nothing fired on that page, and I could not even find where the value was echoed. But when I stepped back to the page listing all bulletin titles, the XSS fired. Checking the source with F12 showed that the class value had been inserted into that page's HTML.

For more automated XSS testing, chaining BurpSuite with Xray is recommended; the passive scanner developed by P Niu is genuinely easy to use. The downside is that it injects hundreds of XSS payloads, which can make later functional testing a little messy.
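Manual probing boils down to one question: does an injected marker come back unescaped? A small sketch of that reflection check, with the network plumbing omitted (`body` stands in for a fetched page):

```python
import html

# Inject a unique marker, then see whether the response echoes it back
# raw or HTML-escaped. The marker mixes a quote and a tag so both
# attribute and element contexts are covered.
MARKER = 'x7z"<svg/onload=alert(1)>'

def reflected_unescaped(body: str, marker: str = MARKER) -> bool:
    """True if the marker survives in the page without escaping."""
    return marker in body and html.escape(marker) not in body

# A vulnerable page inserts the value verbatim into HTML...
vulnerable = f'<span class="{MARKER}">item</span>'
# ...while a safe page escapes it first.
safe = f'<span class="{html.escape(MARKER)}">item</span>'

print(reflected_unescaped(vulnerable))  # True
print(reflected_unescaped(safe))        # False
```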

HTTP method tampering

Of the HTTP/1.1 request methods listed below, everything except the first entry should be disabled wherever possible.

  • GET, POST: the methods normal business functions use
  • OPTIONS: queries the supported HTTP methods; executing it against different directories may yield different results
  • PUT: can upload files to the server
  • DELETE: can delete files on the server
  • TRACE: can cross firewalls and proxies; intended for loopback diagnostics
  • CONNECT: reserved by the HTTP/1.1 protocol to turn the connection into a tunnel through a proxy server, usually for communication between an SSL-encrypted server and an unencrypted HTTP proxy

In most cases dangerous HTTP methods can be found directly by a scanner, whose coverage is certainly wider than a human's. But when a business system requires login to reach certain paths and the scanner does not support authenticated scanning, you have to test the directories reachable after login by hand.
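For those login-only directories, the manual check is just an OPTIONS request plus a look at the Allow header. A small sketch of the triage step, using a sample Allow value rather than a live request:

```python
# Verbs that should normally be disabled on production web servers.
DANGEROUS = {"PUT", "DELETE", "TRACE", "CONNECT"}

def risky_methods(allow_header: str) -> set:
    """Return the dangerous verbs advertised in an OPTIONS response's
    Allow header. In practice, read the header via http.client or
    BurpSuite Repeater after logging in."""
    advertised = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return advertised & DANGEROUS

# Sample Allow header as a server might return it:
print(risky_methods("GET, POST, OPTIONS, PUT, TRACE"))
```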

HTTP parameter pollution

Using the same name for multiple HTTP parameters may cause the application to behave in unpredictable ways. This vulnerability once affected ModSecurity's core rule set for SQL injection. The ModSecurity filter correctly blocks the string select 1,2,3 from table, so when that string appears in the URL of a GET request that queries the database, it is filtered. But an attacker can feed the filter multiple inputs with the same name: a URL such as http://domain/?query=select 1&query=2,3 from table does not trigger the filter, yet at the application layer the pieces can be recombined into the complete SQL query.

Different languages and middleware handle same-named parameters differently: some join the values with a special symbol such as a comma, some take only the first occurrence, and some take only the last.
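The behavior is easy to see with Python's standard query-string parser, which keeps every occurrence; a back end that joins the list with commas reassembles the exact string the filter never saw in one piece:

```python
from urllib.parse import parse_qs

# The polluted query string from the ModSecurity example above.
qs = "query=select 1&query=2,3 from table"

# parse_qs keeps every occurrence of a repeated parameter as a list.
values = parse_qs(qs)["query"]
print(values)            # ['select 1', '2,3 from table']

# A back end that comma-joins repeated values rebuilds the full query:
print(",".join(values))  # select 1,2,3 from table
```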

SQL injection

I believe everyone is familiar with SQL injection. It is usually the first vulnerability people learn in security, and one of the most frequently discussed. It was rampant in earlier years; with improved security awareness the number of SQL injection vulnerabilities is declining, but they still exist. I will not elaborate on the principle or go into specific testing and bypass techniques; what you can find online is more than enough. This section only covers experience in locating SQL injection and ways to improve efficiency.

When you get a business system, first become familiar with how it passes parameters. Just as a code audit starts with the installation files and routing, a black-box test cannot see the code, but it can see parameters that appear in almost every request, or in most of them. Try SQL injection against these parameters first, so you can skip the repeated ones later. Then hunt for the inconspicuous parameter points that interact with the database, and do not let a single request go unexamined.

It is recommended to use a BurpSuite plug-in to chain SQLmap: sending a request straight to SQLmap and running it there is very efficient. The name of the plug-in is ``.

Also, second-order injection is a point that is easily overlooked; it must be analyzed in combination with the specific business system.

If a SQL injection exists, the database user has file-write permission, and single quotes are not escaped, you can write files with select * from table into outfile '/tmp/file'. As an additional technique this can capture a query result or plant a file: in the web server's directory, execute

1 limit 1 into outfile '/var/www/root/test.jsp' FIELDS ENCLOSED BY '//' LINES TERMINATED BY '\n<%jsp code here%>';

This creates, with the MySQL user's permissions, a file containing the following:

//field value//
<%jsp code here%>

load_file is a function used to read a local file. If the user has file read permission, this function can be used to read the file.

As for single quotes, MySQL offers standard ways to avoid them. If you want the rows whose Password field matches password like 'A%', you can use the hex form password like 0x4125, or password like char(65,37).
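A quick sanity check that these quote-free encodings denote the same string 'A%':

```python
# 0x4125 is the byte pair 0x41 ('A') and 0x25 ('%'); char(65,37)
# builds the same two characters from their ASCII codes.
hex_form = bytes.fromhex("4125").decode("ascii")
char_form = chr(65) + chr(37)
print(hex_form, char_form)  # A% A%
```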

LDAP test

A long time ago I reproduced an LDAP injection vulnerability, CVE-2017-14596. At the time it was an assignment for the Red Sun Security Code Audit Team, so I never posted it on the blog, and unfortunately the original write-up can no longer be found. LDAP is the Lightweight Directory Access Protocol; authentication in a Windows domain is one application of LDAP. LDAP queries have their own distinctive syntax, which newcomers to penetration testing may not know well, so I will introduce it briefly here.

Many abbreviations are used in the LDAP protocol

  • dn (Distinguished Name): the full path of an entry; compared to SQL it is like a query statement, identifying a position in the LDAP tree
  • dc (Domain Component): the zone an entry belongs to, i.e. the domain-name part
  • ou (Organizational Unit): the organization an entry belongs to
  • cn (Common Name): the user name or server name

Suppose a web application uses a search filter

searchfilter="(cn=" + user +")"

Seen from the URL, the parameter is passed something like this (hostname illustrative):

http://www.example.com/ldapsearch?user=John

If instead of a user name we put a * after user, the query code becomes

searchfilter="(cn=*)"

The * acts as a wildcard, so the attributes of all users (or some subset of them) are returned, disrupting the application's intended execution flow.

I have personally found only two LDAP injections, both discovered during user authentication. Characters such as (, \, |, &, and * can reveal LDAP injection. I recommend BurpSuite's Intruder: add the LDAP injection test characters ahead of your SQL injection payloads, fuzz, and the results fall out.
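The vulnerable pattern above is just naive string concatenation; this small sketch shows why a bare `*` changes the query's meaning:

```python
def search_filter(user: str) -> str:
    """Naive concatenation, exactly the vulnerable pattern above."""
    return "(cn=" + user + ")"

print(search_filter("John"))  # (cn=John)  matches one specific entry
print(search_filter("*"))     # (cn=*)     wildcard: every entry matches
```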


XXE test

XXE requires the server to actually parse the submitted XML. During testing you often see XML mentioned in the response's Content-Type/Accept fields, but merely containing XML does not mean entities get parsed; this has to be verified manually. XXE also turns up fairly often at file-upload points.

When testing XXE, it is best to use a host on the same network segment as the target to receive out-of-band data, because most XXE is blind: you confirm it by watching your web server's log for the out-of-band request. If the development network is isolated from the Internet, an external VPS cannot receive the out-of-band data.
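For reference, a classic external-entity probe looks like the payload below. The entity name and file path are illustrative; for blind XXE, point SYSTEM at a URL on a host you control and watch its access log.

```python
# Classic file-read XXE probe. A parser with external entities enabled
# will inline /etc/passwd where &xxe; appears.
xxe_payload = (
    '<?xml version="1.0"?>\n'
    '<!DOCTYPE data [\n'
    '  <!ENTITY xxe SYSTEM "file:///etc/passwd">\n'
    ']>\n'
    '<data>&xxe;</data>'
)
print(xxe_payload)
```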

Code injection

Some web pages allow users to enter code that the server then executes. In code injection testing, the tester submits input that the web server evaluates as dynamic code or pulls in as an include file. Code injection as used here covers the oft-mentioned command injection, which is frequently found in web back ends. And not only web applications: code injection features also appear regularly in the management pages of host devices. Unlike the web case, on host devices it is more about injecting commands into the underlying shell, because such devices often have no database and instead use the command line for authentication or code execution.

Finding code execution in web applications is mostly about sniffing out the points where commands might be executed. Here I divide web applications into two broad categories. The first is large open-source CMSs and frameworks such as ThinkPHP, WordPress, and Laravel. Most of these have been through code audits and vulnerability hunting by countless security researchers, and without solid skills it is hard to dig new code injection out of them. If you encounter a secondary development built on such a CMS at work, focus on the features added on top of the native CMS and on whether the native filter functions are called correctly; from a black-box perspective, test the function pages that differ from the original CMS. The second category is self-developed web applications, where the likelihood of code injection is far greater than in the first. Infer the back-end logic from the front end, black-box style, and locate the points where code injection may lurk. You have to actively look in this kind of testing, because you never know how idiosyncratic (no offense intended) the developers' ideas are.

I think the most efficient way to test for code injection is to fuzz it together with SQL injection and LDAP injection, adding payloads such as sleep commands. I have to say, fuzzing always brings surprises.
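The sleep trick can be framed as a simple timing comparison. A sketch under stated assumptions: `send` is a stand-in for your real HTTP request function, and `fake_send` fakes a vulnerable back end so the logic can be demonstrated offline.

```python
import time

def looks_time_injectable(send, payload, delay, margin=0.5):
    """Compare a baseline request against a sleep payload; a response
    slowed by roughly `delay` seconds hints the input reaches an
    interpreter."""
    t0 = time.monotonic()
    send("harmless")
    baseline = time.monotonic() - t0
    t1 = time.monotonic()
    send(payload)
    probed = time.monotonic() - t1
    return (probed - baseline) >= (delay - margin)

# Toy back end that "executes" an injected sleep, for demonstration:
def fake_send(value):
    if "sleep" in value:
        time.sleep(0.3)

print(looks_time_injectable(fake_send, "; sleep 0.3 #", delay=0.3, margin=0.1))
```

In real use, repeat with several delay values to rule out network jitter before calling it a finding.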

Buffer overflow

Strictly speaking, buffer overflows also count as injection vulnerabilities, but I find them hard to detect by manual testing; perhaps that opinion only reflects the limits of my own ability. In a modern SDLC, white-box code auditing comes before black-box testing, and buffer overflows should be strangled during the audit. In black-box testing, buffer overflow vulnerabilities can still be found by scanners, and the same goes for DoS.

File Upload

File upload vulnerabilities are very easy to locate, because the upload function only ever appears in a few fixed places: avatar upload, profile upload, data-sheet upload, and so on. But in my experience the definition of file upload should be broader. Some logs are writable, so if a log file's path is exposed there is a risk of exploitation. Some project-management applications can create work orders; if each work order is stored as a separate file, that too can be classified as file upload. And some languages, CSS for example, do not require the whole file to conform to the syntax; a valid fragment is enough.
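With that broader definition it still pays to enumerate upload filenames systematically rather than ad hoc. A small sketch generating common bypass variants for a hypothetical PHP target; this is an illustrative sample, far from an exhaustive bypass list.

```python
def upload_candidates(base, payload_ext, decoy_ext):
    """Generate filename variants commonly tried against upload filters."""
    return [
        f"{base}.{payload_ext}",                # straight upload
        f"{base}.{payload_ext.upper()}",        # case tricks
        f"{base}.{payload_ext}.{decoy_ext}",    # double extension
        f"{base}.{decoy_ext}.{payload_ext}",    # reversed double extension
        f"{base}.{payload_ext}%00.{decoy_ext}", # legacy null-byte truncation
    ]

for name in upload_candidates("shell", "php", "jpg"):
    print(name)
```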











  1. HTTP header field ordering: different web servers order response headers differently.
  2. Request nonexistent or error-triggering pages and observe the response.
  3. Identify via scanners, online tools, and the like.
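Point 1 can be sketched as comparing the observed header order against known signatures. The two signatures below are illustrative placeholders for demonstration, not an authoritative fingerprint database.

```python
# Hypothetical header-order signatures; real fingerprinting tools keep
# far richer databases.
SIGNATURES = {
    ("Date", "Server"): "Apache-family ordering (illustrative)",
    ("Server", "Date"): "IIS-family ordering (illustrative)",
}

def guess_by_header_order(header_names):
    """Classify a server by the relative order of Date and Server in
    its response headers."""
    key = tuple(h for h in header_names if h in ("Date", "Server"))
    return SIGNATURES.get(key, "unknown ordering")

print(guess_by_header_order(["Date", "Server", "Content-Type", "Content-Length"]))
```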


robots.txt lists the directories crawlers are forbidden to fetch. The testing guide only mentions checking that robots.txt exists, but I think there is more to dig into here, since leaking other sensitive files is just as fatal. Front-end source code packaged for download has happened quite a few times; pulling the code down for audit is like catching the target undressed. In front-end JS code you can also look for the cookie-generation rules; comments may contain hard-coded credentials; and internal IPs, mailboxes, accounts, and so on written into the code during the testing phase may still be there. These issues mostly appear in small shops and small projects; large web applications generally do not have them. In addition, files such as Sitemap, .DS_Store, and crossdomain.xml can also expose sensitive information.

















  • netcraft. An online tool that identifies the web server and basic page information. What's that site running?
  • Nmap. Scan ports for services and versions; just keep the manual handy for the flags.
  • BurpSuite. Not much to say; 2.0 chained with sqlmap and Xray works very well.
  • WhatWeb. Identifies web applications. WhatWeb - Next generation web scanner.
  • BlindElephant. Works by comparing the checksums of static files across versions, so its accuracy is high. BlindElephant










  • Send a few malformed parameters (negative values, characters, etc.) and check whether Debug mode has been turned off.
  • Keep sending malformed parameters and request nonexistent files; confirm the returned pages contain no error details.
  • Check the logs: are create, delete, update, and query operations all logged, and is the three-role separation principle (admin, audit, user) observed?
  • Middleware configuration files and site configuration files must not be accessible.
  • Browse the various odd features in the administrator panel.







Sensitive files can leak in several ways: incomplete cleanup when the application goes live after development, incomplete cleanup after an update, and so on. The leak may be a server configuration file or even source code. When testing, rely first on experience, second on the scanner, third on the occasional find in comments (though scanners now check comments for sensitive content as well), and finally on Google hacking. Sometimes a sensitive file has been deleted, but because it once existed its location lingers in the Google hacking database, which lets you infer where similar files live.



  • The PUT method allows an attacker to upload files to the server (the classic IIS PUT vulnerability).
  • The DELETE method allows an attacker to delete files on the server.
  • CONNECT allows an attacker to use the web server as a proxy.
  • TRACE was first thought harmless, but was later found exploitable for cross-site tracing (CST).







  • Back-office registration, with no public registration interface
  • Invitation-code registration, e.g. XssPlatform, t00ls
  • Phone number/email registration
  • Username and password registration




  • Can the same user/identity register more than once?
  • Can users with multiple permission levels be registered?
  • Is the supplied email address/phone number actually verified?





  • Web application responses. When sending HTTP requests, swap the uid and observe whether the responses are identical, or whether a "user does not exist" hint appears. The observable range also includes errors and status codes such as 404/403/200.
  • Collection from URIs. Some web applications use routing, so user identifiers are visible in the URI. Harvest them wherever other users appear: friend lists, follower/following lists, comment sections, and so on.
  • Rule inference. Register several users to infer how usernames are generated; timestamps and registration data are usually involved.
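The response-diffing idea in the first bullet can be sketched like this; `fetch` is a stand-in for the real HTTP call, and `fake_fetch` fakes a back end that answers differently for valid uids.

```python
def leaks_user_existence(fetch, known_uid, bogus_uid):
    """True if the responses for a valid and an invalid uid differ
    observably (status code, body, and so on)."""
    return fetch(known_uid) != fetch(bogus_uid)

# Hypothetical back end: one real profile, everything else a 404.
responses = {"1001": (200, "profile page")}

def fake_fetch(uid):
    return responses.get(uid, (404, "no such user"))

print(leaks_user_existence(fake_fetch, "1001", "9999"))
```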


















  • Are there request parameters used for file operations?
  • Are there unusual file extensions?
  • Are there interesting variable names?
  • Can you confirm that the web application generates page content dynamically from the Cookie?



An exploration of the RPO attack technique - FreeBuf cybersecurity portal



















Chrome has an extension called Edit This Cookie (https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg/related?hl=en-US) that shows the attributes of each cookie. Taking CSDN as an example, the upper pane lets you modify the value of every key in the cookie, while the lower pane shows whether the key cookie attributes are set. The cookie shown here is in a logged-out state.
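The same flag check can be done outside the browser by parsing a raw Set-Cookie header with the standard library; the header below is a made-up sample, not taken from CSDN.

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header and report whether the HttpOnly and
# Secure attributes are present, the same check Edit This Cookie
# surfaces in its lower pane.
raw = "sessionid=abc123; Path=/; HttpOnly"  # sample header
cookie = SimpleCookie()
cookie.load(raw)
morsel = cookie["sessionid"]
print("HttpOnly:", bool(morsel["httponly"]))  # set in the sample
print("Secure:  ", bool(morsel["secure"]))    # missing in the sample
```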





































