SRC vulnerability mining: the methods you need to actually dig out vulnerabilities (recommended to read alongside learning SRC mining)

  • I still remember that when I finished learning the most basic vulnerability types, I could not contain my excitement, thinking I could finally show off my skills.
    I also remembered a certain expert saying that "the essence of penetration testing is information collection". So what exactly should we collect during information collection?

  • Pick up a website, scan it with sqlmap whenever you see an input box, and throw an XSS payload into every search box. Predictably, you get nothing. I believe this is the state of most beginners: you have learned the basic vulnerability types, but when you pick up a real website for practice you cannot find a single flaw. When my mentor saw the state I was in,

he warned me: "The essence of penetration testing is information collection, and real masters spend 80% of their time on information collection." I understood instantly: the gap between me and the masters is that 80% of the time.

This shows how important information collection is. So what exactly should we collect? Below I take the website of a certain university's Academic Affairs Office as an example and share my understanding of information collection as a beginner.

Information displayed on the website
We open the homepage of the Academic Affairs Office website and list all of its important elements.

URL

QR code

CAPTCHA (verification code)

User registration

Forgot-password function

Web page source code

Username and password

We need to collect information on each of the elements above.

URL

We need to collect the subdomains under this domain, its IP addresses, the C segment (the surrounding /24 network), and side stations (other sites hosted on the same server). We can use tools to collect them; common ones include Layer, webroot, Yujian, etc. You can also use online lookup sites such as webscan (http://www.webscan.
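The same collection can also be scripted. Below is a minimal sketch of the two ideas above: brute-forcing candidate subdomains from a wordlist via DNS resolution, and computing the C segment (/24 network) of each resolved IP. The domain `example.edu` and the tiny wordlist are placeholders for illustration, not real targets, and a real run would use a much larger wordlist.

```python
import socket
import ipaddress

def c_segment(ip: str) -> str:
    """Return the /24 network (the 'C segment') that contains this IP."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

def find_subdomains(domain: str, words: list[str]) -> dict[str, str]:
    """Try to resolve each candidate subdomain; return the ones that exist.

    This is a simple brute-force check; dedicated tools also use
    certificate-transparency logs, search engines, and zone transfers.
    """
    found = {}
    for word in words:
        host = f"{word}.{domain}"
        try:
            found[host] = socket.gethostbyname(host)
        except socket.gaierror:
            pass  # name does not resolve; skip it
    return found

if __name__ == "__main__":
    # Placeholder target and wordlist, only for illustration.
    for host, ip in find_subdomains("example.edu", ["www", "jwc", "mail"]).items():
        print(host, ip, "C segment:", c_segment(ip))
```

Once you have the C segment, scanning the neighbouring addresses in that /24 for live web services is how side stations on the same network are usually discovered.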


Origin blog.csdn.net/qq_53577336/article/details/123627998