Automated penetration testing && automated SRC mining

foreword

It’s been a while since I last wrote a blog post. Work has kept me busy, but I recently found time to study automated penetration testing, so let me share it below.

approach

root domain collection

The traditional approach to asset collection starts from domain names, so the quality of the domain collection matters a great deal. Here I used the ICP record query tool
ICP-Checker
written by another researcher. The general usage is as follows.
For example, if our target is Tencent, an ICP record lookup on
tencent.com gives us the name of the registered organizer.

Then we run the tool and enter the full company name; it dumps every root domain the company has registered, taken from the ICP filing information.
Next we copy the domains and save them to a file named xxx.domain.
Since the list may contain duplicate domains, we can use the following script to deduplicate it; just modify file_name and run it:

import os

file_name = "xxx.domain"
# sort and deduplicate; note that `uniq -u` would drop every domain that appears more
# than once, so we use plain `uniq` (equivalent to `sort -u`) to keep one copy of each
os.system(f"cat {file_name} | sort | uniq > test.txt")
os.system(f"mv test.txt {file_name}")

With that, the root domain collection is done.
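
Before moving on, it can help to sanity-check the list. The following is a minimal sketch of my own (not part of the original tooling) that lowercases the entries, drops blanks, and keeps only lines that look like bare root domains:

import re

file_name = "xxx.domain"  # the same file produced above

# my own optional cleanup: normalize case, drop blanks and anything that doesn't look like a root domain
with open(file_name, "r", encoding="utf-8") as f:
    domains = {line.strip().lower() for line in f if line.strip()}
valid = sorted(d for d in domains if re.fullmatch(r"[a-z0-9-]+(\.[a-z0-9-]+)+", d))
with open(file_name, "w", encoding="utf-8") as f:
    f.write("\n".join(valid) + "\n")
print(f"{len(valid)} root domains ready")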

asset collection

There are many ways to do this. Today I'll introduce a relatively fast method suited to people with a paid account: querying the FOFA API directly.

# -*- coding: utf-8 -*-
import requests
import base64
import time
import os
email = ""   # your FOFA account email
apikey = ""  # your FOFA API key
# pick up the first *.domain file in the current directory
for file in os.listdir('.'):
    if '.domain' in file:
        domain_file = file
        break


def get_fofa_result(query_str, fields="", page=1, size=100):
    payload = base64.b64encode(query_str.encode()).decode()
    url = f'https://fofa.info/api/v1/search/all?email={email}&apikey={apikey}&qbase64={payload}&fields={fields}&page={page}&size={size}'
    # retry with an increasing delay; give up only when the F-point balance runs out
    for i in range(1, 1000, 1):
        res = requests.get(url)
        if res.json().get('error') == True:
            # "F点余额不足" in errmsg means the F-point balance is insufficient, so stop this query
            if 'F点余额不' in res.json().get('errmsg'):
                return ""
            else:
                # other errors (e.g. rate limiting): wait a bit longer and retry
                time.sleep(i)
        else:
            return res.json()


def get_total_count(query_str):
    res = get_fofa_result(query_str)
    # an empty result means the query could not run (e.g. no F-point balance left)
    return res['size'] if res else 0


def save_result(query_str, total_count):
    step = max(1000, total_count//50)
    step = min(10000, step)  # cap the page size so a single request doesn't get too large
    for i in range(0, total_count, step):
        res = get_fofa_result(query_str, "protocol,host", 1+i//step, step)
        if res == "":
            print(query_str+" is too large,and can not download")
            return
        # web targets go to the .url file; everything else goes to the .service file
        for protocol, host in res['results']:
            if protocol == "http" or protocol == "https":
                if protocol == "http":
                    host = "http://"+host
                with open(domain_file.replace("domain", "url"), "a") as f:
                    f.write(host+"\n")
            else:
                with open(domain_file.replace("domain", "service"), "a") as f:
                    f.write(protocol+" "+host+"\n")


# query FOFA once per collected root domain
with open(domain_file, "r") as f:
    for domain in f.read().split():
        query_str = f'domain="{domain}"'
        count = get_total_count(query_str)
        print(domain)
        if count == 0:
            continue
        else:
            save_result(query_str, count)


What I wrote here is fairly simple. Because of API quota limits there may still be errors, which you can handle yourself inside get_fofa_result. Then just run it.
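
For reference, here is a minimal sketch of what that extra error handling could look like: bounded retries with exponential backoff around the same FOFA endpoint. It reuses the email and apikey variables from the script above, and the function name fofa_query_safe is my own.

import base64
import time
import requests


def fofa_query_safe(query_str, fields="host", page=1, size=100, max_retries=5):
    # a sketch of defensive retry/backoff handling for FOFA quota or network errors;
    # reuses the email/apikey variables defined in the script above
    payload = base64.b64encode(query_str.encode()).decode()
    url = (f"https://fofa.info/api/v1/search/all?email={email}&apikey={apikey}"
           f"&qbase64={payload}&fields={fields}&page={page}&size={size}")
    for attempt in range(max_retries):
        try:
            data = requests.get(url, timeout=30).json()
        except requests.RequestException:
            time.sleep(2 ** attempt)  # network hiccup: back off and retry
            continue
        if not data.get("error"):
            return data
        time.sleep(2 ** attempt)  # API-side error (e.g. rate limit): back off and retry
    return None  # give up after max_retries attempts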
The resulting .url file contains the URLs that can be accessed, while the .service file holds the remaining non-HTTP services. Here I mainly automate exploitation of the HTTP services.
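
Optionally, before handing the .url file to the scanners, you can pre-filter it with a quick liveness check. This is a small sketch of my own, not part of the original workflow:

import requests
import urllib3

urllib3.disable_warnings()  # we probe with verify=False, so silence the TLS warnings
url_file = "xxx.url"        # the file produced by the FOFA script above

alive = []
with open(url_file, "r", encoding="utf-8") as f:
    for url in (line.strip() for line in f):
        if not url:
            continue
        try:
            # any HTTP response at all counts as "alive"; HEAD keeps the traffic light
            requests.head(url, timeout=5, allow_redirects=True, verify=False)
            alive.append(url)
        except requests.RequestException:
            pass

with open(url_file, "w", encoding="utf-8") as f:
    f.write("\n".join(alive) + "\n")
print(f"{len(alive)} reachable URLs kept in {url_file}")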

automated exploitation of HTTP services

Here we mainly use nuclei and fscan for vulnerability detection. We could also add xray, but xray has a lot of false positives and generates some odd config files when it runs, so I left it out. I also configured notify and a DingTalk robot so the results are pushed to me directly.

import os
import requests
from threading import Thread
# pick up the first *.url file in the current directory
for file in os.listdir('.'):
    if '.url' in file:
        url_file = file
        break


def nuclei():
    # scan the collected URLs with nuclei and push findings through notify
    os.system(
        f"nuclei  -stats -et ssl/weak-cipher-suites.yaml  -l {url_file} -rl 1000 -bs 35 -c 50  -mhe 10 -ni -o res-tmp.txt  -severity critical,medium,high | notify -silent")


def fscan():
    os.system(f"fscan  -uf {url_file}")
    if os.path.exists("result.txt") == False:
        os.system("echo 'fscan did not find any information'")
        return
    # keep only the [+] findings and drop CDN/WAF banners (KONA, Varnish, Cloudfront, CloudFlare)
    os.system(
        'cat result.txt|grep "\[+\]"|grep -v "\[KONA\]"|grep -v "\[Varnish\]"|grep -v  "\[Cloudfront\]"|grep -v "\[CloudFlare\]" >success.txt')
    if int(os.popen("cat success.txt|wc -l").read()) > 0:
        # push each finding line to the DingTalk robot webhook
        with open("success.txt", "r") as f:
            for line in f.read().split("\n"):
                if line == "":
                    continue
                content = {"msgtype": "text", "text": {"content": line}}
                requests.post(
                    'https://oapi.dingtalk.com/robot/send?access_token=aaaa', json=content)


# run nuclei and fscan in parallel
t1 = Thread(target=nuclei)
t2 = Thread(target=fscan)
t1.start()
t2.start()
t1.join()
t2.join()

Then we can run it: just execute python3 attack.py directly. Here I have too few URLs, so there are no results to show.
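
If you want to chain the whole pipeline, a simple driver can run the steps in sequence. The file names dedup.py, fofa_collect.py and attack.py below are my own assumed names for the scripts shown earlier:

import os

# my own assumed file names for the scripts above
steps = [
    "python3 dedup.py",         # deduplicate xxx.domain
    "python3 fofa_collect.py",  # expand root domains into xxx.url / xxx.service via FOFA
    "python3 attack.py",        # run nuclei + fscan against xxx.url
]

for cmd in steps:
    print(f"[*] running: {cmd}")
    if os.system(cmd) != 0:
        print(f"[!] step failed: {cmd}")
        break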

I also built a full automated exploitation pipeline before, back when I was hunting on HackerOne. It was highly automated and I barely had to do anything myself. The domain and asset collection shown here is still fairly complete, so I'll share this part first~

Of course, automated SRC mining won't dig out high-risk vulnerabilities; those still require actually testing the business logic by hand. This is mostly just for fun.


Origin blog.csdn.net/azraelxuemo/article/details/130565438