Python crawler: using the requests module to build a simple web page collector

 

          First, let's go over the coding workflow of the requests module (4 steps):
          1. Specify the URL
          2. Initiate the request (GET or POST)
          3. Get the response data
          4. Store (persist) the data
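
          To make these four steps concrete, here is a minimal sketch of the same workflow (https://httpbin.org/get and the output file name page.html are only placeholder examples for illustration):

import requests

# 1. Specify the URL (a placeholder echo endpoint for this sketch)
url = 'https://httpbin.org/get'
# 2. Initiate the request (a GET request here)
response = requests.get(url=url)
# 3. Get the response data as text
page_text = response.text
# 4. Store (persist) the data to a local file
with open('page.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)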

          Next, we introduce UA (User-Agent) detection and UA disguise.

          1. UA detection: the portal server checks the identity of the carrier behind each incoming request. If the carrier is identified as a browser, the request is treated as a normal request. If the carrier is not identified as a browser, the request is treated as an abnormal request (a crawler), and the server is likely to reject it.

          2. UA disguise: make the crawler's request carrier identity look like that of a certain browser, by sending a browser-style User-Agent header (see the sketch below).
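
          As a quick check of what UA disguise changes (a minimal sketch, reusing the Sogou URL from the code later in this post), you can compare the User-Agent that requests sends by default with the disguised one:

import requests

# Without a custom header, requests identifies itself as 'python-requests/x.y.z',
# which UA detection on the server can easily flag as a crawler.
plain = requests.get('https://www.sogou.com')
print(plain.request.headers['User-Agent'])

# With a browser-style User-Agent in the headers, the server sees what looks
# like a normal browser request.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3947.100 Safari/537.36'
}
disguised = requests.get('https://www.sogou.com', headers=headers)
print(disguised.request.headers['User-Agent'])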

          Next comes the actual code:

import requests

if __name__ == '__main__':
    # UA disguise: wrap the corresponding User-Agent into a dictionary
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3947.100 Safari/537.36'
    }
    url = 'https://www.sogou.com/web'
    # The query is dynamic: wrap the parameters carried by the URL into a dictionary
    kw = input('enter a word:')
    params = {
        'query': kw
    }
    response = requests.get(url=url, params=params, headers=headers)
    para_text = response.text
    fileName = kw + '.html'
    with open(fileName, 'w', encoding='utf-8') as fp:
        fp.write(para_text)
    print(fileName, 'saved successfully!!')

 

A brief note on the with open(file storage location, file open mode, file encoding) as fp: syntax used above. The first argument is where the file is stored, the mode controls how the file is opened (here 'w' for writing), and encoding sets the character encoding (here 'utf-8'); the with statement closes the file automatically when the block ends.
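
As a minimal sketch of that syntax (the file name example.html below is just a hypothetical placeholder), writing and then reading a file looks like this:

with open('example.html', 'w', encoding='utf-8') as fp:
    # 'w' opens the file for writing, creating or truncating it
    fp.write('<html>hello</html>')
# the file is closed automatically once the with block ends

with open('example.html', 'r', encoding='utf-8') as fp:
    # 'r' opens the same file for reading, with the same encoding
    content = fp.read()
print(content)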

 
