Python crawler: the requests module (1)

- Learning the requests module, organized around five points

  • What is the requests module?
    • The requests module is built on top of Python's native networking modules; its main role is to simulate a browser issuing requests. It is powerful, and its usage is simple and efficient, which is why it dominates the web-crawling field.
  • Why use the requests module?
    • Because the urllib module is inconvenient in several ways:
      • URL encoding must be handled manually
      • POST request parameters must be handled manually
      • Cookie and proxy handling is cumbersome
      • ......
    • With the requests module:
      • URL encoding is handled automatically
      • POST request parameters are handled automatically
      • Cookie and proxy handling is simplified
      • ......
  • How to use the requests module
    • Installation:
      • pip install requests
    • Usage workflow (see the minimal sketch after this list):
      • Specify the URL
      • Issue the request via the requests module
      • Extract the data from the response object
      • Persist the data to storage
  • Learn and consolidate the requests module through five crawler projects
    • GET requests with the requests module
      • Requirement: crawl the Sogou search results page for a specified search term
    • POST requests with the requests module
      • Requirement: log in to Douban and crawl page data after a successful login
    • AJAX GET requests with the requests module
    • AJAX POST requests with the requests module
    • Comprehensive exercise
      • Requirement: crawl cosmetics production license data from the China National Medical Products Administration at http://125.35.6.84:81/xk/
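  • A minimal sketch of the four-step workflow listed above (httpbin.org is a public request-echo service, used here purely for illustration); note that requests URL-encodes the parameters automatically, the chore that urllib leaves to you:
    import requests

    # 1. Specify the URL (httpbin.org simply echoes the request back)
    url = 'https://httpbin.org/get'
    # 2. Issue the request; the params dict is URL-encoded automatically,
    #    unlike urllib, where you would call urllib.parse.urlencode yourself
    response = requests.get(url, params={'query': 'python'})
    # 3. Extract the data from the response object
    page_text = response.text
    # 4. Persist it to storage
    with open('./demo.html', 'w', encoding='utf-8') as fp:
        fp.write(page_text)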

- Code examples

  • Requirement: crawl the Sogou search results page for a specified search term
    import requests
    # Specify the search keyword
    word = input('enter a word you want to search:')
    # Custom request headers (disguise the UA; see the notes below)
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
    }
    # Specify the URL
    url = 'https://www.sogou.com/web'
    # Wrap the GET request parameters
    params = {
        'query': word,
        'ie': 'utf-8'
    }
    # Issue the request
    response = requests.get(url=url, params=params, headers=headers)
    # Extract the response data
    page_text = response.text
    # Persist to storage
    with open('./sougou.html', 'w', encoding='utf-8') as fp:
        fp.write(page_text)

    Disguising the identity of the request carrier:

    • User-Agent: the identity of a request's carrier. For a request initiated by a browser, the carrier is the browser, so the User-Agent identifies that browser; for a request initiated by a crawler program, the carrier is the crawler, so the User-Agent identifies the crawler. A server can therefore inspect this value to determine whether a request came from a particular browser or from a crawler.

    • Anti-crawling mechanism: some portals capture the User-Agent of incoming requests and, if the UA identifies a crawler, refuse to serve data to that request.

    • Counter-strategy: have the crawler disguise its UA as that of a particular browser.
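
    You can observe both cases with httpbin.org, a public echo service used here purely for illustration (a minimal sketch; the exact default UA string depends on your installed requests version):
    import requests

    # The default UA exposes the crawler, e.g. "python-requests/2.31.0"
    print(requests.get('https://httpbin.org/user-agent').json())

    # A spoofed UA makes the same request look like it came from Chrome
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
    }
    print(requests.get('https://httpbin.org/user-agent', headers=headers).json())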

  • Requirement: log in to Douban and crawl page data after a successful login
    import requests
    url = 'https://accounts.douban.com/login'
    # Wrap the POST request parameters
    data = {
        "source": "movie",
        "redir": "https://movie.douban.com/",
        "form_email": "15027900535",
        "form_password": "bobo@15027900535",
        "login": "登录",
    }
    # Custom request headers
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
    }
    # Issue the POST request
    response = requests.post(url=url, data=data, headers=headers)
    # Persist the post-login page
    page_text = response.text
    with open('./douban111.html', 'w', encoding='utf-8') as fp:
        fp.write(page_text)
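
    To crawl further pages that require the login state, the session cookies returned by the login response must be carried along. A minimal sketch using requests.Session, which stores cookies across requests automatically (one of the conveniences listed earlier); the form fields and credentials are the same assumed values as in the example above:
    import requests

    # A Session object keeps cookies across requests automatically
    session = requests.Session()
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
    }
    # Log in once; the session stores any cookies the server sets
    data = {
        "source": "movie",
        "redir": "https://movie.douban.com/",
        "form_email": "15027900535",
        "form_password": "bobo@15027900535",
        "login": "登录",
    }
    session.post('https://accounts.douban.com/login', data=data, headers=headers)
    # Subsequent requests reuse the stored cookies, so the login state persists
    page_text = session.get('https://movie.douban.com/', headers=headers).text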


Requirement: crawl movie detail data from the Douban movie category ranking at https://movie.douban.com/

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import requests

if __name__ == "__main__":
    # Specify the AJAX GET request URL (obtained with a packet-capture tool)
    url = 'https://movie.douban.com/j/chart/top_list?'
    # Custom request headers; header fields must be wrapped in a dict
    headers = {
        # Customize the User-Agent; other header fields can be customized too
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36',
    }
    # GET request parameters (obtained from the capture tool)
    param = {
        'type': '5',
        'interval_id': '100:90',
        'action': '',
        'start': '0',
        'limit': '20'
    }
    # Issue the GET request and obtain the response object
    response = requests.get(url=url, headers=headers, params=param)
    # Extract the response content: a JSON string
    print(response.text)
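
Since the response body is JSON, response.json() parses it directly, and the start/limit parameters page through the ranking. A minimal sketch (the 'title' and 'score' field names are assumptions based on a packet capture; verify them against your own capture):

import requests

url = 'https://movie.douban.com/j/chart/top_list'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36',
}
# Page through the ranking 20 movies at a time via start/limit
for start in range(0, 60, 20):
    param = {'type': '5', 'interval_id': '100:90', 'action': '',
             'start': str(start), 'limit': '20'}
    movies = requests.get(url, headers=headers, params=param).json()
    for movie in movies:
        # 'title' and 'score' are assumed field names from the capture
        print(movie.get('title'), movie.get('score'))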

Requirement: crawl restaurant data for a specified location from the KFC restaurant query page http://www.kfc.com.cn/kfccda/index.aspx

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import requests

if __name__ == "__main__":
    # Specify the AJAX POST request URL (obtained with a packet-capture tool)
    url = 'http://www.kfc.com.cn/kfccda/ashx/GetStoreList.ashx?op=keyword'
    # Custom request headers; header fields must be wrapped in a dict
    headers = {
        # Customize the User-Agent; other header fields can be customized too
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36',
    }
    # POST request parameters (obtained from the capture tool)
    data = {
        'cname': '',
        'pid': '',
        'keyword': '北京',
        'pageIndex': '1',
        'pageSize': '10'
    }
    # Issue the POST request and obtain the response object
    response = requests.post(url=url, headers=headers, data=data)
    # Extract the response content: a JSON string
    print(response.text)
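
To collect more than one page of results, increment pageIndex. A minimal sketch (the number of pages available for a keyword is not known up front, so a fixed range is used here purely for illustration):

import requests

url = 'http://www.kfc.com.cn/kfccda/ashx/GetStoreList.ashx?op=keyword'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36',
}
# Fetch the first three result pages for the keyword
pages = []
for page_index in range(1, 4):
    data = {'cname': '', 'pid': '', 'keyword': '北京',
            'pageIndex': str(page_index), 'pageSize': '10'}
    response = requests.post(url, headers=headers, data=data)
    pages.append(response.json())  # each page body parses as JSON
print(pages)
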
  • Requirement: crawl cosmetics production license data from the China National Medical Products Administration
    import requests
    from fake_useragent import UserAgent

    ua = UserAgent(use_cache_server=False, verify_ssl=False).random
    headers = {
        'User-Agent': ua
    }
    url = 'http://125.35.6.84:81/xk/itownet/portalAction.do?method=getXkzsList'
    for page in range(3, 5):
        data = {
            'on': 'true',
            'page': str(page),
            'pageSize': '15',
            'productName': '',
            'conditionType': '1',
            'applyname': '',
            'applysn': ''
        }
        json_text = requests.post(url=url, data=data, headers=headers).json()
        all_id_list = []
        for item in json_text['list']:
            id = item['ID']  # used to fetch the detail (second-level) page
            # The following detail fields can be obtained from the detail page:
            # name = item['EPS_NAME']
            # product = item['PRODUCT_SN']
            # man_name = item['QF_MANAGER_NAME']
            # d1 = item['XC_DATE']
            # d2 = item['XK_DATE']
            all_id_list.append(id)
        # This URL is an AJAX POST request
        post_url = 'http://125.35.6.84:81/xk/itownet/portalAction.do?method=getXkzsById'
        for id in all_id_list:
            post_data = {'id': id}
            response = requests.post(url=post_url, data=post_data, headers=headers)
            # The response may come back as text or as JSON, so check the
            # Content-Type header to decide how to parse it
            if response.headers['Content-Type'] == 'application/json;charset=UTF-8':
                json_text = response.json()
                print(json_text['businessPerson'])
