Web Scraping Case Study — Scraping AMAC Data

 For work, I needed to scrape data from a few websites, including the AMAC (Asset Management Association of China) site and part of Tianyancha's data.


I. The AMAC Website

Scraping approach:

1. Inspect the target URL: http://gs.amac.org.cn/amac-infodisc/api/pof/manager?rand=0.9775162173180119&page=%s&size=50

 It contains a random number string (an anti-scraping measure that changes on each refresh), plus the page number and the number of records per page, which can be used to build the URL for every page.

Loop over all the listing pages, using the requests library and random.random() to generate the random number.
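The URL construction can be sketched as a small helper (build_url is a name I'm introducing; the endpoint and query parameters are taken from the URL above):

```python
import random

def build_url(page, size=50):
    # rand mimics the random fragment seen in the browser request;
    # page and size fill in the remaining query parameters.
    return ("http://gs.amac.org.cn/amac-infodisc/api/pof/manager"
            "?rand=%s&page=%s&size=%s") % (random.random(), page, size)

print(build_url(3))
```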

The response body is JSON, so it can be parsed directly with the .json() method of the requests Response object.
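For reference, the response has roughly this shape (the field names are the ones used in the code below; every value here is invented for illustration):

```python
# A mocked-up response with the same structure as the real API's JSON.
# Field names are real; all values are illustrative only.
data = {
    "totalElements": 21301,  # total records across all pages
    "totalPages": 427,       # ceil(21301 / 50)
    "size": 50,              # records per page
    "content": [             # the records on the current page
        {
            "id": "101",
            "managerName": "Example Capital",
            "artificialPersonName": "Zhang San",
            "regAdrAgg": "Shanghai",
            "registerNo": "P1000001",
        },
    ],
}

# The pagination fields are consistent: size * totalPages covers totalElements.
print(len(data["content"]), data["totalPages"])
```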

2. The save function stores the detailed data from each listing page; adapt it to capture whatever fields you need.

import random

import requests


def save(school_datas):
    # Print the fields of interest from each manager record.
    for data1 in school_datas:
        id = data1['id']
        managerName = data1['managerName']
        artificialPersonName = data1['artificialPersonName']
        regAdrAgg = data1['regAdrAgg']
        registerNo = data1['registerNo']
        print(id, managerName, artificialPersonName, regAdrAgg, registerNo)


header = {
    'Accept': 'application/json, text/javascript, */*; q=0.01',
    'Accept-Encoding': 'gzip, deflate',
    'Connection': 'keep-alive',
    'Host': 'gs.amac.org.cn',
    'Origin': 'http://gs.amac.org.cn',
    'Referer': 'http://gs.amac.org.cn/amac-infodisc/res/pof/manager/managerList.html',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36',
}

# e.g. http://gs.amac.org.cn/amac-infodisc/api/pof/manager?rand=0.9775162173180119&page=1&size=50
url = "http://gs.amac.org.cn/amac-infodisc/api/pof/manager?rand=%s&page=%s&size=50"

for i in range(0, 427):
    print("Page %s =====================" % i)
    r = random.random()  # fresh random number for the anti-scraping parameter
    data = requests.post(url % (r, i),
                         json={'rand': str(r), 'page': str(i), 'size': '50'},
                         headers=header).json()

    print("Records on this page ->", len(data['content']))
    print("Total records ->", data['totalElements'])
    print("Records per page ->", data['size'])
    print("Total pages ->", data['totalPages'])

    save(data['content'])
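As written, save only prints to stdout. If the records should go to disk instead, one option is a CSV variant — a minimal sketch (save_csv and the output path are my own names; the field list matches the response fields used above):

```python
import csv
import os

FIELDS = ["id", "managerName", "artificialPersonName", "regAdrAgg", "registerNo"]

def save_csv(school_datas, path="managers.csv"):
    # Append one row per record; write the header only when creating the file.
    is_new = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        if is_new:
            writer.writeheader()
        writer.writerows(school_datas)

# Illustrative record in the same shape as one entry of data["content"].
save_csv([{
    "id": "101",
    "managerName": "Example Capital",
    "artificialPersonName": "Zhang San",
    "regAdrAgg": "Shanghai",
    "registerNo": "P1000001",
}], path="managers_demo.csv")
```

Appending rather than overwriting lets the loop above call save_csv once per page and accumulate all 427 pages into a single file.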


Reprinted from www.cnblogs.com/daliner/p/10145040.html