Python crawler (III): crawling Lagou job postings with urllib.request

The request.Request class

If you want to add request headers when making a request (the reason for adding them is that without headers the site may detect our crawler and block it), you must use the request.Request class, for example to add a User-Agent:

from urllib import request

url = 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput='
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0'
}

req = request.Request(url, headers=headers)
resp = request.urlopen(req)
print(resp.read())
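Note that resp.read() returns raw bytes. A small sketch of decoding the response into readable text (assuming, as the later examples in this post do, that the page is UTF-8 encoded):

from urllib import request

url = 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput='
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0'
}

req = request.Request(url, headers=headers)
with request.urlopen(req) as resp:
    html = resp.read().decode('utf-8')  # bytes -> str; assumes the response is UTF-8

print(html[:500])  # show only the first 500 characters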

This way we can crawl down the information from this site:

[screenshot of the output omitted]

Lagou's anti-crawler defenses are designed quite well. Opening the page now:

[screenshot of the page omitted]

If we crawl only this page, we get just the page itself, which does not actually contain the job postings; the postings are fetched separately as JSON through an Ajax call and then rendered into the page.

We need to find the URL that actually serves the job data:

[screenshots omitted]

This URL is requested with the POST method:

from urllib import request

url = 'https://www.lagou.com/jobs/positionAjax.json?needAddtionalResult=false'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0'
}

data = {
    'first': 'true',
    'pn': 1,
    'kd': 'python'
}
# data is passed here as a plain dict, which urlopen rejects (see below)
req = request.Request(url, headers=headers, data=data, method='POST')
resp = request.urlopen(req)
print(resp.read())

The result is:

[error screenshot omitted]

The error occurs because data must be passed through urlencode, and it must also be in bytes form (encode('utf-8')).
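The fix is the one-line change used in the full example below; a minimal sketch of just the encoding step:

from urllib import parse

data = {
    'first': 'true',
    'pn': 1,
    'kd': 'python'
}
# url-encode the form fields, then convert the resulting str to bytes
encoded_data = parse.urlencode(data).encode('utf-8')
print(encoded_data)  # b'first=true&pn=1&kd=python'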

We also need to further disguise the request headers. The headers now become:
headers = {
    'Accept': 'application/json, text/javascript, */*; q=0.01',
    'Referer': 'https://www.lagou.com/jobs/list_%E8%BF%90%E7%BB%B4?city=%E6%88%90%E9%83%BD&cl=false&fromSearch=true&labelWords=&suginput=',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
}
 

To find these headers, right-click on the page, choose Inspect Element, switch to the Network tab, and copy the User-Agent value and the URL shown in the Referer field.

from urllib import request, parse

url = 'https://www.lagou.com/jobs/positionAjax.json?needAddtionalResult=false'
headers = {
    'Accept': 'application/json, text/javascript, */*; q=0.01',
    'Referer': 'https://www.lagou.com/jobs/list_%E8%BF%90%E7%BB%B4?city=%E6%88%90%E9%83%BD&cl=false&fromSearch=true&labelWords=&suginput=',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
}
data = {
    'first': 'true',
    'pn': 1,
    'kd': 'python'
}

# url-encode the form data and convert it to bytes before sending
req = request.Request(url, headers=headers, data=parse.urlencode(data).encode('utf-8'), method='POST')
resp = request.urlopen(req)

print(resp.read().decode('utf-8'))

At this point the response contains the message "您的操作太频繁,请稍后重试" ("Your operations are too frequent, please try again later"): the site has detected that someone is crawling it.

To get around this, we obtain the cookies associated with the page and send them along with the POST request.

For example, crawling operations (运维) jobs in Chengdu:

import requests
import time
import json


def main():
    url_start = "https://www.lagou.com/jobs/list_运维?city=%E6%88%90%E9%83%BD&cl=false&fromSearch=true&labelWords=&suginput="
    url_parse = "https://www.lagou.com/jobs/positionAjax.json?city=成都&needAddtionalResult=false"
    headers = {
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Referer': 'https://www.lagou.com/jobs/list_%E8%BF%90%E7%BB%B4?city=%E6%88%90%E9%83%BD&cl=false&fromSearch=true&labelWords=&suginput=',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
    }
    for x in range(1, 5):  # crawl result pages 1 through 4
        data = {
            'first': 'true',
            'pn': str(x),   # page number
            'kd': '运维'    # search keyword
        }
        s = requests.Session()
        s.get(url_start, headers=headers, timeout=3)  # request the start page first to obtain cookies
        cookie = s.cookies  # the cookies obtained by this visit
        response = s.post(url_parse, data=data, headers=headers, cookies=cookie, timeout=3)  # fetch the job data
        time.sleep(5)  # pause so we do not trigger the "too frequent" check
        response.encoding = response.apparent_encoding  # let requests guess the correct encoding
        text = json.loads(response.text)
        info = text["content"]["positionResult"]["result"]  # list of job postings on this page
        for i in info:
            print(i["companyFullName"])
            companyFullName = i["companyFullName"]
            print(i["positionName"])
            positionName = i["positionName"]
            print(i["salary"])
            salary = i["salary"]
            print(i["companySize"])
            companySize = i["companySize"]
            print(i["skillLables"])
            skillLables = i["skillLables"]
            print(i["createTime"])
            createTime = i["createTime"]
            print(i["district"])
            district = i["district"]
            print(i["stationname"])
            stationname = i["stationname"]

if __name__ == '__main__':
    main()
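The loop above prints each field and stores it in a local variable, but never uses it afterwards. A natural follow-up is to collect the postings and write them to a file. A minimal sketch (the rows list, the save_rows helper, and the jobs.csv filename are my own illustrative choices, not part of the original post):

import csv

FIELDS = ["companyFullName", "positionName", "salary", "companySize",
          "skillLables", "createTime", "district", "stationname"]

def save_rows(rows, path="jobs.csv"):
    # hypothetical helper: write one job posting per CSV row
    with open(path, "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for row in rows:
            record = {}
            for k in FIELDS:
                v = row.get(k)
                # skillLables appears to be a list in the response, so join it into one cell
                record[k] = ",".join(v) if isinstance(v, list) else v
            writer.writerow(record)

Inside the inner loop of main() you would append each i to a rows list, then call save_rows(rows) once after the loop finishes.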

 
