The requests Library

requests is an HTTP library written in Python on top of urllib, released under the Apache2 License.

It is the simplest and easiest-to-use HTTP library implemented in Python, and is the recommended library for writing crawlers.

1. Overall demonstration

eg1:

import requests

response = requests.get("https://www.baidu.com")
print(type(response))                     # <class 'requests.models.Response'>
print(response.status_code)               # 200
print(type(response.text))                # <class 'str'>
print(response.text)                      # body decoded with the guessed encoding
print(response.cookies)                   # RequestsCookieJar
print(response.content)                   # raw bytes
print(response.content.decode("utf-8"))   # bytes decoded manually as UTF-8

For many sites, reading response.text directly produces garbled output, so response.content is used instead. The data returned that way is raw bytes, which can then be converted to a string with decode("utf-8"); this fixes the garbled output that response.text would otherwise show.

After a request is sent, Requests makes an educated guess about the response encoding based on the HTTP headers, and uses that guess when you access response.text. You can find out which encoding Requests chose, and change it, via the response.encoding attribute.

eg2:

import requests

response = requests.get("http://www.baidu.com")
response.encoding = "utf-8"
print(response.text)

Either response.content.decode("utf-8") or response.encoding = "utf-8" avoids the garbled-text problem.

2. GET requests

Basic GET request

eg3:

import requests

response = requests.get('http://httpbin.org/get')
print(response.text) 

GET request with parameters

eg4:

import requests

response = requests.get("http://httpbin.org/get?name=zhaofan&age=23")
print(response.text)

If we want to pass data in the URL query string, we would normally append it in the form httpbin.org/get?key=val. The requests module also accepts these parameters as a dictionary, passed via the params keyword argument.

[Note: with the dictionary approach, any parameter whose value is None is not added to the URL; a demonstration follows eg5 below.]

eg5:

import requests

data = {
    "name": "zhaofan",
    "age": 22
}
response = requests.get("http://httpbin.org/get", params=data)
print(response.url)
print(response.text)
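
To see the note above in action, here is a minimal sketch in which one value is None and is therefore dropped from the query string:

import requests

data = {
    "name": "zhaofan",
    "age": None    # None values are omitted from the URL
}
response = requests.get("http://httpbin.org/get", params=data)
print(response.url)    # http://httpbin.org/get?name=zhaofan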

3. Parsing JSON

eg6:

import requests
import json

response = requests.get("http://httpbin.org/get")
print(type(response.text))          # <class 'str'>
print(response.json())              # parsed into a Python dict
print(json.loads(response.text))    # equivalent to response.json()
print(type(response.json()))        # <class 'dict'>

4. Fetching binary data

As mentioned above, response.content returns the body as raw bytes; the same approach can be used to download images and videos.
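
For example, a minimal sketch that downloads an image and writes it to disk (httpbin's sample PNG endpoint is used here as a stand-in for a real image URL):

import requests

response = requests.get("http://httpbin.org/image/png")  # returns a sample PNG
with open("sample.png", "wb") as f:                      # open in binary mode
    f.write(response.content)                            # write the raw bytes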

5. Adding headers

As with the urllib module, we can customize the request headers. For example, requesting Zhihu directly with requests fails by default.

eg7:

import requests

response = requests.get("https://www.zhihu.com")
print(response.text)

>>>    
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>openresty</center>
</body>
</html> 

This is because Zhihu requires header information. Enter chrome://version in Google Chrome to find your user agent string, then add it to the request headers.

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36

eg8:

import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36"
}
response = requests.get("https://www.zhihu.com", headers=headers)
print(response.text)

6. POST requests

To send a POST request, add a data parameter, which can be built from a dictionary; this makes POST requests very convenient to send.

eg9:

import requests

data = {
    "name":"Robin",
    "age":25
}
response = requests.post("http://httpbin.org/post",data=data)
print(response.text)

As with GET requests, a dictionary of headers can also be passed via the headers parameter when sending a POST request, as sketched below.
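
A minimal sketch combining data and headers (any user agent string works against httpbin, which simply echoes the headers back):

import requests

headers = {"User-Agent": "Mozilla/5.0"}
data = {"name": "Robin", "age": 25}
response = requests.post("http://httpbin.org/post", data=data, headers=headers)
print(response.json()["headers"]["User-Agent"])   # echoed back by httpbin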

7. The response object

The response object exposes many useful attributes.

eg10:

import requests

response = requests.get("http://www.baidu.com")
print(type(response.status_code),response.status_code)
print(type(response.headers),response.headers)
print(type(response.cookies),response.cookies)
print(type(response.url),response.url)
print(type(response.history),response.history)

8. File upload

The approach is similar to the other parameters: build a dictionary and pass it via the files parameter.

eg11:

import requests

files = {"files": open("test.txt", "rb")}
response = requests.post("http://httpbin.org/post", files=files)
print(response.text)

>>>
{
"args": {}, 
"data": "", 
"files": {
"files": "#LWP-Cookies-2.0\r\nSet-Cookie3: BAIDUID=\"146326AB70DACE0A508C0EB59A0CC349:FG=1\"; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; expires=\"2087-04-08 12:25:10Z\"; version=0\r\nSet-Cookie3: BIDUPSID=146326AB70DACE0A508C0EB59A0CC349; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; expires=\"2087-04-08 12:25:10Z\"; version=0\r\nSet-Cookie3: H_PS_PSSID=1443_21119_28720_28557_28697_28584_28518_28627_28606; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; discard; version=0\r\nSet-Cookie3: PSTM=1553159467; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; expires=\"2087-04-08 12:25:10Z\"; version=0\r\nSet-Cookie3: delPer=0; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; discard; version=0\r\nSet-Cookie3: BDSVRTM=0; path=\"/\"; domain=\"www.baidu.com\"; path_spec; discard; version=0\r\nSet-Cookie3: BD_HOME=0; path=\"/\"; domain=\"www.baidu.com\"; path_spec; discard; version=0\r\n"
}, 
"form": {}, 
"headers": {
"Accept": "*/*", 
"Accept-Encoding": "gzip, deflate", 
"Content-Length": "1029", 
"Content-Type": "multipart/form-data; boundary=5a90f1701e3712233a6261b4268f1d43", 
"Host": "httpbin.org", 
"User-Agent": "python-requests/2.19.1"
}, 
"json": null, 
"origin": "113.89.239.24, 113.89.239.24", 
"url": "https://httpbin.org/post"
}

9. Cookies

Getting cookies

import requests

response = requests.get("http://www.baidu.com")
print(response.cookies)

for key,value in response.cookies.items():
    print(key+"="+value)    

Session persistence

One use of cookies is simulating login, that is, maintaining a session across requests.

import requests

s = requests.Session()                                   # create a Session object
s.get("http://httpbin.org/cookies/set/number/123456")    # first request sets a cookie
response = s.get("http://httpbin.org/cookies")           # same session is reused for the same domain
print(response.text)

Certificate verification

Many sites are now served over HTTPS, which raises the question of certificate verification.

import requests

response = requests.get("https://www.12306.cn")
print(response.status_code)

At the time this was written, www.12306.cn served a certificate that was not trusted by the default CA bundle, so this request fails with requests.exceptions.SSLError (certificate verify failed).

To avoid this, pass verify=False. The page can then be fetched, but a warning is printed:

InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning)

The warning can be suppressed as follows:

import requests
from requests.packages import urllib3

urllib3.disable_warnings()
response = requests.get("https://www.12306.cn", verify=False)
print(response.status_code)

With the warning disabled, no message is printed. Alternatively, a certificate path can be supplied via the cert parameter.
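
A minimal sketch of the cert parameter, which takes a client-side certificate; the file paths here are placeholders for your own certificate and key:

import requests

# placeholder paths: substitute your own client certificate and private key
cert = ("/path/to/client.crt", "/path/to/client.key")
response = requests.get("https://example.com", cert=cert)
print(response.status_code)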

Proxy settings

import requests

proxies = {
    "http": "http://127.0.0.1:9999",
    "https": "http://127.0.0.1:8888"
}
response = requests.get("https://www.baidu.com", proxies=proxies)
print(response.text)

If the proxy requires a username and password, simply change the dictionary to the following form:

proxies = {
    "http": "http://user:[email protected]:9999"
}

Timeout settings

The timeout parameter sets how many seconds to wait before giving up on a request.
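
A minimal sketch, using httpbin's /delay endpoint (which waits five seconds before responding) to force a timeout:

import requests
from requests.exceptions import Timeout

try:
    # /delay/5 responds after 5 seconds, so a 1-second timeout fires first
    response = requests.get("http://httpbin.org/delay/5", timeout=1)
    print(response.status_code)
except Timeout:
    print("request timed out")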

Authentication

For sites that require authentication, use the requests.auth module.

import requests
from requests.auth import HTTPBasicAuth

response = requests.get("http://120.27.34.24:9001/", auth=HTTPBasicAuth("user", "123"))
print(response.status_code)

There is also a shorter form:

import requests

response = requests.get("http://120.27.34.24:9001/", auth=("user", "123"))
print(response.status_code)

Exception handling

All exceptions live in requests.exceptions; they are documented in detail at:

http://www.python-requests.org/en/master/api/#exceptions

RequestException inherits from IOError; HTTPError, ConnectionError, and Timeout inherit from RequestException; ProxyError and SSLError inherit from ConnectionError; and ReadTimeout inherits from Timeout.

For the full inheritance hierarchy, see:

http://cn.python-requests.org/zh_CN/latest/_modules/requests/exceptions.html#RequestException
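
A minimal sketch that catches exceptions from most specific to most general, following the hierarchy above (an aggressive timeout is used just to provoke an error):

import requests
from requests.exceptions import ReadTimeout, ConnectionError, RequestException

try:
    response = requests.get("http://httpbin.org/get", timeout=0.01)
    print(response.status_code)
except ReadTimeout:
    print("read timed out")
except ConnectionError:
    print("connection error")
except RequestException:
    print("some other requests error")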
