Basic Request Libraries for Web Scraping (Part 2): requests

The get method of requests:

import requests

r = requests.get('https://www.baidu.com/')
print(type(r))
print(r.status_code)
print(type(r.text))
print(r.text)
print(r.cookies)

This prints, in order: the type of the Response object, the status code, the type of the response body, the body itself, and the cookies.
Other request types:

r = requests.post('http://httpbin.org/post')
r = requests.put('http://httpbin.org/put')
r = requests.delete('http://httpbin.org/delete')
r = requests.head('http://httpbin.org/get')
r = requests.options('http://httpbin.org/get')

GET requests:
Building a GET request to fetch data:

import requests

r = requests.get('http://httpbin.org/get')
print(r.text)

Parameters can be appended to the URL by hand (r = requests.get('http://httpbin.org/get?name=germey&age=22')).

A cleaner way to construct the link is the params argument:

import requests

data = {
    'name': 'germey',
    'age': 22
}
r = requests.get("http://httpbin.org/get", params=data)
print(r.text)

Calling the json() method deserializes the JSON response body directly into a Python dict; if the body is not valid JSON, json() raises an exception.

import requests

r = requests.get("http://httpbin.org/get")
print(type(r.text))
print(r.json())
print(type(r.json()))

Scraping a web page:
Taking Zhihu's Explore page as an example, with a regular expression used to extract the question titles.
A browser-like User-Agent request header is added here as camouflage.

import requests
import re

headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}
r = requests.get("https://www.zhihu.com/explore", headers=headers)
pattern = re.compile('explore-feed.*?question_link.*?>(.*?)</a>', re.S)  # capture the text inside each question link
titles = re.findall(pattern, r.text)
print(titles)

Fetching binary data:
Images, audio, and video are all binary.
The example below shows the different results of reading the same response as text and as raw bytes:

import requests

r = requests.get("https://gratisography.com/thumbnails/gratisography-booze-life-thumbnail-small.jpg")
print(r.text)     # decoded as a string: garbled for binary data
print(r.content)  # the raw bytes

Saving the data:
The image is written to the same directory as the script:

import requests

r = requests.get("https://gratisography.com/thumbnails/gratisography-booze-life-thumbnail-small.jpg")
with open('favicon.jpg', 'wb') as f:
    f.write(r.content)
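
For large files, reading the whole body via r.content holds everything in memory at once. A minimal sketch of a streamed download using stream=True and iter_content (the chunk size and filename are arbitrary choices):

import requests

r = requests.get("https://gratisography.com/thumbnails/gratisography-booze-life-thumbnail-small.jpg", stream=True)
with open('favicon.jpg', 'wb') as f:
    for chunk in r.iter_content(chunk_size=8192):  # read the body 8 KB at a time
        f.write(chunk)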

POST requests:
Another common way to send data:

import requests

data = {'name': 'germey', 'age': '22'}
r = requests.post("http://httpbin.org/post", data=data)
print(r.text)

Responses:
Accessing the various attributes of a response:

import requests

r = requests.get('http://www.jianshu.com')
print(type(r.status_code), r.status_code)
print(type(r.headers), r.headers)
print(type(r.cookies), r.cookies)
print(type(r.url), r.url)
print(type(r.history), r.history)

These print, respectively: the status code, the response headers, the cookies, the final URL, and the request history.
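
The history attribute is most useful when redirects occur. A minimal sketch, assuming httpbin.org is reachable:

import requests

r = requests.get('http://httpbin.org/redirect/1')
print(r.history)      # list of the intermediate Response objects
print(r.url)          # the final URL after following the redirect

r = requests.get('http://httpbin.org/redirect/1', allow_redirects=False)
print(r.status_code)  # 302: the redirect was not followed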

Checking success against the built-in status codes:

import requests

r = requests.get('http://www.jianshu.com')
if r.status_code != requests.codes.ok:
    exit()
print('Request Successful')

Comparing the returned code against the built-in success code confirms that the response is normal.
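
An alternative is raise_for_status(), which raises an HTTPError for any 4xx/5xx response — a minimal sketch:

import requests

r = requests.get('http://www.jianshu.com')
try:
    r.raise_for_status()  # raises requests.exceptions.HTTPError on 4xx/5xx
    print('Request Successful')
except requests.exceptions.HTTPError as e:
    print('Request failed:', e)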

Advanced usage:

File upload:
requests can simulate submitting form data, files included:

import requests

# open the file inside a context manager so the handle is closed after the upload
with open('favicon.jpg', 'rb') as f:
    files = {'file': f}
    r = requests.post('http://httpbin.org/post', files=files)
print(r.text)

Cookies:
Getting and setting cookies each take only one step:

import requests

r = requests.get('https://www.baidu.com')
print(r.cookies)
for key, value in r.cookies.items():
    print(key + '=' + value)

An example of staying logged in (set the Cookie header to the value captured from your own session):

import requests

headers = {
    'Cookie': 'q_c1=31653b264a074fc9a57816d1ea93ed8b|1474273938000|1474273938000; d_c0="AGDAs254kAqPTr6NW1U3XTLFzKhMPQ6H_nc=|1474273938"; __utmv=51854390.100-1|2=registration_date=20130902=1^3=entry_date=20130902=1;a_t="2.0AACAfbwdAAAXAAAAso0QWAAAgH28HQAAAGDAs254kAoXAAAAYQJVTQ4FCVgA360us8BAklzLYNEHUd6kmHtRQX5a6hiZxKCynnycerLQ3gIkoJLOCQ==";z_c0=Mi4wQUFDQWZid2RBQUFBWU1DemJuaVFDaGNBQUFCaEFsVk5EZ1VKV0FEZnJTNnp3RUNTWE10ZzBRZFIzcVNZZTFGQmZn|1474887858|64b4d4234a21de774c42c837fe0b672fdb5763b0',
    'Host': 'www.zhihu.com',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36',
}
r = requests.get('https://www.zhihu.com', headers=headers)
print(r.text)

Another option is to pass the cookies through the cookies parameter, which takes a RequestsCookieJar; a minimal sketch follows (the cookie string is a placeholder for your own):
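
import requests

cookies = 'q_c1=xxx; d_c0=xxx'  # placeholder: paste the Cookie header value from your own session
jar = requests.cookies.RequestsCookieJar()
for cookie in cookies.split(';'):
    key, value = cookie.split('=', 1)
    jar.set(key.strip(), value)
r = requests.get('https://www.zhihu.com', cookies=jar,
                 headers={'User-Agent': 'Mozilla/5.0'})
print(r.status_code)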

Session persistence:
With a Session object, cookies set by one request are carried by subsequent requests automatically:

import requests

s = requests.Session()
s.get('http://httpbin.org/cookies/set/number/123456789')  # sets a cookie on the session
r = s.get('http://httpbin.org/cookies')  # the cookie is sent back automatically
print(r.text)
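
For contrast, two independent requests.get() calls share no cookies, so the second request below comes back empty — a minimal sketch:

import requests

# the cookie set by the first request is not carried by the second
requests.get('http://httpbin.org/cookies/set/number/123456789')
r = requests.get('http://httpbin.org/cookies')
print(r.text)  # {"cookies": {}}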

SSL certificate verification:
Note: 12306's certificate has since passed verification, so the request below should now succeed; it is kept here as a demo.

import requests

response = requests.get('https://www.12306.cn')
print(response.status_code)

If verification fails, it can be skipped with verify=False, and the resulting warning suppressed:

import requests
from requests.packages import urllib3

urllib3.disable_warnings()
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)
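
requests can also present a local client certificate via the cert parameter — a minimal sketch, assuming the certificate and key files exist (the paths are placeholders, and the private key must be unencrypted):

import requests

# placeholder paths; replace with your real certificate and key files
response = requests.get('https://www.12306.cn',
                        cert=('/path/server.crt', '/path/server.key'))
print(response.status_code)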

Proxy settings:
Large-scale scraping can get the client IP banned, which proxies help avoid (the proxies below are placeholders and will not actually work):

import requests

proxies = {
  'http': 'http://10.10.1.10:3128',
  'https': 'http://10.10.1.10:1080',
}

requests.get('https://www.taobao.com', proxies=proxies)

When the proxy requires HTTP Basic Auth:

import requests

proxies = {
    'https': 'http://user:[email protected]:3128/',
}
requests.get('https://www.taobao.com', proxies=proxies)

SOCKS proxies are also supported (install the extra dependency with pip install requests[socks]):

import requests

proxies = {
    'http': 'socks5://user:password@host:port',
    'https': 'socks5://user:password@host:port'
}
requests.get('https://www.taobao.com', proxies=proxies)

Timeout settings:
To avoid blocking forever when the server fails to respond promptly:

import requests

r = requests.get('https://www.taobao.com', timeout=1)
print(r.status_code)

With timeout=1, an exception is raised if no response arrives within one second.
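
The timeout can also be split into separate connect and read phases by passing a tuple:

import requests

# 5 seconds to establish the connection, 30 seconds to read the response
r = requests.get('https://www.taobao.com', timeout=(5, 30))
print(r.status_code)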

Setting a deliberately tiny timeout makes the request fail almost immediately:

import requests

r = requests.get("https://www.taobao.com", timeout = 0.01)

print(r.status_code)

To wait for the server indefinitely, set timeout to None (or simply omit the parameter).
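
In practice a timeout is usually paired with exception handling — a minimal sketch catching requests.exceptions.Timeout:

import requests

try:
    r = requests.get('https://www.taobao.com', timeout=0.01)
    print(r.status_code)
except requests.exceptions.Timeout:
    print('The request timed out')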

Authentication:

import requests
from requests.auth import HTTPBasicAuth

r = requests.get("http://localhost:5000", auth=HTTPBasicAuth('username', 'password'))

print(r.status_code)

If authentication succeeds, a 200 status code is returned; if it fails, the code is 401.
More simply, pass a tuple and requests wraps it in HTTPBasicAuth for you:

import requests

r = requests.get("http://localhost:5000", auth=('username', 'password'))

print(r.status_code)
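
requests also ships an HTTPDigestAuth class for digest authentication — a minimal sketch, assuming the local server at port 5000 supports it:

import requests
from requests.auth import HTTPDigestAuth

r = requests.get('http://localhost:5000', auth=HTTPDigestAuth('username', 'password'))
print(r.status_code)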

OAuth authentication (requires the requests_oauthlib package, installed with pip install requests_oauthlib):

import requests
from requests_oauthlib import OAuth1


url = 'https://api.twitter.com/1.1/account/verify_credentials.json'
auth = OAuth1('YOUR_APP_KEY', 'YOUR_APP_SECRET',
              'USER_OAUTH_TOKEN', 'USER_OAUTH_TOKEN_SECRET')
requests.get(url, auth=auth)

Prepared Requests:
Representing a request as a data structure:

from requests import Request, Session

url = 'http://httpbin.org/post'
data = {
    'name': 'germey'
}
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5'
}

s = Session()
req = Request('POST', url, data=data, headers=headers)
prepped = s.prepare_request(req)
r = s.send(prepped)
print(r.text)

The url, data, and headers parameters are assembled into a Request object, which is then converted into a PreparedRequest and sent with Session.send().
This pattern is common in queue scheduling, where requests are built first and dispatched later, as sketched below.
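
A minimal sketch of that queue-style usage, building several Request objects up front and sending them afterwards (the URLs and parameters are arbitrary):

from requests import Request, Session

s = Session()
# build the requests first, dispatch them later
queue = [Request('GET', 'http://httpbin.org/get', params={'page': i})
         for i in range(3)]
for req in queue:
    r = s.send(s.prepare_request(req))
    print(r.status_code)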

Reprinted from blog.csdn.net/qq_40258748/article/details/89439710