Using requests for Python 3 web scraping

GET request

import requests
from fake_useragent import UserAgent

headers = {
    'User-Agent': UserAgent().chrome
}

url = 'http://www.xxx.com/s'
params = {
    'wd': 'python'
}
response = requests.get(url, headers=headers, params=params)
print(response.text)
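requests percent-encodes the `params` dict and appends it to the URL as a query string. A minimal stdlib sketch of that encoding (the `www.xxx.com` URL is the placeholder from above, not a real site):

```python
from urllib.parse import urlencode

# The same params dict as above; requests appends it to the URL
# as a percent-encoded query string.
params = {'wd': 'python'}
query = urlencode(params)
url = 'http://www.xxx.com/s' + '?' + query
print(url)  # http://www.xxx.com/s?wd=python
```

The final URL is also available after the request as `response.url`.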

POST request

data = {
    'usr': '123',
    'pwd': '123456'
}
response = requests.post(url, headers=headers, data=data)
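Passing `data=` sends a form-encoded body (`application/x-www-form-urlencoded`); requests also accepts `json=` for a JSON body, which is not shown above. A stdlib sketch of what the two bodies would look like:

```python
import json
from urllib.parse import urlencode

data = {'usr': '123', 'pwd': '123456'}

# Body produced by data=... (form-encoded):
form_body = urlencode(data)
print(form_body)  # usr=123&pwd=123456

# Body produced by json=... (JSON-encoded):
json_body = json.dumps(data)
print(json_body)  # {"usr": "123", "pwd": "123456"}
```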

Proxy

proxies = {
    'http': 'http://usr:pwd@ip:port'
}
response = requests.get(url, headers=headers, proxies=proxies)
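The proxy URL follows the `scheme://user:password@host:port` form. The pieces can be checked with `urllib.parse.urlsplit`; the credentials and address below are placeholders standing in for the `usr:pwd@ip:port` fields above:

```python
from urllib.parse import urlsplit

# Placeholder proxy URL in the usr:pwd@ip:port form used above.
proxy = urlsplit('http://usr:pwd@127.0.0.1:8080')
print(proxy.username, proxy.password)  # usr pwd
print(proxy.hostname, proxy.port)     # 127.0.0.1 8080
```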

HTTPS access

headers = {
    'User-Agent': UserAgent().chrome
}

url = 'https://www.xxx.com'
# Suppress the InsecureRequestWarning raised when verify=False
requests.packages.urllib3.disable_warnings()
response = requests.get(url, verify=False, headers=headers)

Cookie

session = requests.Session()
response = session.post(url, headers=headers, data=data)
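A `Session` stores cookies set by one response and sends them back on later requests, which is why logging in through a session keeps you logged in. The server side of that handshake is a `Set-Cookie` header; the stdlib can parse one (the header value below is a hypothetical example, not from the source):

```python
from http.cookies import SimpleCookie

# A hypothetical Set-Cookie header a server might return after login;
# a requests.Session would store this value and replay it automatically.
cookie = SimpleCookie()
cookie.load('sessionid=abc123; Path=/; HttpOnly')
print(cookie['sessionid'].value)  # abc123
```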

Response attributes

# body as a decoded string
resp.text
# body as raw bytes
resp.content
# body parsed as JSON
resp.json()
# response headers
resp.headers
# final URL of the request
resp.url
# encoding used to decode resp.text
resp.encoding
# request headers that were sent
resp.request.headers
# cookies set by the server
resp.cookies
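The relationship between the first three attributes: `resp.content` is the raw bytes, and `resp.text` is those bytes decoded with `resp.encoding`. A minimal sketch simulating that with a UTF-8 byte string (no network involved):

```python
# resp.content holds raw bytes; resp.text is content decoded
# with resp.encoding. Simulated here with local values.
content = '爬虫'.encode('utf-8')  # what resp.content would hold
encoding = 'utf-8'                # what resp.encoding would report
text = content.decode(encoding)   # what resp.text would return
print(text)  # 爬虫
```

When a page comes back garbled, setting `resp.encoding` to the correct charset before reading `resp.text` is often the fix.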


Reprinted from blog.csdn.net/kkLeung/article/details/105424399