Python crawler learning (2) -- the requests module

requests module

1. Installation:

1. pip installation: enter the following on the command line:

pip install requests

2. Wheel installation:

Download the corresponding wheel file from: https://pypi.python.org/pypi/requests/version#downloads

Then change to the wheel file's directory on the command line and install it with pip:

pip install <wheel filename>

2. Basic usage

(1) GET request method:


import requests

r = requests.get(url='', params={})  # params takes a dict of query parameters
print(r.text)

Let's make a request to Baidu URL:


import requests

r = requests.get(url='https://www.baidu.com')

print(type(r))        # view the response type
print(r.status_code)  # print the status code
print(r.cookies)      # return the cookies

Print result:


<class 'requests.models.Response'>
200
<RequestsCookieJar[<Cookie BDORZ=27315 for .baidu.com/>]>

Passing extra information in a GET request:

Set the params parameter and pass in a dictionary:

import requests

data = {
    'name': 'tom',
    'age': '22'
}

r = requests.get(url='https://httpbin.org/get', params=data)

print(r.text)

The results are as follows:

{
  "args": {
    "age": "22", 
    "name": "tom"
  }, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.25.0", 
    "X-Amzn-Trace-Id": "Root=1-5ff01765-639962aa3f8c52747fa309f5"
  }, 
  "origin": "111.18.93.192", 
  "url": "https://httpbin.org/get?name=tom&age=22"
}

At this point the URL has been built as https://httpbin.org/get?name=tom&age=22
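This URL construction can be checked offline by preparing the request without sending it (a sketch; preparing a Request is enough to see the final URL):

```python
import requests

# A sketch of how requests encodes the params dict into the query string.
# No request is sent: preparing the request already builds the final URL.
data = {'name': 'tom', 'age': '22'}
req = requests.Request('GET', 'https://httpbin.org/get', params=data)
prepared = req.prepare()
print(prepared.url)  # https://httpbin.org/get?name=tom&age=22
```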

Use GET to download the GitHub site icon (favicon):

import requests

r = requests.get(url='https://github.com/favicon.ico' )
with open('favicon.ico', 'wb') as f:  # save the result as favicon.ico
    f.write(r.content)

(2) POST request:

In a POST request, use the data parameter to pass form fields.

import requests

data = {
    'name': 'tom',
    'age': '22'
}
r = requests.post('https://httpbin.org/post', data = data)
print(r.text)
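The form encoding that data= produces can also be inspected offline by preparing the request (a sketch; nothing is actually sent):

```python
import requests

# Prepare (but do not send) the POST to inspect how data= is encoded.
data = {'name': 'tom', 'age': '22'}
req = requests.Request('POST', 'https://httpbin.org/post', data=data).prepare()
print(req.body)                     # name=tom&age=22
print(req.headers['Content-Type'])  # application/x-www-form-urlencoded
```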

3. Advanced usage

(1) File upload:

The file to upload must be in the same directory as the script (or be referenced by its full path).


import requests

files = {
    'file': open('favicon.ico', 'rb')
}
r = requests.post('https://httpbin.org/post',files=files)
print(r.text)

(2)Cookies

Use requests to get cookies:


import requests

r = requests.get(url='https://www.baidu.com' )
print(r.cookies)
for key, value in r.cookies.items():
    print(key + '=' + value)

First access the cookies attribute to get the Cookies, then use the items() method to convert them into a list of tuples, and traverse and print each cookie.
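The same items() traversal can be tried offline by filling a RequestsCookieJar by hand (a sketch; the cookie name and value below are taken from the Baidu output above, but any values work):

```python
import requests

# Build a cookie jar manually and traverse it the same way as the
# cookies attribute of a real response.
jar = requests.cookies.RequestsCookieJar()
jar.set('BDORZ', '27315', domain='.baidu.com', path='/')
for key, value in jar.items():
    print(key + '=' + value)  # BDORZ=27315
```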

(3) Session maintenance:

Session object:

In requests, calling get or post directly actually produces a different session for each request, which is equivalent to opening two pages in two different browsers.

A Session object conveniently maintains one session across requests without handling cookies manually.

Examples are as follows:


import requests

requests.get(url='https://httpbin.org/cookies/set/number/123465789' )
r = requests.get(url='https://httpbin.org/cookies' )
print(r.text)

Make a request to a test site to set a cookie named number with the value 123465789, then request the cookies URL to try to read it back.

The run shows that no cookies were captured:

{
  "cookies": {}
}

Try it with Session:


import requests

s = requests.Session()
s.get(url='https://httpbin.org/cookies/set/number/123465789' )
r = s.get(url='https://httpbin.org/cookies' )
print(r.text)

This time the run successfully obtains the cookie:

{
  "cookies": {
    "number": "123465789"
  }
}
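Besides cookies, a Session also carries default headers across requests. The effect can be checked offline by preparing a request through the session (a sketch; the User-Agent string is made up):

```python
import requests

# Session-level defaults are merged into every request prepared through it.
s = requests.Session()
s.headers.update({'User-Agent': 'my-crawler/0.1'})  # hypothetical UA string

req = requests.Request('GET', 'https://httpbin.org/get')
prepped = s.prepare_request(req)
print(prepped.headers['User-Agent'])  # my-crawler/0.1
```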

(4) SSL certificate verification:

When requesting an https site, an SSLError may occur if certificate verification fails. This can be avoided by setting the verify parameter of requests.get() to False.

requests.get(url='', verify=False)

(5) Proxy settings: proxies parameters

import requests

proxies = {
    'http': 'http://proxy-url-1',   # placeholder proxy address
    'https': 'http://proxy-url-2'   # placeholder proxy address
}
requests.get('url', proxies=proxies)

(6) Authentication: the auth parameter

Some sites require authentication when visited.
In that case you can use the authentication support built into requests:

import requests
from requests.auth import HTTPBasicAuth

r = requests.get('url', auth=HTTPBasicAuth('username', 'password'))
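The Authorization header that HTTPBasicAuth attaches can be inspected offline by preparing the request (a sketch; the credentials and URL are placeholders):

```python
import requests
from requests.auth import HTTPBasicAuth

# Preparing (not sending) the request shows the Basic auth header.
# 'username' / 'password' are placeholder credentials.
req = requests.Request('GET', 'https://httpbin.org/basic-auth/username/password',
                       auth=HTTPBasicAuth('username', 'password')).prepare()
print(req.headers['Authorization'])  # Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```

requests also accepts a plain ('username', 'password') tuple for auth, which it wraps in HTTPBasicAuth internally.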

Reference and recommended reading: https://cuiqingcai.com/5052.html


Origin blog.csdn.net/qq_45742511/article/details/112095845