Python crawler 5 - the requests library

Unlike urllib, the requests library has to be installed separately; in return it makes handling Cookies, login authentication, proxy settings, and so on much more convenient.
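Installation is a one-line command (assuming pip is available):

pip install requests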

Where urllib uses urlopen() to request a page (essentially a GET request), requests uses get(); in addition it provides post(), put(), delete(), and so on to issue POST, PUT, DELETE, and other requests.

1 Common usage

1.1 GET request

 

 

 

If the page returns JSON, you can call the json() method to convert the response body into a dictionary.
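A minimal sketch of a GET request, using the httpbin.org test service as a stand-in URL:

import requests

# query parameters are passed via params and appended to the URL
data = {'key1': 'value1', 'key2': 'value2'}
res = requests.get('https://httpbin.org/get', params=data)
print(res.text)    # raw response body as text
print(res.json())  # httpbin returns JSON, so this parses it into a dict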

1.2 POST request
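A corresponding POST sketch, again with httpbin.org standing in for a real target:

import requests

# form data goes in the data parameter and is sent in the request body
data = {'key1': 'value1', 'key2': 'value2'}
res = requests.post('https://httpbin.org/post', data=data)
print(res.text)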

 

1.3 Properties

Attributes of the response object returned by requests:

  • text: response body as text;
  • content: response body as raw bytes;
  • status_code: status code;
  • headers: response headers;
  • cookies: Cookies information;
  • url: the requested URL;
  • history: request history (the responses from any redirects)
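A short sketch that reads these attributes (httpbin.org is just an example target):

import requests

res = requests.get('https://httpbin.org/get')
print(res.status_code)  # e.g. 200
print(res.headers)      # response headers as a dict-like object
print(res.cookies)      # a RequestsCookieJar
print(res.url)          # the final URL, after any redirects
print(res.history)      # list of intermediate redirect responses
print(res.text[:100])   # first 100 characters of the body text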

2 Advanced Usage

2.1 File upload

Code:

import requests

url = 'https://httpbin.org/post'  # example endpoint; replace with the real upload URL
# open the file in binary mode; requests builds a multipart/form-data body from it
files = {'file': open('file name', 'rb')}
res = requests.post(url, files=files)

2.2 Cookies

After logging in, you can copy the Cookie value out of the request headers captured in the browser (or a packet-capture tool) and put it into the headers of your own requests to maintain the login state.
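A sketch of carrying a copied Cookie in the request headers (the Cookie string below is a placeholder):

import requests

headers = {
    'Cookie': '...',  # paste the value copied from the logged-in session here
    'User-Agent': 'Mozilla/5.0',
}
res = requests.get('https://httpbin.org/cookies', headers=headers)
print(res.text)  # httpbin echoes back the cookies it received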

2.3 Session maintenance

With a Session object, successive requests share the same session and Cookies are carried over automatically; it is typically used to simulate the operations that follow a successful login.
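A small sketch using httpbin.org to show that a Session keeps its cookies between requests:

import requests

s = requests.Session()
# the first request asks the server to set a cookie...
s.get('https://httpbin.org/cookies/set/sessioncookie/123456789')
# ...and the session sends it back automatically on the next request
res = s.get('https://httpbin.org/cookies')
print(res.text)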

2.4 SSL certificate validation

You can pass a verify parameter with the request to control certificate verification; it defaults to True.
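A sketch of disabling verification for a site with a bad certificate (the URL here is hypothetical):

import requests
import urllib3

urllib3.disable_warnings()  # suppress the InsecureRequestWarning
# verify=False skips certificate checking, e.g. for a self-signed certificate
res = requests.get('https://self-signed.example.com/', verify=False)
print(res.status_code)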

2.5 Proxy Settings

Use the proxies parameter.
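A sketch with a hypothetical local proxy (replace the address with a real one):

import requests

proxies = {
    'http': 'http://127.0.0.1:10809',   # placeholder proxy address
    'https': 'http://127.0.0.1:10809',
}
res = requests.get('https://httpbin.org/get', proxies=proxies)
print(res.text)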

2.6 Timeout settings

Use the timeout parameter; the default is None, meaning wait indefinitely. A request is divided into two phases, connecting and reading, and the timeout you set is the total allowed for the two; to limit them separately, pass a (connect, read) tuple instead.
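A sketch against httpbin.org; a requests.exceptions.Timeout is raised if the limit is exceeded:

import requests

# allow at most 1 second for connecting and reading combined
res = requests.get('https://httpbin.org/get', timeout=1)
# or limit the connect and read phases separately with a tuple
# res = requests.get('https://httpbin.org/get', timeout=(5, 30))
print(res.status_code)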

2.7 Authentication

req = requests.get(url, auth=('username', 'userpass'))  # HTTP Basic authentication
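A runnable sketch against httpbin.org's Basic-auth test endpoint; the (username, password) tuple above is shorthand for HTTPBasicAuth:

import requests
from requests.auth import HTTPBasicAuth

res = requests.get('https://httpbin.org/basic-auth/user/passwd',
                   auth=HTTPBasicAuth('user', 'passwd'))
print(res.status_code)  # 200 when the credentials match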

 

 


Origin www.cnblogs.com/rong1111/p/12143007.html