Python crawlers and untrusted SSL certificates

The solution for requests

I have written about untrusted certificates on my blog before (blog address).

But that write-up was not complete: if you only make the plain request, it will still raise an error.
This is because requests enforces SSL certificate verification by default, so the following statements must be added:

import requests
import urllib3

urllib3.disable_warnings()
reqs = requests.get(url=root_url, headers=headers, verify=False)

It also works without importing urllib3 directly, by reaching it through the requests package:

requests.packages.urllib3.disable_warnings()
reqs = requests.get(url=root_url, headers=headers, verify=False)
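The two snippets above can be combined into one small helper. This is a minimal sketch: `fetch_insecure` is a hypothetical name, and `root_url`/`headers` from the snippets above are assumed to be defined by the caller.

```python
import requests
import urllib3

# Suppress the InsecureRequestWarning that verify=False would otherwise emit
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def fetch_insecure(url, headers=None, timeout=10):
    """Fetch a URL without validating its TLS certificate.

    Only do this for hosts you already trust: skipping verification
    exposes the request to man-in-the-middle attacks.
    """
    return requests.get(url, headers=headers, verify=False, timeout=timeout)
```

Usage is then `reqs = fetch_insecure(root_url, headers=headers)`, the same call as above but with the warning suppressed in one place.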

The solution for urllib

This is the same as in the previous post:

from urllib import request
import re
import os
import ssl
context = ssl._create_unverified_context()
# ... code omitted ...
b = request.urlopen(url, timeout=tolerate, context=context).read().decode('gb2312', 'ignore')
# ... code omitted ...
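The omitted parts can be filled in as a self-contained sketch. The context returned by `ssl._create_unverified_context()` disables both hostname checking and certificate verification; `read_page` is a hypothetical wrapper name, and the `gb2312` decoding follows the snippet above.

```python
import ssl
from urllib import request

# A context that skips certificate verification entirely:
# check_hostname is False and verify_mode is ssl.CERT_NONE
context = ssl._create_unverified_context()

def read_page(url, timeout=10, encoding='gb2312'):
    """Download url while ignoring certificate errors, decoding leniently."""
    with request.urlopen(url, timeout=timeout, context=context) as resp:
        return resp.read().decode(encoding, 'ignore')
```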

Or like this

from urllib import request
import re
import os
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# ... code omitted ...


Origin blog.csdn.net/FUTEROX/article/details/108230623