Python Web Scraping: Fixing SSL Certificate Verification Errors

If the target site has not set up its HTTPS certificate properly, or its certificate is not issued by a CA that browsers trust, opening it in a browser will show an SSL certificate error.

Requesting such a site with the requests library raises an SSLError outright:

requests.exceptions.SSLError: HTTPSConnectionPool(host='ssr2.scrape.center', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1129)')))
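In a crawler you usually don't want this exception to kill the whole run, so it is worth catching explicitly. A minimal sketch against the same demo site:

import requests

# Catch the verification failure instead of letting it crash the crawler
try:
    r = requests.get('https://ssr2.scrape.center/')
    print(r.status_code)
except requests.exceptions.SSLError as e:
    print('SSL certificate verification failed:', e)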

In that case you can pass the verify parameter to turn certificate verification off (verify defaults to True):

import requests

# verify=False tells requests to skip certificate verification entirely
r = requests.get('https://ssr2.scrape.center/', verify=False)
print(r.status_code)

C:\Users\batman\AppData\Roaming\Python\Python39\site-packages\urllib3\connectionpool.py:1013: InsecureRequestWarning: Unverified HTTPS request is being made to host 'ssr2.scrape.center'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(

200
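Incidentally, if a crawler makes many requests to the same site, verify=False can be set once on a requests.Session rather than repeated on every call. A small sketch:

import requests

# Every request made through this session skips certificate verification
session = requests.Session()
session.verify = False

r = session.get('https://ssr2.scrape.center/')
print(r.status_code)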

However, every unverified request still comes with an ugly warning urging you to add certificate verification. If you would rather not see it, you have three options:

1. Disable the warnings directly

import requests
import urllib3

# disable_warnings() silences all urllib3 warnings, including the
# InsecureRequestWarning shown above (requests.packages.urllib3 is just
# an alias for urllib3, so importing urllib3 directly is cleaner)
urllib3.disable_warnings()
r = requests.get('https://ssr2.scrape.center/', verify=False)
print(r.status_code)
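disable_warnings() silences every urllib3 warning. If you prefer to hide only this one, the standard warnings module can filter just InsecureRequestWarning. A sketch:

import warnings

import requests
from urllib3.exceptions import InsecureRequestWarning

# Suppress only InsecureRequestWarning; other warnings stay visible
warnings.simplefilter('ignore', InsecureRequestWarning)

r = requests.get('https://ssr2.scrape.center/', verify=False)
print(r.status_code)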

2. Ignore the warning by capturing it into the logging system

import logging

import requests

# Redirect all warnings into the logging system so they no longer
# appear on stderr
logging.captureWarnings(True)
r = requests.get('https://ssr2.scrape.center/', verify=False)
print(r.status_code)
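captureWarnings(True) routes warnings to the logger named 'py.warnings', so instead of discarding them you can keep a record. A sketch that writes them to a file (the file name warnings.log is just an example):

import logging

import requests

# Send warnings to the logging system, then into a file instead of stderr
logging.captureWarnings(True)
logging.getLogger('py.warnings').addHandler(logging.FileHandler('warnings.log'))

r = requests.get('https://ssr2.scrape.center/', verify=False)
print(r.status_code)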

3. Alternatively, make verification succeed by pointing verify at the site's own certificate (or at a CA bundle containing it). Note that the separate cert parameter supplies a client-side certificate, either a single file or a tuple of two file paths (certificate and key); it does not affect server verification, so by itself it will not remove the warning.

import requests

# Verify the server against its own certificate file so verification
# passes and no warning appears (the path is a placeholder)
r = requests.get('https://ssr2.scrape.center/', verify='/path/server.crt')
print(r.status_code)
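If every request should trust the same certificate, requests also lets you set it once instead of per call: either on a Session, or through the REQUESTS_CA_BUNDLE environment variable that requests honors. A minimal sketch (the path is a placeholder):

import requests

# Set the CA bundle once on a Session; every request made through it
# is verified against '/path/server.crt'. Equivalently, export
# REQUESTS_CA_BUNDLE=/path/server.crt before running the script.
session = requests.Session()
session.verify = '/path/server.crt'

r = session.get('https://ssr2.scrape.center/')
print(r.status_code)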

<End>

Reposted from blog.csdn.net/weixin_58695100/article/details/123066138