Python Web Scraping: Request Headers

Notes:

  1. The headers must be copied from your own browser for the site you are crawling (screenshot below).
  2. The page only opened normally if response.status_code is 200.
  3. Put requests.get() inside try/except; otherwise an occasional exception will go uncaught.
  4. When except Exception catches an error, have a recovery strategy ready (see the sketch after the example below).
  5. Set the utf-8 encoding (or whatever encoding the page uses) explicitly (also covered in the sketch below).
  6. requests library reference: https://www.cnblogs.com/mzc1997/p/7813801.html
# -*- coding: utf-8 -*-
import requests
import time


def main():
    url = 'http://www.baidu.com/s?'

    # Request headers: copy these values from your own browser (the Cookie below will have expired)
    header = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
        'Connection': 'keep-alive',
        'Host': 'www.baidu.com',
        'Cookie': 'BAIDUID=D56765923DE7AD2EFD370BF413FC3356:FG=1; BIDUPSID=D56765923DE7AD2EFD370BF413FC3356; PSTM=1556119827; BD_UPN=12314753; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; delPer=0; BD_CK_SAM=1; BDUSS=dqNWpGbkZhM2l0YVA0Rzd6RW5VZmRBY2hCZmZ0NnY1NGdRdi1DSS05dlctM2xkRUFBQUFBJCQAAAAAAAAAAAEAAABrUKOiQWlyYm95b25lAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAANZuUl3WblJdLT; BD_HOME=1; BDRCVFR[feWj1Vr5u3D]=I67x6TjHwwYf0; COOKIE_SESSION=2019_0_7_3_8_12_0_2_3_4_0_3_2217_0_197_0_1565683539_0_1565683342%7C9%23578981_63_1565144212%7C8; BDRCVFR[4r8LXJfwh-6]=I67x6TjHwwYf0; PSINO=1; H_PS_PSSID=; H_PS_645EC=2371mNE8T079tH0eyK1r%2FzADsUxkfOEI%2FLdu0iBAFODL1SzVm0GbTpS3lTqohuJ6g4Irz%2BwQRgpn',
        # 'Referer': '',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
    }

    # Query string parameters: search for "邓超"
    data = {'wd': '邓超'}

    html = None
    try:
        # Send the request with the custom headers
        response = requests.get(url, params=data, headers=header)
        # Read the response body only if the page opened normally (note 2)
        if response.status_code == 200:
            html = response.text
    except Exception:
        print('error')
        time.sleep(5)
        return

    # Print the page source if the request succeeded
    if html is not None:
        print(html)


main()
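
Notes 4 and 5 ask for a recovery strategy on failure and an explicit encoding, which the example above leaves to requests' defaults. Below is a minimal sketch of both, assuming a retry count of 3 and a utf-8 fallback; the fetch() helper and its parameters are illustrative choices, not part of the original example.

# -*- coding: utf-8 -*-
import requests
import time


def fetch(url, params=None, headers=None, retries=3):
    """Retry a GET request a few times and decode the body explicitly."""
    for attempt in range(retries):
        try:
            response = requests.get(url, params=params, headers=headers, timeout=10)
            if response.status_code == 200:
                # Prefer the encoding requests detects from the page; fall back to utf-8 (note 5)
                response.encoding = response.apparent_encoding or 'utf-8'
                return response.text
        except Exception as e:
            # Recovery strategy (note 4): report, wait, and try again instead of giving up
            print('request failed: {}, retrying...'.format(e))
            time.sleep(5)
    return None
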

Screenshot: the request-header values as shown in the browser (referenced in note 1; image not reproduced here).

