Crawler CASE01: anti-scraping strategy of using a random User-Agent to simulate a browser when fetching web pages

Key points:

  1. Get familiar with the crawling approach: fetching a web page with the request module from urllib
  2. Implement a basic counter to anti-scraping checks by setting the User-Agent header to simulate a browser
  3. Pick one User-Agent at random from a User-Agent pool: a neat use of random.choice(seq) (see the sketch right after this list)
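
Since point 3 is the core trick here, below is a minimal sketch of random.choice on its own; the pool entries are placeholder strings standing in for real User-Agent values:

import random

# random.choice(seq) draws one element uniformly from a non-empty
# sequence and raises IndexError if the sequence is empty.
pool = ['UA-1', 'UA-2', 'UA-3']    # placeholder stand-ins for real UA strings
print(random.choice(pool))         # e.g. 'UA-2'; varies from run to run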

Points still to be covered:

  1. Exception catching and handling (a sketch follows below)
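
As a starting point for that gap, here is a minimal sketch of how the download function below could wrap its urlopen call in try/except; HTTPError and URLError come from urllib.error, while the retry count and 10-second timeout are illustrative assumptions, not part of the original script:

from urllib import request, error

def download_safe(url, headers, retries=2):
    # 'retries' and the timeout below are illustrative choices.
    for attempt in range(retries + 1):
        try:
            req = request.Request(url, headers=headers)
            return request.urlopen(req, timeout=10).read().decode('utf-8')
        except error.HTTPError as e:
            # The server answered with an error status, e.g. 403 from an anti-scraping check.
            print('HTTP error %d on attempt %d' % (e.code, attempt + 1))
        except error.URLError as e:
            # Network-level failure: DNS lookup, refused connection, timeout, ...
            print('URL error on attempt %d: %s' % (attempt + 1, e.reason))
    return None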

Points to explore further:

  1. Master fetching web pages with Python 3's third-party requests module
  2. Use of multiprocessing / multithreading / coroutines (a combined sketch follows this list)
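
To give a taste of both extension points at once, here is a minimal sketch that pairs the third-party requests library with a thread pool from concurrent.futures; the Douban URL matches the script below, while the 5-second timeout and the worker count of 4 are assumed values:

import random
from concurrent.futures import ThreadPoolExecutor

import requests    # third-party: pip install requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]

def fetch(url):
    # requests merges this dict into its default headers, so only User-Agent is overridden.
    headers = {'User-Agent': random.choice(USER_AGENTS)}
    resp = requests.get(url, headers=headers, timeout=5)    # assumed timeout
    return resp.status_code

if __name__ == '__main__':
    urls = ['https://www.douban.com/'] * 10
    # Four workers is an arbitrary choice; threads suit I/O-bound crawling like this.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for status in pool.map(fetch, urls):
            print(status)
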
# -*- coding: utf-8 -*-
"""
Created on Thu Jul 12 19:56:21 2018

@author: Administrator
"""

# Goal: build a User-Agent pool from a User-Agent list found via Baidu, then send 10 requests
# to the Douban homepage, drawing the User-Agent for each request at random from the pool.
# NOTES:
# The random module:
# random.choice():
#   choice(seq) method of random.Random instance:
#   Choose a random element from a non-empty sequence.

from urllib import request
import random
import time

# Build the User-Agent pool
def UAPool():
    userAgents = [
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
            "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
            "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
            "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
            "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
            "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
            "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
            "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",]
    return userAgents

# Build a function that picks one User-Agent at random from the pool
def headersRandom():
    lists = UAPool()
    headers = {'User-Agent': random.choice(lists)}      # headers is a dict
    return headers

# Build a function that prints the given headers
def printHeaders(headers):
    print(headers)
    return

# Build the page-fetching function using the request module from urllib
def download(url):
    headers = headersRandom()
    req = request.Request(url, headers=headers)
    response = request.urlopen(req).read().decode('utf-8')
    return response, headers    # also return the headers so the caller can report the UA actually sent

# Main entry point
if __name__ == '__main__':
    # Number of crawl iterations
    times = 10
    # Initialize the loop counter
    i = 1
    # while loop: fetch the page with a randomly chosen User-Agent each time, then print the UA that was used
    while i <= times:
        # Target URL
        url = 'https://www.douban.com/'
        # Fetch the page; download() also returns the headers it actually sent
        html, headers = download(url)
        # Print the User-Agent used on this run
        print('UA used on run %d:' % i)
        printHeaders(headers)
        i += 1
        # Pause for a random interval between requests
        clock0 = time.perf_counter()    # time.clock() was removed in Python 3.8
        time.sleep(random.random())
        clock = time.perf_counter() - clock0
        print('Pause duration:', clock)
        print('\n')


Reposted from blog.csdn.net/weixin_40040404/article/details/81022295