Scrapy: Writing a Download Middleware for Random User-Agents

Implementation steps:

1. In middlewares.py, create a new download middleware class;

2. Implement the process_request method (called when the engine sends a request object to the downloader) to generate a random User-Agent;

3. Register the new download middleware in settings.py.

The middleware code for generating a random User-Agent is as follows:

# middlewares.py
import random

class RandomUserAgentDownloaderMiddleware(object):
    """Download middleware that sets a random User-Agent."""
    def process_request(self, request, spider):
        # Build a plausible Chrome version string: major.0.build.patch
        first_num = random.randint(55, 62)
        third_num = random.randint(0, 3200)
        fourth_num = random.randint(0, 140)
        os_type = [
            '(Windows NT 6.1; WOW64)', '(Windows NT 10.0; WOW64)', '(X11; Linux x86_64)',
            '(Macintosh; Intel Mac OS X 10_12_6)'
        ]
        chrome_version = 'Chrome/{}.0.{}.{}'.format(first_num, third_num, fourth_num)

        user_agent = ' '.join(['Mozilla/5.0', random.choice(os_type), 'AppleWebKit/537.36',
                               '(KHTML, like Gecko)', chrome_version, 'Safari/537.36'])
        # Set a random User-Agent on every request
        request.headers['User-Agent'] = user_agent

        return None    # Returning None lets the request continue through the middleware chain
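Step 3 requires registering the middleware in settings.py, which the steps above mention but do not show. A minimal sketch, assuming the project package is named `myproject` (replace with your actual project name):

```python
# settings.py
DOWNLOADER_MIDDLEWARES = {
    # Enable the custom middleware; 'myproject' is a placeholder for
    # your actual Scrapy project package name.
    'myproject.middlewares.RandomUserAgentDownloaderMiddleware': 400,
    # Optionally disable Scrapy's built-in User-Agent middleware so it
    # cannot overwrite the header set by the custom middleware.
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
```

The priority number (400 here) controls ordering: lower values run closer to the engine. Any value works for this middleware as long as it runs before the request reaches the downloader.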


Reposted from blog.csdn.net/Refrain__WG/article/details/82346931