Scraping Douban's Top 250 Movies with the Python Scrapy Framework, Part 1: Writing Proxies

Copyright notice: zhiyu, https://blog.csdn.net/ichglauben/article/details/82559426

Disguising the crawler:

Writing the UA middleware

A downloader middleware swaps a random User-Agent into each outgoing request, so every hit on Douban looks like a different browser instead of Scrapy's default client; the full middleware code is listed below.

Configuring settings
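The middleware only takes effect once it is registered in the project's settings.py. A minimal sketch, assuming the Scrapy project is named douban and the class below lives in douban/middlewares.py (module path and priority are assumptions, adjust to your project):

# settings.py -- enable the custom downloader middleware
DOWNLOADER_MIDDLEWARES = {
    'douban.middlewares.my_useragent': 543,
}

# Douban tends to reject Scrapy's default identification, so robots.txt
# checking is usually disabled for this crawl as well (also an assumption)
ROBOTSTXT_OBEY = False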

import random

class my_useragent(object):
    """Downloader middleware that sets a random User-Agent on each request."""

    # Pool of real browser User-Agent strings to rotate through
    USER_AGENT_LIST = [
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6726.400 QQBrowser/10.2.2265.400',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E; SE 2.X MetaSr 1.0)'
    ]

    def process_request(self, request, spider):
        # Pick a fresh UA and set the standard 'User-Agent' header
        # (hyphen, not underscore -- a 'User_Agent' key would be sent
        # as an unrecognized header and ignored by the server)
        request.headers['User-Agent'] = random.choice(self.USER_AGENT_LIST)
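
The title of this part promises proxy writing; a minimal sketch of a companion proxy middleware using Scrapy's request.meta['proxy'] hook, assuming a hypothetical authenticated HTTP proxy (the host, port, and credentials below are placeholders):

import base64

class my_proxy(object):
    """Downloader middleware that routes each request through an HTTP proxy."""

    def process_request(self, request, spider):
        # Placeholder endpoint -- substitute a real proxy server here
        request.meta['proxy'] = 'http://proxy.example.com:8000'
        # Basic auth for the proxy; embedding user:password in the proxy
        # URL itself is the other supported route
        credentials = base64.b64encode(b'user:password').decode('ascii')
        request.headers['Proxy-Authorization'] = 'Basic ' + credentials

Like the UA middleware, this only runs once it is registered in DOWNLOADER_MIDDLEWARES with its own priority.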
