Web scraping: crawling dynamic pages with Scrapy and Splash

  1. Install scrapy-splash:
    pip install scrapy-splash
  2. Install Splash:
    sudo docker pull scrapinghub/splash
  3. Run Splash:
    docker run -it -d -p 8050:8050 --name splash scrapinghub/splash
  4. Write the Scrapy project:
    1. Configure settings.py:
    1. 设置settings.py:
SPLASH_URL = 'http://xxx.xxx.xxx.xxx:8050' # splash的url
       DOWNLOADER_MIDDLEWARES = {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
         }
        SPIDER_MIDDLEWARES = {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
         }
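Under the hood, scrapy-splash forwards each request to Splash's HTTP API at `SPLASH_URL`. The render URL it targets can be sketched with the standard library (a minimal sketch: the localhost host/port assume the `docker run` above, and no network call is made here):

```python
from urllib.parse import urlencode

SPLASH_URL = "http://localhost:8050"  # assumed host/port from the docker run above


def splash_render_url(page_url: str, wait: float = 0.5) -> str:
    """Build the URL for Splash's render.html endpoint.

    'wait' is the same knob SplashRequest passes in args={'wait': 0.5}:
    how long Splash lets the page's JavaScript run before returning HTML.
    """
    qs = urlencode({"url": page_url, "wait": wait})
    return f"{SPLASH_URL}/render.html?{qs}"


print(splash_render_url("https://example.com"))
# http://localhost:8050/render.html?url=https%3A%2F%2Fexample.com&wait=0.5
```

Fetching that URL (e.g. with curl or a browser) while the container is running is a quick way to confirm Splash is reachable before wiring it into Scrapy.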
    2. Write the spider:
      Using Toutiao (今日头条) as an example:
from scrapy.selector import Selector

import scrapy
from scrapy_splash import SplashRequest


class MySpider(scrapy.Spider):
    name = 'ddd'

    def start_requests(self):
        url = 'https://www.toutiao.com/'
        # Route the request through Splash; 'wait' gives the page's
        # JavaScript time to execute before the HTML is returned.
        yield SplashRequest(url=url, callback=self.parse,
                            args={'wait': 0.5}, dont_filter=True)

    def parse(self, response):
        xbody = Selector(response=response)
        titles = xbody.xpath("//p[@class='title']/text()").extract()
        for title in titles:
            # Python 3 strings are Unicode; no sys.setdefaultencoding or
            # .encode("gbk") workarounds are needed to avoid mojibake.
            print(title)
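The XPath in `parse` can be exercised offline before running the spider. A minimal sketch using only the standard library on hand-written markup (the snippet below is invented for illustration; Toutiao's real markup differs, and `xml.etree` supports only a limited XPath subset compared to Scrapy's Selector):

```python
import xml.etree.ElementTree as ET

# Invented sample markup mimicking the //p[@class='title']/text() target.
html = """<html><body>
<p class="title">First headline</p>
<p class="other">skip me</p>
<p class="title">Second headline</p>
</body></html>"""

root = ET.fromstring(html)
# Equivalent of //p[@class='title']/text(): every <p> whose class is "title".
titles = [p.text for p in root.iter("p") if p.get("class") == "title"]
print(titles)  # ['First headline', 'Second headline']
```

This only checks the extraction logic; the point of Splash is that on the live site these `<p>` elements exist only after JavaScript renders them, which a plain `scrapy.Request` would never see.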

Reposted from blog.csdn.net/jianmoumou233/article/details/79832644