scrapy-redis incremental crawler

1 Add these four settings to settings.py of your Scrapy project:

# Use the scrapy-redis duplicate filter (request fingerprints are stored in Redis)
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Specify the scheduler class
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Whether the scheduler's contents (queue and dedup set) persist in Redis after the crawl ends
SCHEDULER_PERSIST = True
# Redis connection URL
REDIS_URL = "redis://127.0.0.1:6379"
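With these settings, both the request queue and the deduplication fingerprint set live in Redis, so a stopped crawl can resume without re-fetching pages it has already seen. Below is a minimal sketch of a companion spider; the spider name, redis_key, and parsed fields are illustrative assumptions, not part of the original post.

from scrapy_redis.spiders import RedisSpider

class ExampleSpider(RedisSpider):
    name = "example"                  # hypothetical spider name
    redis_key = "example:start_urls"  # Redis list the spider polls for start URLs

    def parse(self, response):
        # Requests already fingerprinted are dropped by RFPDupeFilter,
        # so only pages not seen in previous runs reach this callback
        yield {
            "url": response.url,
            "title": response.css("title::text").get(),
        }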

2 To save the scraped results in Redis, enable ITEM_PIPELINES:
ITEM_PIPELINES = {
    'example.pipelines.ExamplePipeline': 300,
    'scrapy_redis.pipelines.RedisPipeline': 400,
}
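With RedisPipeline enabled, each yielded item is serialized and pushed onto a Redis list (by default keyed per spider as "<spider name>:items", as far as the scrapy-redis defaults go). A rough sketch of seeding the crawl and reading the results back with redis-py follows; the key names assume the example spider shown above.

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
# Seed the spider's start-URL queue (the key must match the spider's redis_key)
r.lpush("example:start_urls", "https://example.com")
# After running `scrapy crawl example`, items stored by RedisPipeline can be read back
for raw_item in r.lrange("example:items", 0, -1):
    print(raw_item)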


Reposted from blog.csdn.net/zhushixia1989/article/details/84851918