Fixing a stalled scrapy-redis crawl (by clearing the Redis cache)

Environment: Win10, Python 3.6, PyCharm, Scrapy 1.6

Typical log output when the crawl stalls:

2017-05-19 20:08:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:08:53 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-19 20:09:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:10:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:11:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:12:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:13:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
...

Solution: open redis-cli.exe and run the following commands:

flushdb
lpush Noverspider:start_urls http://www.daomubiji.com/     # replace http://www.daomubiji.com/ with your own start URL
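The fix works because a scrapy-redis spider keeps polling its `start_urls` list in Redis; when that list is empty, the spider idles and the log shows "0 pages/min" forever. `flushdb` drops the stale request queue and dupefilter fingerprints, and `lpush` seeds a fresh start URL. The sketch below mimics that Redis list with a plain in-memory stand-in (no server needed); in a real setup you would use the redis-py client instead, and `Noverspider:start_urls` is the key name from this post:

```python
from collections import deque

class FakeRedis:
    """In-memory stand-in for the two Redis commands used in the fix.
    With a real server you would use redis-py:
        r = redis.Redis(); r.flushdb(); r.lpush(key, url)
    """
    def __init__(self):
        self.lists = {}

    def flushdb(self):
        # What `flushdb` does: drop every key in the current database,
        # including the stale request queue and dupefilter fingerprints.
        self.lists.clear()

    def lpush(self, key, value):
        self.lists.setdefault(key, deque()).appendleft(value)

    def lpop(self, key):
        # An empty list here is why the spider idles at "0 pages/min".
        q = self.lists.get(key)
        return q.popleft() if q else None


r = FakeRedis()
r.flushdb()                                                  # clear stale state
r.lpush("Noverspider:start_urls", "http://www.daomubiji.com/")

# The spider's next poll now finds a seed URL and the crawl resumes:
print(r.lpop("Noverspider:start_urls"))  # → http://www.daomubiji.com/
```

Note that `flushdb` wipes the whole Redis database, so any crawl progress stored there is lost; that is acceptable here because the stuck state is exactly what we want to discard.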

For a worked application, see the blog post Scrapy爬取盗墓笔记0.2版 (crawling Daomubiji with Scrapy, v0.2).
Reference: https://www.jianshu.com/p/219ccf8e4efb

Reprinted from blog.csdn.net/qq_40258748/article/details/88750902