Scrapy spider returns no data (Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min))

The Scrapy crawl completes without returning any data at all. The full console output is shown below.
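For context, the original post does not include the spider code, but judging from the `scrapy crawl douban` command and the requests in the log, a minimal sketch of the spider probably looks something like this (the parse body is a placeholder assumption):

import scrapy

class DoubanSpider(scrapy.Spider):
    name = 'douban'                      # matches `scrapy crawl douban` below
    start_urls = ['http://douban.com/']  # matches the GET requests in the log

    def parse(self, response):
        # placeholder: whatever extraction logic the real spider runs
        self.logger.info('got %s', response.url)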

G:\scrapy_tesy>scrapy crawl douban
2019-07-11 10:26:15 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: scrapy_tesy)
2019-07-11 10:26:15 [scrapy.utils.log] INFO: Versions: lxml 4.2.4.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.1, Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 08:06:12) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c  28 May 2019), cryptography 2.7, Platform Windows-10-10.0.17134-SP0
2019-07-11 10:26:15 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'scrapy_tesy', 'NEWSPIDER_MODULE': 'scrapy_tesy.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['scrapy_tesy.spiders']}
2019-07-11 10:26:15 [scrapy.extensions.telnet] INFO: Telnet Password: ff2bfbc35ae333e7
2019-07-11 10:26:15 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-07-11 10:26:15 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-07-11 10:26:15 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-07-11 10:26:15 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-07-11 10:26:15 [scrapy.core.engine] INFO: Spider opened
2019-07-11 10:26:15 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-07-11 10:26:15 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-07-11 10:26:15 [scrapy.core.engine] DEBUG: Crawled (403) <GET http://douban.com/robots.txt> (referer: None)
2019-07-11 10:26:15 [scrapy.core.engine] DEBUG: Crawled (403) <GET http://douban.com/> (referer: None)
2019-07-11 10:26:15 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 http://douban.com/>: HTTP status code is not handled or not allowed
2019-07-11 10:26:15 [scrapy.core.engine] INFO: Closing spider (finished)
2019-07-11 10:26:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 428,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 462,
 'downloader/response_count': 2,
 'downloader/response_status_count/403': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 7, 11, 2, 26, 15, 927988),
 'httperror/response_ignored_count': 1,
 'httperror/response_ignored_status_count/403': 1,
 'log_count/DEBUG': 2,
 'log_count/INFO': 10,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/403': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 7, 11, 2, 26, 15, 613169)}
2019-07-11 10:26:15 [scrapy.core.engine] INFO: Spider closed (finished)

The fix for this problem is to edit the project's settings.py file:

# Obey robots.txt rules
# ROBOTSTXT_OBEY = True
ROBOTSTXT_OBEY = False # comment out the default True above and change it to False to fix the problem
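
If you would rather not change the project-wide default, the same flag can also be overridden for a single spider through Scrapy's custom_settings class attribute (a sketch, reusing the hypothetical spider from above):

import scrapy

class DoubanSpider(scrapy.Spider):
    name = 'douban'
    start_urls = ['http://douban.com/']
    # per-spider override; takes precedence over settings.py
    custom_settings = {'ROBOTSTXT_OBEY': False}

    def parse(self, response):
        self.logger.info('got %s', response.url)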

ROBOTSTXT_OBEY controls whether Scrapy fetches the site's robots.txt file and obeys its rules: if the file disallows crawling, Scrapy will simply not request the pages. Setting it to False makes Scrapy skip the robots.txt check altogether.
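
Note that in the log above the page request itself also came back 403, not only the robots.txt fetch, so the site may additionally be rejecting Scrapy's default User-Agent. If disabling ROBOTSTXT_OBEY alone does not help, setting a browser-like User-Agent in settings.py is worth trying (a sketch; the UA string below is just an example, any recent browser UA should do):

# settings.py
ROBOTSTXT_OBEY = False
# impersonate a regular browser instead of Scrapy's default UA
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'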

Reposted from blog.csdn.net/weixin_38091140/article/details/95455412