[Scraping] Scrapy: Writing a Custom Downloader Middleware

[Original documentation] https://doc.scrapy.org/en/latest/topics/downloader-middleware.html

Writing your own downloader middleware

Each middleware component is a Python class that defines one or more of the following methods:

class scrapy.downloadermiddlewares.DownloaderMiddleware

Note

Any of the downloader middleware methods may also return a Deferred.

process_request(request, spider)

This method is called for each request that goes through the download middleware.

process_request() should either: return None, return a Response object, return a Request object, or raise IgnoreRequest.

If it returns None, Scrapy will continue processing this request, executing all other middlewares until, finally, the appropriate downloader handler is called, the request is performed, and its response downloaded.

If it returns a Response object, Scrapy won't bother calling any other process_request() or process_exception() methods, or the appropriate download function; it will return that response instead. The process_response() methods of installed middleware are always called on every response.


If it returns a Request object, Scrapy will stop calling process_request() methods and reschedule the returned request. Once the newly returned request is performed, the appropriate middleware chain will be called on the downloaded response.

If it raises an IgnoreRequest exception, the process_exception() methods of installed downloader middleware will be called. If none of them handles the exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).

Parameters:
  • request (Request object) – the request being processed
  • spider (Spider object) – the spider for which this request is intended
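As a concrete illustration of the return-None case, here is a minimal sketch of a process_request() middleware that rotates the User-Agent header. The class name and the user-agent list are hypothetical, not part of Scrapy's API; in a real project the class would be enabled through the DOWNLOADER_MIDDLEWARES setting.

```python
import random

# Hypothetical pool of user agents for illustration.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]


class RandomUserAgentMiddleware:
    """Sets a random User-Agent header on every outgoing request."""

    def process_request(self, request, spider):
        # Mutate the request in place, then return None so that Scrapy
        # continues down the middleware chain to the download handler.
        request.headers["User-Agent"] = random.choice(USER_AGENTS)
        return None
```

Because process_request() returns None after mutating the request, the remaining middlewares and the download handler still run as usual.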

process_response() (omitted in the original post)

process_exception(request, exception, spider)

Scrapy calls process_exception() when a download handler or a process_request() (from a downloader middleware) raises an exception (including an IgnoreRequest exception).

process_exception() should return either None, a Response object, or a Request object.

If it returns None, Scrapy will continue processing this exception, executing any other process_exception() methods of installed middleware, until no middleware is left and the default exception handling kicks in.

If it returns a Response object, the process_response() method chain of installed middleware is started, and Scrapy won't bother calling any other process_exception() methods of middleware.

If it returns a Request object, the returned request is rescheduled to be downloaded in the future. This stops the execution of the process_exception() methods of the middleware, the same as returning a response would.

Parameters:
  • request (Request object) – the request that generated the exception
  • exception (Exception object) – the raised exception
  • spider (Spider object) – the spider for which this request is intended
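The Request-returning branch of process_exception() is how retry logic is typically built. Below is a minimal sketch that retries connection errors up to a hypothetical limit tracked in request.meta; the class name and limit are illustrative only (Scrapy's built-in RetryMiddleware is far more thorough).

```python
class RetryOnErrorMiddleware:
    """Retries requests that failed with a ConnectionError."""

    MAX_RETRIES = 2  # hypothetical retry limit

    def process_exception(self, request, exception, spider):
        retries = request.meta.get("retry_times", 0)
        if isinstance(exception, ConnectionError) and retries < self.MAX_RETRIES:
            # Returning a Request reschedules it for download and stops
            # the execution of further process_exception() methods.
            new_request = request.copy()
            new_request.meta["retry_times"] = retries + 1
            return new_request
        # Returning None lets the other middlewares (and finally the
        # default exception handling) deal with the exception.
        return None
```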

from_crawler(cls, crawler)

If present, this class method is called to create a middleware instance from a Crawler. It must return a new instance of the middleware. The Crawler object provides access to all Scrapy core components, such as settings and signals; it is a way for the middleware to access them and hook its functionality into Scrapy.

Parameters:
  • crawler (Crawler object) – crawler that uses this middleware
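A minimal sketch of from_crawler(), assuming a hypothetical MYMW_ENABLED setting: it reads project settings through crawler.settings and returns a new middleware instance, as required.

```python
class SettingsAwareMiddleware:
    """Middleware whose behavior is configured from project settings."""

    def __init__(self, enabled):
        self.enabled = enabled

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings exposes the project settings; MYMW_ENABLED
        # is a made-up setting name used only for illustration.
        return cls(enabled=crawler.settings.getbool("MYMW_ENABLED", True))
```

Reading configuration here rather than in __init__ keeps the middleware decoupled from how it is instantiated; the same hook is also the place to connect crawler.signals if the middleware needs them.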


Reposted from blog.csdn.net/sinat_40431164/article/details/81233935