Scrapy settings: precedence levels, how to read settings, and some settings you may need

http://doc.scrapy.org/en/1.0/topics/settings.html

I. Settings can be populated from 5 sources, in decreasing order of precedence

1. Command-line options

Override a setting with -s, e.g. scrapy crawl myspider -s LOG_FILE=scrapy.log

2. Settings per-spider

Set via the scrapy.spiders.Spider.custom_settings attribute

3. Project settings module

myproject.settings, the settings file inside the project

4. Default settings per-command

5. Default global settings
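The precedence above can be sketched as a priority lookup. The priority names below mirror the ones Scrapy uses internally (scrapy.settings.SETTINGS_PRIORITIES); the resolve() helper is an illustrative sketch, not Scrapy's actual code:

```python
# Priority of each settings source, mirroring Scrapy's internal
# SETTINGS_PRIORITIES mapping (higher number wins).
SETTINGS_PRIORITIES = {
    "default": 0,    # level 5: built-in global defaults
    "command": 10,   # level 4: per-command defaults
    "project": 20,   # level 3: the project's settings.py
    "spider": 30,    # level 2: Spider.custom_settings
    "cmdline": 40,   # level 1: -s on the command line
}

def resolve(layers):
    """Sketch: pick the value coming from the highest-priority source.

    `layers` maps a source name to the value that source assigns
    to a single setting, e.g. {"default": "DEBUG", "cmdline": "ERROR"}.
    """
    best_prio, best_value = -1, None
    for source, value in layers.items():
        if SETTINGS_PRIORITIES[source] > best_prio:
            best_prio, best_value = SETTINGS_PRIORITIES[source], value
    return best_value
```

For example, resolve({"default": "DEBUG", "project": "INFO", "cmdline": "ERROR"}) picks the command-line value, matching the list above.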


II. Reading settings values

You often need to read settings in a pipeline or a middleware; they are available through the scrapy.crawler.Crawler.settings attribute:

class MyExtension(object):

    @classmethod
    def from_crawler(cls, crawler):
        settings = crawler.settings
        if settings['LOG_ENABLED']:
            print("log is enabled!")
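A common variant of this pattern is to capture a setting in from_crawler and pass it into the constructor. Here MyPipeline and the MY_MONGO_URI setting are hypothetical names used only for illustration:

```python
class MyPipeline(object):
    """Hypothetical pipeline that captures a setting at creation time."""

    def __init__(self, mongo_uri):
        self.mongo_uri = mongo_uri

    @classmethod
    def from_crawler(cls, crawler):
        # settings.get() returns a default instead of raising KeyError;
        # MY_MONGO_URI is a made-up setting name for this example.
        return cls(mongo_uri=crawler.settings.get(
            "MY_MONGO_URI", "mongodb://localhost:27017"))
```

This keeps the pipeline testable: anything that exposes a settings object with a get() method can stand in for the real crawler.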

III. Some settings:

1. DNS_TIMEOUT: default 60 s

2. DOWNLOADER_MIDDLEWARES: the downloader middlewares

3. DOWNLOAD_DELAY: delay between consecutive downloads from the same website

4. DOWNLOAD_TIMEOUT: default 180 s

5. DOWNLOAD_MAXSIZE: default 1024 MB

6. LOG_ENABLED: whether logging is enabled, default True

7. LOG_FILE: default None

8. LOG_LEVEL: default DEBUG, i.e. all log messages (DEBUG, INFO, WARNING, ERROR) are printed

9. LOG_STDOUT: default False; whether all standard output is redirected into the log

10. MEMDEBUG_ENABLED: default False

11. RANDOMIZE_DOWNLOAD_DELAY: default True; waits 0.5 to 1.5 * DOWNLOAD_DELAY between requests, which makes bans harder

12. USER_AGENT: default "Scrapy/VERSION (+http://scrapy.org)"
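Several of these settings are typically overridden together in the project's settings.py (level 3 in the precedence list above). The values below are illustrative, not recommendations:

```python
# Example snippet for a project's settings.py; all values are
# illustrative, and "mybot" / example.com are made-up names.
DOWNLOAD_DELAY = 1                # seconds between requests to the same site
RANDOMIZE_DOWNLOAD_DELAY = True   # actually wait 0.5-1.5 * DOWNLOAD_DELAY
DOWNLOAD_TIMEOUT = 60             # shrink the 180 s default
LOG_LEVEL = "INFO"                # drop DEBUG noise
LOG_FILE = "scrapy.log"           # default is None (log to stderr)
USER_AGENT = "mybot/1.0 (+http://example.com)"  # hypothetical UA string
```

Remember that any of these can still be overridden per-spider via custom_settings or on the command line with -s.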


Reprinted from blog.csdn.net/Bridge320/article/details/79262967