Scrapy crawler framework -- log output configuration and use

1. Set the log output file name and log level in the configuration file

1. Why use the date as the file name?

Because this makes it easier for developers to view each day's log information, and it also prevents log information from piling up endlessly in a single file: each day's log information is saved to that day's log file.

2. The configuration options are the log level LOG_LEVEL and the log file path LOG_FILE; I set the level to WARNING here, as shown in the sketch below

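A minimal settings.py sketch of this, assuming a date-based file-name pattern (the exact pattern is illustrative):

```python
# settings.py - a minimal sketch; the file-name pattern is illustrative
import datetime

# Build today's date into the log file name so each day gets its own file
today = datetime.datetime.now()
LOG_FILE = "log_{}_{}_{}.log".format(today.year, today.month, today.day)
LOG_LEVEL = "WARNING"
```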

2. Import the logging module in the program, and log the content that needs to be output

In the configuration I used the WARNING level; as a test I defined a string of one hundred 1s and output it as a WARNING (logging.warning). Messages at or above this level are written to my log file, while messages below this level are not.

The following two messages are both WARNING-level, so both are written to the log file

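A minimal sketch of this case (the spider name, URL, and message texts are placeholders):

```python
# A sketch of the spider; name, URL, and messages are placeholders
import logging

import scrapy


class DemoSpider(scrapy.Spider):
    name = "demo"
    start_urls = ["http://example.com"]

    def parse(self, response):
        # Both calls are at WARNING level, so both messages
        # end up in the file configured via LOG_FILE
        logging.warning("first warning message")
        logging.warning("second warning message")
```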

Next, one message is INFO-level and one is WARNING-level; only the WARNING-level message is written to the log file

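A minimal sketch of this case (the message texts are placeholders):

```python
import logging

# With LOG_LEVEL = "WARNING", only the second call produces
# a line in the log file; the INFO call is filtered out
logging.info("general information - below WARNING, not logged")
logging.warning("warning message - written to the log file")
```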

3. Extension

Scrapy provides five logging levels (demonstrated in the sketch after this list):

CRITICAL - critical errors

ERROR - general errors

WARNING - warning messages

INFO - general information

DEBUG - debug information
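For reference, these map to the corresponding calls in Python's standard logging module, which Scrapy builds on:

```python
import logging

logging.critical("critical error")    # CRITICAL
logging.error("general error")        # ERROR
logging.warning("warning message")    # WARNING
logging.info("general information")   # INFO
logging.debug("debug information")    # DEBUG
```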

The following logging settings can be configured in settings.py (a combined example follows the list):

LOG_ENABLED Default: True, whether logging is enabled

LOG_ENCODING Default: 'utf-8', the encoding used for logging

LOG_FILE Default: None, the file name for the log output file, created in the current directory

LOG_LEVEL Default: 'DEBUG', the minimum level to log
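Putting these together, the logging section of settings.py might look like this (the values shown are illustrative):

```python
# settings.py - the values shown are illustrative
LOG_ENABLED = True       # True is already the default
LOG_ENCODING = 'utf-8'   # 'utf-8' is already the default
LOG_FILE = 'scrapy.log'  # with the default None, log output goes to stderr
LOG_LEVEL = 'WARNING'    # default is 'DEBUG'; here only WARNING and above are kept
```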

My experience is limited, any corrections are welcome
