Python Study Notes 9 -- the logging Module

When writing programs, we often print log messages to help track down problems. This time we will learn about the logging module, which is Python's module for working with logs. The logging module has four main classes, each responsible for a different task:

Logger: the logger, which exposes the interface that application code uses directly. To put it simply, it is the office -- a place set up for the workers to do their jobs.

Handler: the handler, which sends log records (created by the logger) to the appropriate destination. Simply put, this is the worker who actually does things; it determines whether the log goes to the console or to a file. Four handlers are commonly used:

                StreamHandler: console output

                FileHandler: file output

                The following two must be imported first:

                        from logging import handlers

                        TimedRotatingFileHandler: automatically splits log files by time

                        RotatingFileHandler: automatically splits log files by size; once a file reaches the specified size, a new one is started

                Filter: provides finer-grained control over which log records are output (rarely used)

                Formatter: the formatter, which specifies the layout of log records in the final output, i.e. the format of the printed log

import logging
from logging import handlers
#Only print log in console
logging.basicConfig(level=logging.ERROR, #Log level printed by the console
                    format=
                    '%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s'
                    #log format
                    )
logging.debug('debug level, the lowest level, generally used by developers to print some debugging information')
logging.info('info level, normal output information, generally used to print some normal operations')
logging.warning('warning level, generally used to print warning information')
logging.error('error level, generally used to print some error messages')
logging.critical('critical level, generally used to print some fatal error messages')

Log level debug < info < warning < error < critical
After a level is set, that level and everything above it is printed. For example, if the level is warning, then warning, error, and critical messages are printed, while debug and info are not. If the level is debug, the lowest level, then all messages are printed.
The code above only prints the log to the console and does not write it to a file. Usually we also want the log in a file, which is just as simple: add a parameter specifying the file name.

logging.basicConfig(level=logging.ERROR, #Log level printed by the console
                    filename='log.txt',#filename
                    filemode='a',# mode: 'w' or 'a'. 'w' (write) rewrites the file each run,
                    # overwriting the previous log; 'a' (append) is the default if not specified
                    format=
                    '%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s'
                    #log format
                    )
logging.debug('debug level, the lowest level, generally used by developers to print some debugging information')
logging.info('info level, normal output information, generally used to print some normal operations')
logging.warning('warning level, generally used to print warning information')
logging.error('error level, generally used to print some error messages')
The format parameter specifies the output format of the log. It can include a lot of useful information through the following placeholders:
%(levelno)s: the numeric value of the log level
%(levelname)s: the name of the log level
%(pathname)s: the path of the currently executing program (essentially sys.argv[0])
%(filename)s: the file name of the currently executing program
%(funcName)s: the function that issued the log call
%(lineno)d: the line number of the log call
%(asctime)s: the time the log was emitted
%(thread)d: the thread ID
%(threadName)s: the thread name
%(process)d: the process ID
%(message)s: the log message
The format I usually use at work already appeared above; it is:
format='%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s'
This format outputs the time the log was printed, which file and line it came from, the log level, and the log message itself.
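To see what this format produces without running a whole program, one can apply it to a hand-built record directly. The name `demo`, the path `demo.py`, and line 10 below are made-up values for illustration only.

```python
import logging

# Sketch: apply the format string above to a hand-made LogRecord to
# see what one output line looks like.
fmt = logging.Formatter(
    '%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s')
record = logging.LogRecord(name='demo', level=logging.ERROR,
                           pathname='demo.py', lineno=10,
                           msg='something failed', args=None, exc_info=None)
print(fmt.format(record))
```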

After adding the file name, you will find that the console no longer outputs the log; only the log file is written. So how do we output the log to the console and write it to a file at the same time?

Going back to the office analogy: the office needs two workers, one responsible for outputting logs to the console and one responsible for writing them to a file. Put them both in the office and they can get to work.

import logging
from logging import handlers
logger = logging.getLogger('my_log')
#Create a logger object first, which is equivalent to this office, which is the Logger mentioned above
logger.setLevel(logging.INFO)#Set the total level of the log
fh = logging.FileHandler('test.log',mode='a',encoding='utf-8')#Create a file handler, that is, write the log to the file
fh.setLevel(logging.INFO)#Set the level of file output
sh = logging.StreamHandler()#Create a console output handler, these two are the Handlers mentioned above
sh.setLevel(logging.INFO)
#Set the console output level. Each handler's level can be set separately; the difference from the logger's level is that the logger filters records first, so if the logger's level is stricter than a handler's, the logger's level prevails.
th = handlers.TimedRotatingFileHandler('time',when='S',interval=1,backupCount=2)
#A handler that automatically starts a new log file at a fixed time interval
#interval is the time interval; backupCount is the number of backup files to keep (older files beyond this are deleted automatically); when is the unit of the interval:
            # S seconds
            # M minutes
            # H hours
            # D days
            # W week day (interval==0 means Monday)
            # midnight every day at midnight
th.setLevel(logging.INFO)
formater = logging.Formatter('%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s')
#Specify the log format, we wrote the common format above, just specify it directly, which is the Formatter we mentioned above
sh.setFormatter(formater)
fh.setFormatter(formater)
th.setFormatter(formater)
#Set the log format of the three handlers
 
logger.addHandler(sh)
logger.addHandler(fh)
logger.addHandler(th)
#Add the three handlers to the logger -- like assigning the trained staff to the office so they can start working
logger.debug('debug level, the lowest level, generally used by developers to print some debugging information')
logger.info('info level, normal output information, generally used to print some normal operations')
logger.warning('warning level, generally used to print warning information')
logger.error('error level, generally used to print some error messages')
logger.critical('critical level, generally used to print some fatal error messages')

With that, the logger's "office" is fully staffed and ready to use. After running it, you will find the log files are generated and the console shows the logs as well. Note that if no level is set, the default level is warning.
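The interaction between the logger's level and a handler's level mentioned above can be sketched in a few lines: a record has to pass the logger's own gate first, so a strict logger level wins even when a handler is permissive. The logger name `level_demo` is made up for the demo, and the output is captured in a StringIO for inspection.

```python
import io
import logging

# The logger's level is checked first, so even a permissive handler
# (DEBUG) never sees records that the logger (WARNING) drops.
logger = logging.getLogger('level_demo')
logger.setLevel(logging.WARNING)          # logger gate: WARNING and up
buf = io.StringIO()
sh = logging.StreamHandler(buf)
sh.setLevel(logging.DEBUG)                # handler would accept anything
logger.addHandler(sh)
logger.info('dropped at the logger')      # never reaches the handler
logger.error('passes both gates')
print(buf.getvalue())
```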
Next, let's wrap the logging module in a class of our own, so it is convenient to use and comes with some sensible defaults.

import logging
from logging import handlers
class Logger(object):
    level_relations = {
        'debug':logging.DEBUG,
        'info':logging.INFO,
        'warning':logging.WARN,
        'error':logging.ERROR,
        'crit':logging.CRITICAL
    } #Log level relationship mapping
    def __init__(self,fp,level='debug',when='midnight',interval=1,backCount=5,encoding='utf-8'):
        '''
 
        :param fp: log file path
        :param level: The default log level is debug
        :param when: the unit used to split the log: S seconds, M minutes, H hours, D days, W week day (interval==0 means Monday), midnight every day at midnight
        :param interval: the rollover interval; defaults to 1 (with when='midnight', once a day)
        :param backCount: the number of backup files to keep, 5 by default
        :param encoding: the log file encoding
        '''
        self.level = self.level_relations.get(level)
        self.logger = logging.getLogger(fp)
        self.logger.setLevel(self.level)
        fmt = logging.Formatter('%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s')
        sh = logging.StreamHandler()
        sh.setFormatter(fmt)
        sh.setLevel(self.level)
        th = handlers.TimedRotatingFileHandler(fp,when=when,interval=interval,backupCount=backCount,encoding=encoding)
        th.setFormatter(fmt)
        th.setLevel(self.level)
        self.logger.addHandler(th)
        self.logger.addHandler(sh)
    def debug(self,msg):
        self.logger.debug(msg)
    def info(self,msg):
        self.logger.info(msg)
    def warning(self,msg):
        self.logger.warning(msg)
    def error(self,msg):
        self.logger.error(msg)
    def crit(self,msg):
        self.logger.critical(msg)
if __name__ == '__main__':
    l = Logger('a.log')#Instantiation
    l.info('hehehe')#call
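The class above relies on TimedRotatingFileHandler for its rotation. Here is a standalone sketch of the rollover behavior, using when='S' so it happens within a second; the logger name and the temporary directory are created just for this demo.

```python
import logging
import os
import tempfile
import time
from logging import handlers

# With when='S' and interval=1, the next record written after one
# second triggers a rollover: the old file gets a timestamped suffix
# and a fresh file is started. backupCount=2 keeps at most two backups.
logdir = tempfile.mkdtemp()
path = os.path.join(logdir, 'rotate.log')
logger = logging.getLogger('rotate_demo')
logger.setLevel(logging.INFO)
th = handlers.TimedRotatingFileHandler(path, when='S', interval=1, backupCount=2)
logger.addHandler(th)
logger.info('first message')
time.sleep(1.1)
logger.info('second message')  # written after the rollover
th.close()
print(sorted(os.listdir(logdir)))  # rotate.log plus one timestamped backup
```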
