logback and log4j

 

Logback and log4j were written by the same person (Ceki Gülcü).

Spring Boot's default logging framework is logback.

logback consists of three main modules:

logback-core: the foundation; the other modules are built on top of it, and it provides the key general-purpose mechanisms.

logback-classic: can be regarded as an improved implementation of log4j; it natively implements the simple logging facade SLF4J.

logback-access: the module that integrates with servlet containers (HTTP access logging).
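For reference, a typical Maven dependency sketch for pulling in logback (the version number is illustrative); logback-classic transitively brings in logback-core and the SLF4J API:

```xml
<!-- logback-classic; the version shown is illustrative -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.11</version>
</dependency>
```

In a Spring Boot project this dependency usually arrives transitively via spring-boot-starter-logging, so no explicit declaration is needed.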

Structure of the configuration file logback.xml:

                               

Detailed configuration file contents:

<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <property name="glmapper-name" value="glmapper-demo" />
    <contextName>${glmapper-name}</contextName>
    <appender> <!-- ... --> </appender>
    <logger> <!-- ... --> </logger>
    <root> <!-- ... --> </root>
</configuration>
1. The root element <configuration> has the following three attributes:
scan: when set to true, the configuration file is reloaded if it changes; the default is true.
scanPeriod: the interval at which the configuration file is checked for modifications; if no time unit is given, the unit defaults to milliseconds. This attribute only takes effect when scan is true; the default interval is one minute.
debug: when set to true, logback prints its internal status information so you can inspect logback's runtime behavior; the default is false.
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <!-- other configuration omitted -->
</configuration>
2. The child node <contextName>: sets the context name. Every logger is attached to a logger context, whose default name is "default". <contextName> can be used to set a different name so that the records of different applications can be distinguished. Once set, the context name cannot be changed.
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <contextName>myAppName</contextName>
    <!-- other configuration omitted -->
</configuration>
3. The child node <property>: defines the value of a variable. It has two attributes, name and value; the value defined by <property> is inserted into the logger context and can then be referenced elsewhere with "${}".
name: the name of the variable; value: the value of the variable.
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <property name="APP_Name" value="myAppName" />
    <contextName>${APP_Name}</contextName>
    <!-- other configuration omitted -->
</configuration>
4. The child node <timestamp>: gets a timestamp string. It has two attributes, key and datePattern.
key: the name identifying this <timestamp>
datePattern: the pattern used to convert the current time into a string, following the format of java.text.SimpleDateFormat
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <timestamp key="bySecond" datePattern="yyyyMMdd'T'HHmmss"/>
    <contextName>${bySecond}</contextName>
    <!-- other configuration omitted -->
</configuration>
5. The child node <appender>: the component responsible for writing log events. It has two mandatory attributes: name and class.
name: specifies the name of the appender; class: specifies the fully qualified class name of the appender.
  5.1 ConsoleAppender: writes logs to the console. <encoder>: formats the log output; <target>: the string System.out (the default) or System.err.
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="DEBUG">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
5.2 FileAppender: appends log events to a file.
        file: the name of the file to write to; it can be a relative or an absolute path. If the parent directory does not exist it is created automatically. There is no default value.
        append: if true, log events are appended to the end of the file; if false, the existing file is truncated. The default is true.
        encoder: formats the recorded events.
        prudent: if true, events are written to the file safely (so that several FileAppenders can write to the same file). The default is false.
<configuration>
      <appender name="FILE" class="ch.qos.logback.core.FileAppender"> 
        <file>testFile.log</file> 
        <append>true</append> 
        <encoder> 
          <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern> 
        </encoder> 
      </appender> 

      <root level="DEBUG"> 
        <appender-ref ref="FILE" /> 
      </root> 
    </configuration>
5.3 RollingFileAppender: rolls log files over. It first logs to the specified file and, when a given condition is met, switches to logging into another file.
            file: the name of the file to write to; it can be a relative or an absolute path. If the parent directory does not exist it is created automatically.
            append: if true, log events are appended to the end of the file; if false, the existing file is truncated.
            rollingPolicy: determines the behavior of the RollingFileAppender when rollover occurs, which involves moving and renaming files.
The class attribute selects the concrete rolling policy class:
      class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy": the most commonly used rolling policy. It rolls over based on time and is responsible both for performing the rollover and for triggering it. It has the following child nodes:
        <fileNamePattern>: mandatory. It contains the file name plus a "%d" conversion specifier, which may embed a java.text.SimpleDateFormat time pattern, e.g. %d{yyyy-MM}.
If %d is used on its own, the default format is yyyy-MM-dd. The <file> node of RollingFileAppender is optional here: by setting <file> you can use different locations for the active file and the archived files; the current log is then always written to the file specified by <file> (the active file), whose name never changes.
If <file> is not set, the name of the active file changes periodically according to the value of fileNamePattern. "/" or "\" is treated as a directory separator.
        <maxHistory>:
optional. It controls the maximum number of archive files to keep; older files beyond that number are deleted. Assuming rollover every month and a <maxHistory> of 6, only the last 6 months of files are kept and older ones are deleted. Note that when old files are deleted, directories that were created for archiving are deleted as well.

      class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy": watches the size of the currently active file; when it exceeds the specified size, the RollingFileAppender is told to trigger a rollover of the currently active file. It has only one child node:
        <maxFileSize>: the maximum size of the active file, e.g. 5MB.
        <prudent>: when true, FixedWindowRollingPolicy is not supported. TimeBasedRollingPolicy is supported, but with two restrictions: 1. file compression is not allowed; 2. the <file> attribute must not be set (it must be left blank).

      <triggeringPolicy>: tells the RollingFileAppender when to activate the rollover.
      class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy": a rolling policy that renames files according to a fixed-window algorithm. It has the following child nodes:
        <minIndex>: the minimum window index.
        <maxIndex>: the maximum window index; when the window specified by the user is too large, the window size is automatically set to 12.
        <fileNamePattern>: must contain "%i". For example, assuming the minimum and maximum values are 1 and 2 and the pattern is mylog%i.log, the archives mylog1.log and mylog2.log will be produced. You can also specify file compression, e.g. mylog%i.log.gz or mylog%i.log.zip.
      Example:
        <configuration>
          <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
              <fileNamePattern>logFile.%d{yyyy-MM-dd}.log</fileNamePattern>
              <maxHistory>30</maxHistory>
            </rollingPolicy>
            <encoder>
              <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
            </encoder>
          </appender>

          <root level="DEBUG">
            <appender-ref ref="FILE" />
          </root>
        </configuration>
        The configuration above generates one log file per day and keeps 30 days of log files.
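For size-based rolling, FixedWindowRollingPolicy is typically paired with SizeBasedTriggeringPolicy. A minimal sketch combining the two nodes described above (the file names app.log / app.%i.log and the 10MB limit are illustrative):

```xml
<configuration>
  <appender name="SIZE_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>app.log</file>
    <!-- on rollover, rename app.log into the window app.1.log .. app.5.log -->
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
      <fileNamePattern>app.%i.log</fileNamePattern>
      <minIndex>1</minIndex>
      <maxIndex>5</maxIndex>
    </rollingPolicy>
    <!-- trigger a rollover once the active file exceeds 10MB -->
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
      <maxFileSize>10MB</maxFileSize>
    </triggeringPolicy>
    <encoder>
      <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="DEBUG">
    <appender-ref ref="SIZE_FILE" />
  </root>
</configuration>
```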
6. The child node <logger>: sets the logging level for a specific package or class, and optionally assigns <appender>s. It may contain zero or more <appender-ref> elements; each referenced appender is added to this logger.
     name: mandatory; specifies the package or class that this logger constrains.
     level: sets the logging level; case-insensitive values: TRACE, DEBUG, INFO, WARN, ERROR, ALL and OFF
7. The child node <root>: also a logger element, but it is the root logger, the ancestor of all other loggers. It has only the level attribute, because its name is fixed to "root" and it is already at the top of the hierarchy.
level: sets the logging level; case-insensitive values: TRACE, DEBUG, INFO, WARN, ERROR, ALL and OFF
Commonly used logger configurations:

<!-- show SQL parameters; tailored for Hibernate -->
<logger name="org.hibernate.type.descriptor.sql.BasicBinder" level="TRACE" />
<logger name="org.hibernate.type.descriptor.sql.BasicExtractor" level="DEBUG" />
<logger name="org.hibernate.SQL" level="DEBUG" />
<logger name="org.hibernate.engine.QueryParameters" level="DEBUG" />
<logger name="org.hibernate.engine.query.HQLQueryPlan" level="DEBUG" />

<!-- mybatis log configuration -->
<logger name="org.apache.ibatis" level="TRACE"/>
<logger name="java.sql.Connection" level="DEBUG"/>
<logger name="java.sql.Statement" level="DEBUG"/>
<logger name="java.sql.PreparedStatement" level="DEBUG"/>

Reasons why logback replaced log4j:

1. Faster implementation: logback's kernel was rewritten; performance improved and the memory footprint at initialization became smaller.

2. Very thorough testing: logback is tested exhaustively, at completely different levels.

3. logback-classic implements SLF4J very naturally.

4. Very complete documentation.

5. logback-classic can automatically reload its configuration file.

6. Lilith is an event log viewer for logback that can handle large amounts of log data.

7. Prudent mode and very graceful recovery from I/O failures.

8. Configuration files can handle different environments (conditional processing).

9. Filters: when diagnosing a problem, logs can be printed selectively without lowering the level globally.

10. SiftingAppender: a very versatile appender that can split log files according to any given runtime attribute.

11. Automatic compression of rolled-over log files.

12. Stack traces include the version of the packages involved.

13. Automatic removal of old log files.
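As an illustration of item 10, a SiftingAppender sketch that splits logs per user via an MDC key (the key userid and the file naming are illustrative; logback's default discriminator is MDC-based):

```xml
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <!-- discriminate on the MDC key "userid"; fall back to "unknown" when absent -->
  <discriminator>
    <key>userid</key>
    <defaultValue>unknown</defaultValue>
  </discriminator>
  <sift>
    <!-- one FileAppender instance is created per distinct userid value -->
    <appender name="FILE-${userid}" class="ch.qos.logback.core.FileAppender">
      <file>${userid}.log</file>
      <encoder>
        <pattern>%d [%thread] %level %logger{35} - %msg%n</pattern>
      </encoder>
    </appender>
  </sift>
</appender>
```

The application then sets the key before logging, e.g. MDC.put("userid", "alice"), and each user's events land in their own file.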

 

 

For more details, see: https://blog.csdn.net/zbajie001/article/details/79596109

 

Origin www.cnblogs.com/cye9971-/p/11391689.html