Splitting Log4j2 log output into rolling files

Introduction

Purpose: implement log file splitting (rolling) for a Spring Boot application that uses Log4j2.

Problem: most projects are started with a command like:

nohup java -Xms512m -Xmx512m -Dspring.config.location=./config/bootstrap.yml -Dspring.profiles.active=test -jar xxx.jar --SERVER_NAME=$NAME >> log.log &

The drawback is that the log can only be viewed in real time with tail -f, and once the file grows past 100 MB it becomes very slow or impossible to open. At that point the log needs to be split and stored by size or by time period.

Once Log4j2 writes the log files itself, the original nohup output can be discarded:

nohup java -jar xxx.jar >/dev/null 2>&1 & 

Command description:

2>&1   redirects standard error to wherever standard output goes.

>/dev/null   sends standard output to /dev/null, i.e. the output is discarded.

The first step is to add the dependencies

<properties>
    ......
    <java.version>1.8</java.version>
    <!-- the log4j2 version must be 2.15.0 or later -->
    <log4j2.version>2.15.0</log4j2.version>
</properties>

Note: the logging pulled in by spring-boot-starter-web conflicts with spring-boot-starter-log4j2, so the default logging starter must be excluded from spring-boot-starter-web.

  <!-- Web dependency -->
  <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
      <!-- Exclude the default logging so that log4j2 is used instead -->
      <exclusions>
          <exclusion>
              <groupId>org.springframework.boot</groupId>
              <artifactId>spring-boot-starter-logging</artifactId>
          </exclusion>
      </exclusions>
  </dependency>

  <!-- Log4j2 starter dependency -->
  <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-log4j2</artifactId>
  </dependency>

The second step is to write the XML configuration. With the Log4j2 starter on the classpath, Spring Boot picks up a log4j2.xml (or log4j2-spring.xml) file placed under src/main/resources.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Log levels in order of priority: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<!-- The status attribute on Configuration controls Log4j2's own internal logging. It is optional; set it to trace to see detailed output from Log4j2 itself. -->
<!-- monitorInterval: Log4j2 can detect changes to this file and reconfigure itself automatically; the value is the check interval in seconds. -->
<configuration status="WARN" monitorInterval="30">
<Properties>
    <!-- Log file output directory -->
    <!--  <Property name="LOG_HOME">${sys:catalina.home}/logs/bootdemo/</Property>-->
    <Property name="LOG_HOME">D:\log</Property>
</Properties>

<!-- First define all the appenders -->
<appenders>
    <!-- Console appender -->
    <console name="Console" target="SYSTEM_OUT">
        <!-- Only output messages at info level and above -->
        <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
        <!-- Output format, e.g.: 10:41:25.084 [time] INFO [level] com.unicloud.sc.thirdapp.RegisterH5Application [class] 29 [line] main [method] - [message] -->
        <PatternLayout pattern="[%d{HH:mm:ss.SSS} %-5level] [%class{36} %L %M] - %msg%xEx%n"/>
    </console>

    <!-- This file receives all messages. It is truncated on every program run (controlled by append="false"), which makes it handy for ad-hoc testing. -->
    <File name="log" fileName="${LOG_HOME}/test.log" append="false">
        <PatternLayout pattern="[%d{HH:mm:ss.SSS} %-5level] [%class{36} %L %M] - %msg%xEx%n"/>
    </File>

    <!-- Logs all messages at info level and above. Whenever the file exceeds the configured size, the current contents are rolled into a year-month folder and archived. -->
    <RollingFile name="RollingFileInfo" fileName="${LOG_HOME}/info.log" filePattern="${LOG_HOME}/${date:yyyy-MM}/info-%d{yyyy-MM-dd}-%i.log">
        <!-- Accept only messages at the given level and above (onMatch); reject everything else (onMismatch) -->
        <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
        <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level %class{36} %L %M - %msg%xEx%n"/>
        <Policies>
            <TimeBasedTriggeringPolicy/>
            <!-- SizeBasedTriggeringPolicy: the default size is 10 MB -->
            <SizeBasedTriggeringPolicy size="10 MB"/>
        </Policies>
    </RollingFile>

    <!-- Logs all messages at warn level and above -->
    <RollingFile name="RollingFileWarn" fileName="${LOG_HOME}/warn.log" filePattern="${LOG_HOME}/${date:yyyy-MM}/warn-%d{yyyy-MM-dd}-%i.log">
        <ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
        <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level %class{36} %L %M - %msg%xEx%n"/>
        <Policies>
            <!-- interval (integer): how often a rollover occurs, measured in the most specific time unit of the date pattern in filePattern. For example, if the most specific unit is hours and interval is 4, a rollover happens every 4 hours. The default is 1. -->
            <!-- modulate (boolean): whether to align rollovers with interval boundaries. For example, with hours as the unit, the current time 3 am and an interval of 4, the first rollover happens at 4 am, then 8 am, noon, 4 pm, and so on. -->
            <TimeBasedTriggeringPolicy interval="1" modulate="true"/>
            <SizeBasedTriggeringPolicy/>
        </Policies>
        <!-- If DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder by default; here the maximum is raised to 20 -->
        <DefaultRolloverStrategy max="20"/>
    </RollingFile>

    <!-- Logs all messages at error level and above -->
    <RollingFile name="RollingFileError" fileName="${LOG_HOME}/error.log" filePattern="${LOG_HOME}/${date:yyyy-MM}/error-%d{yyyy-MM-dd}-%i.log">
        <ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/>
        <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level %class{36} %L %M - %msg%xEx%n"/>
        <Policies>
            <TimeBasedTriggeringPolicy/>
            <!-- SizeBasedTriggeringPolicy: the default size is 10 MB -->
            <SizeBasedTriggeringPolicy size="10 MB"/>
        </Policies>
    </RollingFile>
</appenders>

<!-- Then define the loggers; an appender only takes effect when a logger references it -->
<loggers>
    <!-- Filter out noisy DEBUG output from Spring and MyBatis; how these logger names match is illustrated after the configuration -->
    <logger name="org.springframework" level="INFO"></logger>
    <logger name="org.mybatis" level="INFO"></logger>
    <root level="all">
        <appender-ref ref="Console"/>
        <appender-ref ref="log"/>
        <appender-ref ref="RollingFileInfo"/>
        <appender-ref ref="RollingFileWarn"/>
        <appender-ref ref="RollingFileError"/>
    </root>
</loggers>

</configuration>
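
The loggers section works by logger-name prefix: any logger whose name starts with org.springframework or org.mybatis is capped at INFO, while everything else inherits the root level ("all"). A minimal sketch to illustrate this (the class and logger names below are hypothetical, not from the original post):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggerNamingDemo {

    // Named after this class, so it matches no specific <logger> and falls
    // through to <root level="all">: its DEBUG output still reaches test.log.
    private static final Logger appLog = LoggerFactory.getLogger(LoggerNamingDemo.class);

    // The name starts with "org.springframework", so it inherits the INFO level
    // configured above and its DEBUG output is dropped.
    private static final Logger springLog = LoggerFactory.getLogger("org.springframework.web.DemoProbe");

    public static void main(String[] args) {
        appLog.debug("captured by the root logger (level=all)");
        springLog.debug("suppressed: org.springframework is limited to INFO");
        springLog.info("passes the INFO threshold and reaches the appenders");
    }
}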

The third step is to write code to test it

Write a simple endpoint and request it several times; a minimal sketch follows the notes below.

Explanation:

log.info() writes to info.log

log.error() writes to error.log

log.warn() writes to warn.log

All other detailed output goes to test.log, as configured above.
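
A minimal sketch of such an endpoint (the class name, request path and messages are illustrative, not taken from the original post):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogTestController {

    // SLF4J logger; with spring-boot-starter-log4j2 on the classpath it is backed by Log4j2
    private static final Logger log = LoggerFactory.getLogger(LogTestController.class);

    // Request this endpoint repeatedly to generate enough output to trigger a rollover
    @GetMapping("/logTest")
    public String logTest() {
        log.debug("debug message - only reaches test.log (the console and rolling appenders filter it out)");
        log.info("info message - goes to info.log");
        log.warn("warn message - goes to warn.log (and info.log)");
        log.error("error message - goes to error.log (and warn.log, info.log)");
        return "ok";
    }
}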

Test Results

First, change the SizeBasedTriggeringPolicy size to 10 KB: <SizeBasedTriggeringPolicy size="10 KB"/>

Testing with the info-level log limited to 10 KB: once info.log exceeds 10 KB, it is rolled into the monthly folder, and the archived files are named in the format info-2022-01-29-1.log.


Origin: blog.csdn.net/JohnGene/article/details/122743280