03 ELK Log System - Upgrade: Integrating the ELK Log Cluster with a Spring Boot Project

  Preface: The ELK log system itself is now fully set up. The next step in the process is:

    Configure Logback in the Spring Boot project so that it transmits all of the project's log data over TCP ----> Logstash collects the log data and forwards it ----> the Elasticsearch cluster stores it ----> finally, Kibana displays it.

1. Prepare a Spring Boot project and configure Logback

    1.1. Creating the Spring Boot demo project is not covered here. In pom.xml, configure the Logback dependency and the Logstash encoder dependency as follows:

    <!-- Add Logback logging -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </dependency>

        <!-- Logstash integration with Logback: use Logback to ship log data to the Logstash server -->
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>4.11</version>
        </dependency>
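
    With these two dependencies in place, application code logs through the plain SLF4J API, and Logback (pulled in by spring-boot-starter-logging) routes every record to the appenders defined in logback-spring.xml below, including the Logstash one. A minimal sketch of what the calling code looks like (the class and package names here are hypothetical):

        package com.jzproject.demo;

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class DemoService {
            // SLF4J facade; Logback is the backing implementation chosen by Spring Boot
            private static final Logger log = LoggerFactory.getLogger(DemoService.class);

            public void doWork() {
                log.debug("debug detail - kept only if the effective level allows it");
                log.info("normal progress message");
                log.warn("something looks suspicious");
                log.error("operation failed", new IllegalStateException("example"));
            }
        }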

  1.2. Next, set up the Spring-integrated Logback configuration file logback-spring.xml as follows (Logback ships the logs to Logstash):

<?xml version="1.0" encoding="UTF-8"?>

<!-- Levels from high to low: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL -->
<!-- Log output follows the root level: by default, everything at or above the root level is output -->
<!-- Each filter configured below keeps its output file to a single level; without it, a higher-level file would still contain lower-level entries. Each filter records only logs at exactly its own level -->


<!-- Attribute notes:
     scan: when true, the configuration file is reloaded if it changes; default is true.
     scanPeriod: how often the configuration file is checked for modifications; if no time unit is given, milliseconds are assumed. Only takes effect when scan is true; the default interval is 1 minute.
     debug: when true, Logback prints its internal status messages so you can watch Logback itself at work; default is false. -->
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <!-- Custom log file output location -->
    <property name="LOG_DIR" value="/logs/jz-project"/>
    <!-- Keep history logs for at most 30 days -->
    <property name="maxHistory" value="30"/>




    <!-- ConsoleAppender: log output to the console -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- Log format -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>


    <!-- ERROR-level log appender -->
    <!-- RollingFileAppender: rolling log file; logs are first written to the specified file, and when a condition is met, logging rolls over to another file -->
    <appender name="ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Filter: record only ERROR-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <!-- The most common rolling policy: rolls over based on time, and is responsible both for performing the rollover and for triggering it -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Log output location; may be a relative or an absolute path -->
            <fileNamePattern>${LOG_DIR}/%d{yyyy-MM-dd}/error-log.log</fileNamePattern>
            <!-- Optional: controls the maximum number of archives to keep, deleting older files beyond it. For example, with monthly rollover and <maxHistory>6</maxHistory>,
            only the last 6 months of files are kept and older ones are deleted. Note that directories created for archiving are deleted along with the old files -->
            <maxHistory>${maxHistory}</maxHistory>
        </rollingPolicy>

        <!-- Alternative: generate log files by fixed window; when a file exceeds 20MB, a new log file is generated. With a window of 1-3, after three archives the oldest log is overwritten.
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
          <fileNamePattern>${LOG_DIR}/%d{yyyy-MM-dd}/.log.%i</fileNamePattern>
          <minIndex>1</minIndex>
          <maxIndex>3</maxIndex>
        </rollingPolicy> -->
        <!-- Checks the size of the currently active file; when it exceeds the given size, tells the RollingFileAppender to trigger a rollover of the current file
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <maxFileSize>5MB</maxFileSize>
        </triggeringPolicy> -->

        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>



    <!-- WARN-level log appender -->
    <appender name="WARN" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Filter: record only WARN-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>WARN</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Roll over daily -->
            <fileNamePattern>${LOG_DIR}/%d{yyyy-MM-dd}/warn-log.log</fileNamePattern>
            <!-- Keep at most ${maxHistory} days of history -->
            <maxHistory>${maxHistory}</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>




    <!-- INFO-level log appender -->
    <appender name="INFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Filter: record only INFO-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Roll over daily -->
            <fileNamePattern>${LOG_DIR}/%d{yyyy-MM-dd}/info-log.log</fileNamePattern>
            <!-- Keep at most ${maxHistory} days of history -->
            <maxHistory>${maxHistory}</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>




    <!-- DEBUG-level log appender -->
    <appender name="DEBUG" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Filter: record only DEBUG-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>DEBUG</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Roll over daily -->
            <fileNamePattern>${LOG_DIR}/%d{yyyy-MM-dd}/debug-log.log</fileNamePattern>
            <!-- Keep at most ${maxHistory} days of history -->
            <maxHistory>${maxHistory}</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>




    <!-- TRACE-level log appender -->
    <appender name="TRACE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Filter: record only TRACE-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>TRACE</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Roll over daily -->
            <fileNamePattern>${LOG_DIR}/%d{yyyy-MM-dd}/trace-log.log</fileNamePattern>
            <!-- Keep at most ${maxHistory} days of history -->
            <maxHistory>${maxHistory}</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>
<!--
    <logger name="java.sql.PreparedStatement" value="DEBUG" />
    <logger name="java.sql.Connection" value="DEBUG" />
    <logger name="java.sql.Statement" value="DEBUG" />
    <logger name="com.ibatis" value="DEBUG" />
    <logger name="com.ibatis.common.jdbc.SimpleDataSource" value="DEBUG" />
    <logger name="com.ibatis.common.jdbc.ScriptRunner" level="DEBUG"/>
    <logger name="com.ibatis.sqlmap.engine.impl.SqlMapClientDelegate" value="DEBUG" />
    -->


    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender " > 
        <Where do you want> 192.168 . 26.233 : 9601 </ Where do you want> <- Specifies logstash ip:! tcpAppender listening port may be implemented as kafka own transmission, -> 
        <Encoder charset = " UTF -8 "  class = " net.logstash.logback.encoder.LogstashEncoder " > 
            <-! " appname " : " yang_test " effect is created with the specified name index, and document generation in this field will be more -> 
            <CustomFields> { " APPNAME " :"zj_test" } </ CustomFields> 
        </ Encoder> 

    </ the appender> 




    <!-- So that SQL statements are printed -->
    <logger name="com.jzproject.mapper" level="DEBUG"/>


    <!-- Root level was DEBUG (I changed it to INFO) -->
    <root level="INFO">
        <!-- Console output -->
        <appender-ref ref="STDOUT"/>
        <!-- File output -->
        <appender-ref ref="ERROR"/>
        <appender-ref ref="INFO"/>
        <appender-ref ref="WARN" />
        <appender-ref ref="DEBUG" />
        <appender-ref ref="TRACE" />
        <appender-ref ref="LOGSTASH" />
    </root>
</configuration>
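
  On the receiving side, Logstash needs a pipeline with a TCP input listening on the port the LOGSTASH appender points at (9601 above). That pipeline is not shown in this post; the following is a minimal sketch, assuming Logstash forwards to Elasticsearch on its default port 9200 at an assumed address, and that the appname custom field names the index:

    input {
      tcp {
        port  => 9601
        codec => json_lines   # LogstashTcpSocketAppender sends newline-delimited JSON events
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.26.233:9200"]       # assumed Elasticsearch address; adjust to your cluster
        index => "%{appname}-%{+YYYY.MM.dd}"   # daily index named after the appname custom field
      }
    }

  The json_lines codec matters here: the LogstashEncoder on the Java side writes one JSON document per line, so a plain line codec would deliver the events as raw, unparsed strings.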

  Make sure the ELK services are started on all the servers.
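
  Once the application starts and emits a few log lines, you can verify that they arrived, for example with curl 'http://192.168.26.233:9200/_cat/indices?v' (assuming Elasticsearch is reachable at that address on its default port 9200): an index carrying the appname value should appear, after which you can create the matching index pattern in Kibana.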

Awesome, let's go. . .

 

Origin: www.cnblogs.com/spll/p/10950908.html