Building a log collection system with Flume + Kafka + HDFS

    Flume is an excellent log collection component, similar to Logstash. We usually deploy Flume as an agent on each application server to collect local log files and transfer them to data platforms such as HDFS and Kafka. Flume's principles and features will be explained in detail in a later article; this one only briefly describes how to build a log collection system with Flume + Kafka + HDFS.

    1) Flume: deployed as an agent on each application server, with a configured list of log files to collect. The log files are usually generated by the application through logback, etc. (This article is based on Flume 1.7.0.)

    2) Kafka: Flume sends "quasi-real-time" data to Kafka, for example data "tailed" from a file as it is written. Real-time analysis components and similar data consumers can then obtain the real-time data from Kafka. (Kafka 0.9.0)

    3) HDFS: Flume stores "historical data" in HDFS. "Historical data" means files such as the logs produced by daily rotation, like the familiar catalina.out, for which rotation generates a new file every day. Of course, "quasi-real-time" data can also be stored in HDFS; Flume supports generating one HDFS file per hour for "tailed" data. Typically, though, we keep "historical data" in HDFS rather than "real-time data". (Hadoop 2.6.5)

    4) For historical data, we transfer data to HDFS with Flume's Spooling approach; for "quasi-real-time" data, we transfer data to Kafka with Flume's Tail approach.

 

1. HDFS preparation

    First of all, we need a Hadoop platform to store the historical data; what we collect is usually "log data". The process of building a Hadoop platform will not be covered here.

    We plan a 5-node Hadoop cluster: 2 namenodes deployed in HA mode and 3 datanodes. The namenodes are 4-core, 8G RAM, 200G disk; the datanodes are 8-core, 16G RAM, 2T disk. The block size is 128M (a log file is generally about 2G, roughly 100M per hour), and the replication factor is 2.
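
    A quick way to confirm that the cluster actually uses these settings (a sketch; it assumes the hadoop client on the machine you run it from is configured against this cluster):

# effective block size (bytes) and replication factor as seen by the client configuration
hdfs getconf -confKey dfs.blocksize
hdfs getconf -confKey dfs.replication
# overview of the datanodes, capacity and block counts
hdfs dfsadmin -report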

 

2. Kafka preparation

    The purpose of Kafka is to receive the "quasi-real-time" data. Given Kafka's characteristics, we try not to let it accumulate too much data; in other words, consumers should read messages as quickly as possible (with the shortest possible interruption time). Our cluster consists of 4 Kafka instances, each 8-core, 16G RAM, 2T disk; the replication factor is 2 and data is retained for 7 days. Both Kafka and Hadoop depend on a zookeeper cluster, which is built separately.

    The part that really tests the design is how to lay out the topics. If there are too many topics on the Kafka cluster, for example one topic per "tailed" file per instance, it puts a huge burden on Kafka's performance and also complicates message consumption. On the other hand, if there are too few topics, for example all "tailed" files of a project sharing one topic, then the data in a single topic comes from many files and sorting the data back out becomes much harder.

 

    My personal design principle is: within a project, each "tailed" file gets one topic, no matter how many instances the project deploys; the same "tailed" file across instances maps to the same topic. For example, the order-center project has a business log pay.log and 20 instances; the topic name is order-center-pay, and the pay.log of all 20 instances is collected into this topic. To make it easier to sort the data afterwards, every line of pay.log carries its own "local IP".
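
    As a sketch of how such a topic might be created (the zookeeper addresses and the /kafka chroot match the server.properties example below; the partition count of 4 is an assumption, sized to the expected consumer parallelism):

# one topic per "tailed" business file, shared by all instances of the project
bin/kafka-topics.sh --create \
  --zookeeper 10.0.1.10:2181,10.0.1.11:2181,10.0.2.10:2181/kafka \
  --topic order-center-pay \
  --partitions 4 \
  --replication-factor 2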

 

    Configuration example of kafka (server.properties): 

broker.id=1
listeners=PLAINTEXT://10.0.1.100:9092
port=9092
#host.name=10.0.1.100

num.network.threads=8
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka

num.partitions=1
num.recovery.threads.per.data.dir=1
default.replication.factor=2
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000


zookeeper.connect=10.0.1.10:2181,10.0.1.11:2181,10.0.2.10:2181/kafka
zookeeper.connection.timeout.ms=6000
delete.topic.enable=true
min.insync.replicas=1
zookeeper.session.timeout.ms=6000

 

    In the above configuration there are two places that need special attention. The first is listeners and host.name: we specify the address and port Kafka binds to in listeners, usually the local intranet IP, and leave host.name empty; if this is not set properly, Flume will not be able to reach Kafka (address resolution fails). The second is the zookeeper.connect address: we append a root path (chroot) to the address list, and later, when Flume acts as a producer, the zookeeper address it is given must carry the same root path. In addition, there are other important parameters such as the replication factor, the number of partitions, and so on.
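
    A quick, hedged way to confirm that the topics really live under the /kafka chroot (using the 0.9 command-line tools from the kafka installation directory):

# list topics through the same chroot; without the /kafka suffix the topics will not be found
bin/kafka-topics.sh --list \
  --zookeeper 10.0.1.10:2181,10.0.1.11:2181,10.0.2.10:2181/kafka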

 

    Kafka is not the focus of this article, so please refer to my other blog posts for more information.

 

3. Flume configuration

    According to our architecture design, real-time data goes to Kafka and historical data goes to HDFS; Flume can fully meet both requirements. Flume's Spooling Directory source can scan all files in a directory and send newly appearing files to HDFS; its TAILDIR source can watch one (or more) files, continuously tail whatever is appended to them, and send it to Kafka. Basic concepts:

    1. source: the data input end; it specifies where Flume collects data (streams) from. Flume supports a variety of sources, such as "Avro Source" (similar to RPC; receives events sent by a remote Avro client), "Thrift Source" (data sent by a Thrift client), "Exec Source" (the output of a linux command), "Kafka Source", "Syslog Source", "HTTP Source", etc.

    This article mainly involves Spooling and Taildir. Taildir is a new feature in 1.7; before that, if you wanted tail behavior you had to simulate it with "Exec Source" or develop your own source.

 

    2. channel: a channel is simply a buffer pool for the data stream. Data from multiple sources can be sent to one channel, and data can be cached, temporarily stored, and traffic-shaped inside the channel. Currently Flume supports "Memory Channel" (data held in memory, limited space), "JDBC Channel" (data staged in a database to guarantee recovery), "Kafka Channel" (staged in Kafka) and "File Channel" (staged in local files). Apart from Memory, the other channels support persistence, which provides an effective guarantee in scenarios such as failure recovery or a sink being offline or missing, avoiding message loss and absorbing traffic bursts.

 

    3. sink: the stream output. Each channel can correspond to a sink, and each sink specifies a storage type. The most commonly used sink types in Flume are currently "HDFS Sink" (saves data to HDFS), "Hive Sink", "Logger Sink" (a special case: outputs data to the console at INFO level, usually used for testing), "Avro Sink", "Thrift Sink", "File Roll Sink" (dumps to the local file system), and so on.

 

    This article does not introduce Flume's features in detail; a rough grasp of the concepts is enough. The source-channel-sink model is a pipeline: the data of one source can be "copied" to multiple channels (fan-out), multiple sources can also be aggregated into one channel, and each channel corresponds to a sink. Each type of source, channel and sink has its own configuration properties for finer control of the data flow.
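
    To make the model concrete, here is a minimal sketch (not part of the final setup; the agent name "test" and the tailed path are assumptions) that wires an Exec source through a Memory channel to a Logger sink:

##minimal test pipeline: exec source -> memory channel -> logger sink
test.sources=s1
test.channels=c1
test.sinks=k1

test.sources.s1.type=exec
test.sources.s1.command=tail -F /tmp/test.log
test.sources.s1.channels=c1

test.channels.c1.type=memory
test.channels.c1.capacity=1000
test.channels.c1.transactionCapacity=100

test.sinks.k1.type=logger
test.sinks.k1.channel=c1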

 

    Flume is developed in Java, so before starting Flume we need to set options such as the JVM heap size to prevent Flume from negatively affecting the other applications on the host. In the conf directory, modify flume-env.sh:

export JAVA_OPTS="-Dcom.sun.management.jmxremote -verbose:gc -server -Xms1g -Xmx1g -XX:NewRatio=3 -XX:SurvivorRatio=8 -XX:MaxMetaspaceSize=128M -XX:+UseConcMarkSweepGC -XX:CompressedClassSpaceSize=128M -XX:MaxTenuringThreshold=5 -XX:CMSInitiatingOccupancyFraction=70 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/opt/flume/logs/server-gc.log.$(date +%F) -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=64M"

    I limit the JVM heap size of Flume to 1G. If your machine has a lot of free memory or collects a lot of data files, you can consider increasing this value appropriately.

 

    Next comes Flume's startup configuration file (flume-conf.properties). The following configuration simulates a scenario of collecting nginx logs:

 

##main
nginx.channels=ch-spooling ch-tail
nginx.sources=spooling-source tail-source
nginx.sinks=hdfs-spooling kafka-tail

##channel
nginx.channels.ch-spooling.type=file
nginx.channels.ch-spooling.checkpointDir=/data/flume/.flume/file-channel/ch-spooling/checkpoint
nginx.channels.ch-spooling.dataDirs=/data/flume/.flume/file-channel/ch-spooling/data
nginx.channels.ch-spooling.capacity=100000
nginx.channels.ch-spooling.transactionCapacity=100

nginx.channels.ch-tail.type=file
nginx.channels.ch-tail.checkpointDir=/data/flume/.flume/file-channel/ch-tail/checkpoint
nginx.channels.ch-tail.dataDirs=/data/flume/.flume/file-channel/ch-tail/data
nginx.channels.ch-tail.capacity=100000
nginx.channels.ch-tail.transactionCapacity=100

##source, historical data
nginx.sources.spooling-source.type=spooldir
nginx.sources.spooling-source.channels=ch-spooling
##Specify the logs directory
nginx.sources.spooling-source.spoolDir=/data/logs/nginx
##Enable the file header; events will then carry the absolute file path in this header
nginx.sources.spooling-source.fileHeader=true
nginx.sources.spooling-source.fileHeaderKey=file
##also add the file's basename as a header
nginx.sources.spooling-source.basenameHeader=true
nginx.sources.spooling-source.basenameHeaderKey=basename
##Whether to delete the source file after its logs have been sent;
##"immediate" deletes it right after sending, which saves disk space
nginx.sources.spooling-source.deletePolicy=never
##The list of included files; we agree that all logs are rotated daily,
##in the format "<filename>.log-<yyyyMMdd>"
##The current (active) log file is not matched.
nginx.sources.spooling-source.includePattern=^.*\.log-.+$
nginx.sources.spooling-source.consumeOrder=oldest
nginx.sources.spooling-source.recursiveDirectorySearch=false
nginx.sources.spooling-source.batchSize=100
nginx.sources.spooling-source.inputCharset=UTF-8
##If the codec fails, ignore the corresponding character.
nginx.sources.spooling-source.decodeErrorPolicy=IGNORE
nginx.sources.spooling-source.selector.type=replicating
nginx.sources.spooling-source.interceptors=i1 i2
##Using the timestamp interceptor will add a timestamp field to the event header
nginx.sources.spooling-source.interceptors.i1.type=timestamp
##Using the host interceptor will add the "host" field to the event header with the value ip
nginx.sources.spooling-source.interceptors.i2.type=host
nginx.sources.spooling-source.interceptors.i2.useIP=true
nginx.sources.spooling-source.interceptors.i2.hostHeader=host

nginx.sources.tail-source.type=TAILDIR
nginx.sources.tail-source.channels=ch-tail
##I don't want to write flume extension code, so I specify a group for each tail file
nginx.sources.tail-source.filegroups=www error
nginx.sources.tail-source.filegroups.www=/data/logs/nginx/www.log
nginx.sources.tail-source.filegroups.error=/data/logs/nginx/error.log
##For taildir, the read position in each tailed file is saved periodically so tailing can resume after an interruption
##(stored as a json file)
nginx.sources.tail-source.positionFile=/data/flume/.flume/ch-tail/taildir_position.json
##For each tail file, create a kafka topic
nginx.sources.tail-source.headers.www.topic=nginx-www
nginx.sources.tail-source.headers.error.topic=nginx-error
nginx.sources.tail-source.skipToEnd=true
nginx.sources.tail-source.interceptors=i1 i2
nginx.sources.tail-source.interceptors.i1.type=timestamp
nginx.sources.tail-source.interceptors.i2.type=host
nginx.sources.tail-source.interceptors.i2.useIP=true
nginx.sources.tail-source.interceptors.i2.hostHeader=host

##spooling historical data
nginx.sinks.hdfs-spooling.channel=ch-spooling
nginx.sinks.hdfs-spooling.type=hdfs
nginx.sinks.hdfs-spooling.hdfs.fileType=DataStream
nginx.sinks.hdfs-spooling.hdfs.writeFormat=Text
##The hdfs path encodes the log classification: the first level is <project>,
##the second level is <date>,
##so the logs of one project are aggregated by day.
nginx.sinks.hdfs-spooling.hdfs.path=hdfs://hadoop-ha/logs/nginx/%Y-%m-%d
##hdfs file name includes the host address where the source file is located to facilitate data sorting
nginx.sinks.hdfs-spooling.hdfs.filePrefix=%{basename}.[%{host}]
##For spooling files, the file name is as close as possible to the original name, so the suffix value is empty
nginx.sinks.hdfs-spooling.hdfs.fileSuffix=
##The file is suffixed with .tmp during the synchronization process
nginx.sinks.hdfs-spooling.hdfs.inUseSuffix=.tmp
##Do not roll new files according to the time interval
nginx.sinks.hdfs-spooling.hdfs.rollInterval=0
##1G: when the file size reaches 1G, roll over to a new file
nginx.sinks.hdfs-spooling.hdfs.rollSize=1073741824
##Do not roll new files based on the number of events
nginx.sinks.hdfs-spooling.hdfs.rollCount=0
##The IO channel will be closed after 60s of idle time
nginx.sinks.hdfs-spooling.hdfs.idleTimeout=60


##tail real-time data
nginx.sinks.kafka-tail.channel=ch-tail
nginx.sinks.kafka-tail.type=org.apache.flume.sink.kafka.KafkaSink
##kafka cluster addresses; listing a subset of the brokers is enough
nginx.sinks.kafka-tail.kafka.bootstrap.servers=10.0.3.78:9092,10.0.4.78:9092,10.0.4.79:9092,10.0.3.77:9092
##Note that the topic value does not support parameterization,
##so to stay flexible we pass the topic through the event header instead
#nginx.sinks.kafka-tail.kafka.topic=nginx-%{filename}
##default is 100; larger values improve batching efficiency but increase latency (we want quasi-real-time)
nginx.sinks.kafka-tail.flumeBatchSize=32
nginx.sinks.kafka-tail.kafka.producer.acks=1
##use the Avro event format; the event would then carry the flume headers
##default: false
nginx.sinks.kafka-tail.useFlumeEventFormat=false

 

    This is a very long configuration file. You can check the meaning of each configuration item on the official website. We need to pay attention to a few things:

    1) It is best to specify the checkpoint and data directories explicitly; this is very helpful for later troubleshooting.

    2) For the channel, we need to declare its type explicitly. We usually use file, which helps absorb traffic bursts, provided the disk holding the specified directories has ample space and is reasonably fast.

    3) Headers are not written into the sink output; header information is only valid while an event passes through source, channel and sink. We can use headers to tag characteristics of an event stream.

    4) For the spooling source, it is recommended to enable the basename header, which carries the actual file name; this header can then be used at the sink stage.

    5) Everything related to batchSize is a trade-off: balance sending efficiency against latency.

    6) Interceptors are a very important Flume feature; they let us perform custom operations right after the source stage, such as adding headers or correcting content. Keep their performance cost in mind.

    7) For taildir, multiple values can be specified in filegroups. My design principle is one tailed file per group name. At present there is no particularly good way to wildcard-match the tailed files, so we declare them one by one.

    8) For the kafka sink, the topic can be specified via "kafka.topic" or via a header (headers.www.topic, where "www" is the group name and "topic" is the header key). For flexibility, I prefer to specify topics through headers.

    9) With the hdfs sink, pay attention to when files are rolled. The parameters that currently affect roll timing are "minBlockReplicas", "rollInterval" (by time interval), "rollSize" (by file size) and "rollCount" (by number of events); in addition, the "round"-related options can also influence when new files are rolled.

    The hdfs sink tormented me for a long time: with unsuitable settings Flume generates a new hdfs file on every flush, which eventually produces many small files, whereas I want one source file to end up as one file in hdfs. I therefore control rolling by file size: my nginx log files usually do not exceed 1G, so I set rollSize to 1G to make sure a file is not rolled part-way through. In addition, every file in hdfs gets a numeric suffix; this number is an internal counter and there is currently no configuration option to remove it, so we accept it for now.
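
    A simple way to check what actually lands in HDFS after a while (the path layout follows hdfs.path above; it assumes the hdfs client on this machine is configured for the hadoop-ha nameservice):

# list today's collected nginx logs; each source file should end up as one file,
# named "<basename>.[<host>].<counter>", with .tmp only while it is still being written
hdfs dfs -ls /logs/nginx/$(date +%Y-%m-%d)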

    The following is an example of log_format in nginx. In the first position of each log line we put $hostname to mark the machine the file came from, which makes it easier for Kafka consumers to sort the data.

log_format  main  '$hostname|$remote_addr|$remote_user|$time_local|$request|'
                      '$status|$body_bytes_sent|$http_referer|$request_id|'
                      '$http_user_agent|$http_x_forwarded_for|$request_time|$upstream_response_time|$upstream_addr|$upstream_connect_time';

 

    Flume's configuration can also be stored in zookeeper, a new feature in version 1.7 that centralizes configuration; you may want to try that approach. However, for the sake of configuration visibility I did not put the configuration in zookeeper but on a central configuration/control machine, and I deploy Flume through jenkins. Each project is deployed across multiple nodes and every node runs a Flume instance; they all use the same configuration file, so when deploying Flume you simply scp the latest configuration from the central machine. (This presupposes an automated deployment platform.)
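
    A rough sketch of that deployment step (the host name app-node-01, the user and the paths are assumptions, and in practice this would be wrapped in a jenkins job rather than run by hand):

# push the shared agent configuration from the central control machine to an application node,
# then start flume there in the background
scp /opt/configs/flume/nginx/flume-conf.properties deploy@app-node-01:/opt/flume/conf/
ssh deploy@app-node-01 'cd /opt/flume && nohup bin/flume-ng agent --conf conf \
  --conf-file conf/flume-conf.properties --name nginx \
  -Dflume.root.logger=INFO,LOGFILE > /dev/null 2>&1 &'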

    Notice that every item in the configuration file starts with "nginx". This prefix is the agent name; it can be named after the actual business, but it must be specified when starting Flume. In principle one flume-conf.properties file can declare the configuration of multiple agents, but we usually do not recommend that.

 

    We deploy Flume on the machine where nginx runs, adjust the configuration file, and start it. The Flume startup command:

nohup bin/flume-ng agent --conf conf --conf-file flume-conf.properties --name nginx -Dflume.root.logger=INFO,CONSOLE -Dorg.apache.flume.log.printconfig=true -Dorg.apache.flume.log.rawdata=true

 

    In the above startup command, --conf-file specifies the path and name of the configuration file, and --name specifies the agent name (it must match the prefix of the configuration items in the file). The logger level is INFO in production; during testing you can specify "DEBUG,LOGFILE" to help troubleshoot problems.
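
    Once the agent is up, a quick sanity check that tailed lines really reach Kafka (a sketch using the 0.9 console consumer, run from the kafka installation directory; the zookeeper address and /kafka chroot come from server.properties above):

# consume a few messages from the per-file topic to confirm end-to-end delivery
bin/kafka-console-consumer.sh \
  --zookeeper 10.0.1.10:2181,10.0.1.11:2181,10.0.2.10:2181/kafka \
  --topic nginx-www --from-beginning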

 

4. Tomcat business log collection

    When collecting tomcat business logs with Flume there are several points to adjust; my design intent is:

    1) Collect all historical logs in HDFS, including catalina, access_log, business logs, etc.

    2) Kafka only collects access_log and specified business logs in real time; we can use these data for business monitoring, etc.

 

    1. Tomcat log format

    We first adjust logging.properties in tomcat:

1catalina.org.apache.juli.AsyncFileHandler.level = FINE
1catalina.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
##note: the prefix now ends with ".log." and the suffix is left empty
1catalina.org.apache.juli.AsyncFileHandler.prefix = catalina.log.
1catalina.org.apache.juli.AsyncFileHandler.suffix =

2localhost.org.apache.juli.AsyncFileHandler.level = FINE
2localhost.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.AsyncFileHandler.prefix = localhost.log.
2localhost.org.apache.juli.AsyncFileHandler.suffix =

3manager.org.apache.juli.AsyncFileHandler.level = FINE
3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.AsyncFileHandler.prefix = manager.log.
3manager.org.apache.juli.AsyncFileHandler.suffix =

4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.log.
4host-manager.org.apache.juli.AsyncFileHandler.suffix =

 

    Because tomcat by default rolls its log files as "catalina.<yyyy-MM-dd>.log", we adjust the naming to "catalina.log.<yyyy-MM-dd>", which the configuration above achieves. In the end we want both tomcat's own logs and the application's business logs to roll into files named "<filename>.log.<yyyy-MM-dd>", which makes it easy to write a regular expression in Flume to spool these historical files.
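
    The corresponding Flume spooling pattern might then look like the sketch below (the agent name "tomcat" and the spoolDir path are assumptions; the other source settings mirror the nginx example):

##match rotated files named "<filename>.log.<yyyy-MM-dd>"; the active files (catalina.out, xxx.log) are not matched
tomcat.sources.spooling-source.spoolDir=/data/tomcat/logs
tomcat.sources.spooling-source.includePattern=^.*\.log\.\d{4}-\d{2}-\d{2}$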

 

    The configuration file of Flume is basically similar to that of nginx, so I won't go into details here.

 

   2. Business log

    We agreed that the application's business logs are also printed into the ${tomcat_home}/logs directory, i.e. the same directory as catalina.out. Each business log produces a new historical file every day, with a ".yyyy-MM-dd" suffix; such files are treated as history files and synced to HDFS. Real-time log lines are still sent to Kafka. The topic design is the same as for nginx: each project file corresponds to one topic, and the lines in each topic come from multiple application instances, mixed together in the Kafka topic. To make log sorting possible we need to add an IP field to every log line. I found that logback does not support printing the local IP out of the box, so we work around it: we define an environment variable LOCAL_IP in tomcat's startup script and then reference it in logback.xml.

##catalina.sh
##add
export LOCAL_IP=`hostname -I`
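##note (assumption): `hostname -I` may return several addresses separated by spaces;
##if that is the case on your hosts, a common workaround is to keep only the first one, e.g.:
#export LOCAL_IP=`hostname -I | awk '{print $1}'`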

 

    In the project's logback.xml you can then reference it through the ${LOCAL_IP} variable:

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/order_center.log</file>
        <Append>true</Append>
        <prudent>false</prudent>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${LOCAL_IP} %d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>${LOG_HOME}/order_center.log.%d{yyyy-MM-dd}</FileNamePattern>
            <maxHistory>72</maxHistory>
        </rollingPolicy>
    </appender>

 

    3. access_log

    Tomcat's access_log is very important; it records a lot of information that helps us analyze business problems, so we need to standardize its format. We modify the following in server.xml:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
            prefix="localhost_access_log" suffix=".log" renameOnRotate="true"
            pattern="%A|%h|%m|%t|%D|&quot;%r&quot;|&quot;%{Referer}i&quot;|&quot;%{User-Agent}i&quot;|%s|%S|%b|%{X-Request-ID}i|%{begin:msec}t|%{end:msec}t" />

 

    "renameOnRotate" indicates whether to rename the access_log at the time of rotation. We set it to true, so that the file name of the access_log does not have a date format by default, and the time format is added during the rotation. "%A" represents the local ip address of the local machine, which is also a marker for kakfa sorting logs. X-Request-ID is a trace-ID customized by the nginx layer for tracking requests. If you do not set it, then can be removed.

 

    At this point the log collection system is basically complete, and the groundwork for sorting log data out of Kafka has been laid. Hooking up ELK, storm real-time analysis and the like later will also be relatively easy.

 

5. Problem summary

    1. flume + hdfs:

    1) We first copy hdfs-site.xml and core-site.xml to the ${flume_home}/conf directory, and make sure the Flume machine can reach all hdfs nodes (network isolation and firewalls may prevent them from communicating normally).

    2) In the Flume root directory, create a plugins.d/hadoop directory with lib, libext and native subdirectories, and copy the hadoop-related dependencies into libext (a shell sketch of both steps follows the file lists below):

commons-configuration-1.6.jar
hadoop-annotations-2.6.5.jar
hadoop-auth-2.6.5.jar
hadoop-common-2.6.5.jar
hadoop-hdfs-2.6.5.jar
htrace-core-3.0.4.jar

 

    Also copy the following files to the native directory:

libhadoop.a
libhadooppipes.a
libhadoop.so.1.0.0
libhadooputils.a
libhdfs.a
libhdfs.so.0.0.0

 

    These dependency packages can be found in the hadoop deployment package.
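
    A rough sketch of steps 1) and 2), assuming hadoop was unpacked under /opt/hadoop-2.6.5 and Flume under /opt/flume (the exact jar locations inside the hadoop package may differ slightly):

HADOOP_HOME=/opt/hadoop-2.6.5
FLUME_HOME=/opt/flume

# 1) client-side cluster configuration, so the hdfs sink can resolve the "hadoop-ha" nameservice
cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HADOOP_HOME/etc/hadoop/core-site.xml $FLUME_HOME/conf/

# 2) plugin layout expected by Flume: plugins.d/<plugin>/{lib,libext,native}
mkdir -p $FLUME_HOME/plugins.d/hadoop/{lib,libext,native}
cp $HADOOP_HOME/share/hadoop/common/lib/commons-configuration-1.6.jar \
   $HADOOP_HOME/share/hadoop/common/lib/hadoop-annotations-2.6.5.jar \
   $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-2.6.5.jar \
   $HADOOP_HOME/share/hadoop/common/hadoop-common-2.6.5.jar \
   $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.6.5.jar \
   $HADOOP_HOME/share/hadoop/common/lib/htrace-core-3.0.4.jar \
   $FLUME_HOME/plugins.d/hadoop/libext/
cp $HADOOP_HOME/lib/native/* $FLUME_HOME/plugins.d/hadoop/native/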

 

    2. Startup exception:

2016-11-21 12:17:51,419 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:158)] Unable to deliver event. Exception follows.
java.lang.IllegalStateException: Channel closed [channel=ch-tail]. Due to java.io.IOException: Cannot lock /root/.flume/file-channel/checkpoint. The directory is already locked. [channel=ch-tail]

 

    The error says the checkpoint directory is already locked and cannot be locked again. Solution: if one Flume agent has multiple channels of type file, each of them must be given its own checkpoint and data directories instead of the defaults.

 

    3. hdfs sink:

    The value of hdfs.fileSuffix does not support parameterization. I hoped to use headers in fileSuffix, e.g. hdfs.fileSuffix=%{filename}, but after many attempts I found that Flume does not support this for now.

 

    4. In Spooling mode, collected log files are renamed with the suffix ".COMPLETED". If a file with the same name is created again by hand, Flume reports an error and stops collecting data.

    5. Runtime exception:

Nov 2016 17:15:04,737 WARN  [kafka-producer-network-thread | producer-1] (org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.handleResponse:582)  - Error while fetching metadata with correlation id 96 : {nginx-www=UNKNOWN}

 

    This error means Flume cannot establish a connection with the Kafka cluster and cannot obtain metadata. Usually you need to adjust the "listeners" and "host.name" items in Kafka's server.properties: "listeners" should explicitly specify the intranet IP the broker binds to, and "host.name" should keep its default or not be declared at all.
