filebeat-kafka-elk build

This document describes how to build a log collection, streaming, parsing, storage, and visualization system with filebeat, kafka, and elk (Elasticsearch, Logstash, Kibana).

I. Description

    1. A k8s Pod (container) mounts a volume so that log data written inside the container is persisted to a directory on the host disk.

    2. Filebeat monitors that directory on the host and collects the production log data.

    3. Kafka serves as the message queue for the log data; a topic is created to buffer it.

    4. Logstash consumes the log data from Kafka and stores it in Elasticsearch, creating the index at the same time.

    5. Elasticsearch stores the log data.

    6. Kibana discovers the index, searches the log data, and visualizes it.

II. Environment build & configuration

    Build:

    1.filebeat, kafka: download the tar.gz packages from the official websites.

    2.elk: pull the images with docker-compose.

      

    Configuration:

    1.filebeat

      Input: the log path on the host disk

      

      Output: the Kafka producer side (the shipped data can be looked up by topic)

      

      If Kibana is deployed separately, specify the Kibana host, as in the filebeat.yml sketch below:

      
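      A minimal filebeat.yml sketch covering the three parts above; the log path is an assumption, and the Kafka and Kibana addresses are taken from the hosts used elsewhere in this document:

      filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /data/logs/*.log            # assumed host directory backing the Pod volume

      output.kafka:
        hosts: ["10.0.6.25:9092"]       # Kafka broker
        topic: "elk"                    # must match the topic created below
        required_acks: 1

      setup.kibana:
        host: "10.0.6.114:5601"         # only needed when Kibana runs on a separate host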

    2.kafka (after unpacking):

      Kafka listening IP and port:

      
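      A sketch of the relevant lines in config/server.properties; the IP matches the one used for the topic commands below:

      listeners=PLAINTEXT://10.0.6.25:9092
      advertised.listeners=PLAINTEXT://10.0.6.25:9092
      zookeeper.connect=10.0.6.25:2181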

      Zookeeper listening port:

      
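      In config/zookeeper.properties, the stock defaults already listen on 2181:

      dataDir=/tmp/zookeeper
      clientPort=2181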

      Create a topic (named elk): bin/kafka-topics.sh --create --zookeeper 10.0.6.25:2181 --replication-factor 1 --partitions 1 --topic elk

      List topics: bin/kafka-topics.sh --list --zookeeper 10.0.6.25:2181

      Verify that the topic was created successfully:

      
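      If creation succeeded, the list command above should print the topic name, i.e. a single line reading elk.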

    3.logstash:

      Input: consume from Kafka, specifying the topic; Output: store the data in Elasticsearch and create the index.

      
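      A minimal pipeline .conf sketch; the Kafka address matches the broker above, while the Elasticsearch address is an assumption (here it shares the Kibana host):

      input {
        kafka {
          bootstrap_servers => "10.0.6.25:9092"
          topics => ["elk"]               # the topic created above
          codec => "json"
        }
      }

      output {
        elasticsearch {
          hosts => ["10.0.6.114:9200"]    # assumed Elasticsearch address
          index => "elk"                  # index name matched by the Kibana index pattern
        }
      }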

    4.elasticsearch:

      
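      A minimal elasticsearch.yml sketch, assuming a single-node instance inside Docker:

      cluster.name: elk
      network.host: 0.0.0.0
      discovery.type: single-node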

    5.kibana:

      
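      A minimal kibana.yml sketch (in 6.x the second key is elasticsearch.url instead); the Elasticsearch address is an assumption:

      server.host: "0.0.0.0"
      elasticsearch.hosts: ["http://10.0.6.114:9200"]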

III. kibana: http://10.0.6.114:5601

    Management:

    Index Patterns, create index pattern: elk

    Discover: the log data can now be seen.

    At this point, the environment has been built successfully.

IV. Run

    1.filebeat, zookeeper, and kafka are started with run-zkf.sh:

    
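    A sketch of what run-zkf.sh might contain; the install directories are assumptions:

    #!/bin/bash
    # start zookeeper (bundled with kafka), then kafka, then filebeat

    cd /opt/kafka                       # assumed Kafka install dir
    bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
    sleep 5                             # give zookeeper time to come up
    bin/kafka-server-start.sh -daemon config/server.properties

    cd /opt/filebeat                    # assumed Filebeat install dir
    nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &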

    2.elk:

      docker-compose up -d

      docker-compose.yml:

      
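      A minimal docker-compose.yml sketch for the ELK half; the image version and the mounted pipeline path are assumptions:

      version: "3"
      services:
        elasticsearch:
          image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
          environment:
            - discovery.type=single-node
          ports:
            - "9200:9200"
        logstash:
          image: docker.elastic.co/logstash/logstash:7.3.0
          volumes:
            - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
          depends_on:
            - elasticsearch
        kibana:
          image: docker.elastic.co/kibana/kibana:7.3.0
          ports:
            - "5601:5601"
          depends_on:
            - elasticsearch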

That is all.

 
