ELK + Kafka + Filebeat

ELK + Kafka + Filebeat learning

https://blog.csdn.net/qq_21383435/article/details/79463832

https://blog.csdn.net/xiangyuan1988/article/details/78977471

https://www.jianshu.com/p/f149a76ea5b5

https://blog.csdn.net/qq_21383435/article/category/7486820

Contents:
1. ELK + Kafka + Filebeat architecture
2. Why use ELK
3. Verification process: Filebeat->Logstash->Kafka->Logstash->Elasticsearch->Kibana

1 ELK + Kafka + Filebeat Architecture

Figure 1.1 ELK + Kafka + Filebeat architecture diagram

• Filebeat:

Log collection.

• Logstash:

Official description: "Logstash: Collect, Enrich and Transport." Logstash is a tool for managing events and logs: it collects logs, parses them, and stores them for later use (e.g. searching).

• Shipper:

A client that runs on each target system and is responsible for collecting log messages.

• Kafka:

As a message queue, Kafka decouples processing steps and improves scalability. It also buffers traffic peaks: key components can absorb sudden bursts of requests instead of collapsing under the overload.

• Kerberos:

A secure network authentication system based on shared-key symmetric encryption. The password is never transmitted over the network; instead it is used as a symmetric encryption key, and identity is verified by whether the exchange can be decrypted.

• Indexer:

Responsible for aggregating logs and forwarding them to Elasticsearch in the manner specified by the administrator.

• Elasticsearch:

Official description: "Elasticsearch: Store, Search and Analyze." Stores all the logs and provides search and analysis over them.

• Search Guard:

Search Guard is a security plugin for Elasticsearch. It provides authentication and authorization for backend systems such as Kerberos, and adds audit logging and document/field level security to Elasticsearch.

• Kibana:

Official description: "Kibana: Explore, Visualize and Share." Provides a log-analysis-friendly web interface for Logstash and Elasticsearch, helping to summarize, analyze, and search important data.

2 Why use ELK
• Developers can retrieve logs directly in Kibana instead of going through operations staff, which reduces the operations workload and also makes logs easier to manage;

• It solves the problem that a single huge log file is difficult to analyze and search with common text tools;

• Logs of many types are scattered across different locations and hard to find; ELK brings them under unified management.

3 Verification process: Filebeat->Logstash->Kafka->Logstash->Elasticsearch->Kibana
software version description:

Filebeat version: 5.2.1

Logstash version: 5.2.2

Kafka version: 0.10.1

Elasticsearch version: 5.2.2

Kibana version: 5.2.2

JDK version: 1.8.0_112

(1) Filebeat was installed via yum, so edit filebeat.yml in the /etc/filebeat configuration directory: set the Logstash host under output.logstash, and configure the paths of the log files to monitor. Here the monitored file is dengqy.log in the /root/remoa directory.

Figure 3.1 Screenshot 1

Figure 3.2 Screenshot 2
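
A minimal filebeat.yml matching this step might look like the sketch below (the Logstash hostname is an assumption; the log path follows the walkthrough and the port matches the beats input configured later in remoatest2.conf):

```yaml
filebeat.prospectors:              # Filebeat 5.x calls its inputs "prospectors"
- input_type: log
  paths:
    - /root/remoa/dengqy.log       # log file monitored in this walkthrough

output.logstash:
  hosts: ["hdp1.example.com:5044"] # assumed host; port 5044 matches the beats input
```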

(2) View the hosts location of the broker server in kafka:

Figure 3.3 Screenshot 3
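
If the topic does not exist yet, it can be created with Kafka's standard topic tool; a sketch for this 0.10.1 deployment, assuming ZooKeeper runs on hdp1.example.com:2181 (the ZooKeeper address and replication settings are assumptions):

```
/opt/package/kafka_2.10-0.10.1.0/bin/kafka-topics.sh --create \
  --zookeeper hdp1.example.com:2181 \
  --replication-factor 1 --partitions 1 \
  --topic dengqytopic
```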

(3) View the host location of the server in Elasticsearch: In the /opt/package/elasticsearch-5.2.2/config directory, check the IP address of the host in the elasticsearch.yml file and the port running the HTTP service. The port annotation uses the default port 9200.

Figure 3.4 Screenshot 4
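
The relevant lines in elasticsearch.yml are roughly as follows (the bind address is an assumption; leaving http.port commented out keeps the default 9200):

```yaml
network.host: kdc1.example.com   # assumed; matches the hosts list used later for the Elasticsearch output
# http.port: 9200                # commented out, so the default port 9200 applies
```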

(4) View the location of the keys file used for Elasticsearch authentication:

Figure 3.5 Screenshot 5

(5) Configure remoatest2.conf with Filebeat (beats) as the Logstash input and Kafka as the Logstash output.

input {
  beats {
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
  kafka {
    topic_id => "dengqytopic"
    bootstrap_servers => "hdp1.example.com:9092"
    security_protocol => "SASL_PLAINTEXT"
    sasl_kerberos_service_name => "kafka"
    jaas_path => "/tmp/kafka_jaas.conf.demouser"
    kerberos_config => "/etc/krb5.conf"
    compression_type => "none"
    acks => "1"
  }
}
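
The jaas_path above points at a Kafka client JAAS file. A typical keytab-based entry looks like the sketch below (the principal name and keytab path are assumptions):

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/demouser.keytab"
  principal="demouser@EXAMPLE.COM";
};
```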

(6) Configure remoatest3.conf with Kafka as the Logstash input and Elasticsearch as the Logstash output.

input {
  kafka {
    bootstrap_servers => "hdp1.example.com:9092"
    security_protocol => "SASL_PLAINTEXT"
    sasl_kerberos_service_name => "kafka"
    jaas_path => "/tmp/kafka_jaas.conf.demouser"
    kerberos_config => "/etc/krb5.conf"
    topics => ["dengqytopic"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["kdc1.example.com:9200","kdc2.example.com:9200"]
    user => "logstash"
    password => "logstash"
    action => "index"
    index => "logstash-dengqy-%{+YYYY.MM.dd}"
    truststore => "/opt/package/logstash-5.2.2/config/keys/truststore.jks"
    truststore_password => "whoami"
    ssl => true
    ssl_certificate_verification => true
    codec => "json"
  }
}

(7) Start consumer monitoring:

KAFKA_HEAP_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf -Xmx512M" /opt/package/kafka_2.10-0.10.1.0/bin/kafka-console-consumer.sh --topic dengqytopic --bootstrap-server hdp1.example.com:9092 --from-beginning --consumer.config /opt/package/kafka_2.10-0.10.1.0/config/consumer.properties

Figure 3.6 Screenshot 6

(8) Start the remoatest2.conf script:

bash ../bin/logstash -f remoatest2.conf

Figure 3.7 Screenshot 7

(9) Start the remoatest3.conf script:

bash bin/logstash -f config/remoatest3.conf

Figure 3.8 Screenshot 8

(10) Start filebeat collection:

service filebeat start

Figure 3.9 Screenshot 9

(11) Check the stdout{codec => rubydebug} standard output configured in remoatest3.conf:

Figure 3.10 Screenshot 10

(12) Check the stdout{codec => rubydebug} standard output configured in remoatest2.conf:

Figure 3.11 Screenshot 11

(13) Check that the logs are being processed by the consumer:

Figure 3.12 Screenshot 12

(14) View the corresponding index in Kibana:

GET _cat/indices

Figure 3.13 Screenshot 13

(15) View the detailed contents of the index. The total document count and the contents match, i.e. the data was stored in Elasticsearch successfully:

GET logstash-dengqy-2017.09.08/_search

Figure 3.14 Screenshot 14
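
The same checks can be run with curl from the command line (the credentials match the Elasticsearch output in remoatest3.conf; the CA certificate path is a placeholder):

```
curl -u logstash:logstash --cacert /path/to/ca.pem "https://kdc1.example.com:9200/_cat/indices"
curl -u logstash:logstash --cacert /path/to/ca.pem "https://kdc1.example.com:9200/logstash-dengqy-2017.09.08/_search"
```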

https://blog.csdn.net/remoa_dengqinyi/article/category/7153570
