Docker Notes (X): Building an ELK Log Analysis System with Docker

I had not paid attention to ELK for a while (Elasticsearch: a search engine that can store and index logs; Logstash: log transport and transformation; Kibana: a web UI for log visualization), and found that the latest version is already 7.4. So don't ask why programmers are so busy: when they are not working overtime, they are learning new frameworks.

This article walks through a quick way to build an ELK log analysis system with Docker.

1. Deploy ELK

There is a ready-made docker-compose configuration for deploying ELK on GitHub; download it directly:

git clone https://github.com/deviantony/docker-elk.git

 

If git is not installed, install it (yum install git), or download the repository archive from GitHub directly.

 

The current version of the repository is based on 7.2.1 (defined in the .env file in the docker-elk directory; it can be modified).
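Changing the version is a one-line edit; a sketch, assuming the variable in .env is named ELK_VERSION as in the docker-elk repository, simulated here on a local copy of the file:

```shell
# illustrative only: docker-elk pins the stack version in .env; editing the
# ELK_VERSION line switches all three images at once. We simulate the edit
# on a locally created copy of the file.
printf 'ELK_VERSION=7.2.1\n' > .env
sed -i 's/^ELK_VERSION=.*/ELK_VERSION=7.4.0/' .env
cat .env
```

After the edit, docker-compose will pull the 7.4.0 images on the next start.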

 

Next, adjust the configuration as needed.

 

Modify docker-compose.yml to set the es password:

vim docker-compose.yml

# In the environment section of the elasticsearch service, raise the JVM heap
# to 1g and set the password for the built-in elastic user
environment:
  ES_JAVA_OPTS: "-Xmx1g -Xms1g"
  ELASTIC_PASSWORD: Passw0rd

# Change the default logstash port mapping from 5000 to 5044 (filebeat is used
# later; keeping 5000 also works, as long as the filebeat output matches)
ports:
  - "5044:5044"
  - "9600:9600"

# Also raise the logstash JVM heap a little
environment:
  LS_JAVA_OPTS: "-Xmx512m -Xms512m"

# In the volumes section, mount the es data directory to persist the data,
# so that it is not lost when the container is destroyed
volumes:
  - /mnt/elk/esdata:/usr/share/elasticsearch/data

Note: inside the container, the es process is started by the elasticsearch user, so when you mount a persistent data directory you need to set its permissions, otherwise startup will fail for lack of access. The elasticsearch user's uid is 1000, so you can create a user with uid 1000 on the host and make it the owner of the directory.
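A minimal host-side preparation, assuming the mount path from the compose snippet above (run as root):

```shell
# create the host data directory and hand it to uid 1000, the uid of the
# elasticsearch user inside the container
mkdir -p /mnt/elk/esdata
chown -R 1000:1000 /mnt/elk/esdata
```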

 

Modify the es configuration file, changing xpack from trial to basic to disable the paid features:

vim elasticsearch/config/elasticsearch.yml

#xpack.license.self_generated.type: trial
xpack.license.self_generated.type: basic

 

Modify the logstash configuration file to set the es username and password:

vim logstash/config/logstash.yml

xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: Passw0rd

 

Modify the logstash pipeline configuration:

vim logstash/pipeline/logstash.conf

# configure the codec according to your log format
input {
  beats {
    port => 5044
    codec => "json"
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "Passw0rd"
  }
}
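The filter placeholder above is where log parsing goes. A sketch of a typical grok + date filter for Apache/nginx-style access logs (an example only; adjust the patterns to your actual log format):

```
filter {
  grok {
    # parse a combined-format access log line into structured fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # use the timestamp from the log line as the event time
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```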

 

Modify the kibana configuration to set the es password:

vim kibana/config/kibana.yml

## X-Pack security credentials
elasticsearch.username: elastic
elasticsearch.password: Passw0rd

 

After adjusting the configuration, run docker-compose up -d to start the three containers: es, logstash and kibana. The first start has to download all the images, so it will be slow. Once everything is up, visit <ELK server IP>:5601 to reach the kibana page.
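Once the stack is up, a quick way to verify that es is reachable and the password works (run on the ELK server; replace the password if you changed it):

```shell
# check that all three containers are running
docker-compose ps
# query elasticsearch as the elastic user; should return cluster info JSON
curl -u elastic:Passw0rd http://localhost:9200
```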

This starts a single es container by default; to run several, see: https://github.com/deviantony/docker-elk/wiki/Elasticsearch-cluster

 

2. Deploy filebeat

filebeat is deployed on the servers that produce the logs. First pull the image:

docker pull docker.elastic.co/beats/filebeat:7.3.1

 

Download a sample configuration file:

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.3/deploy/docker/filebeat.docker.yml

 

Modify the configuration file:

vim filebeat.docker.yml

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

#filebeat.autodiscover:
#  providers:
#    - type: docker
#      hints.enabled: true

#processors:
#- add_cloud_metadata: ~
#- add_host_metadata: ~

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/elk/*.log

output.logstash:
  hosts: ["<your ELK server IP>:5044"]

 

Some unnecessary configuration has been removed; what remains is basically one input and one output. The input paths section points at the directory containing your logs. Note that this is a directory inside the container; the real log directory on the server must be mounted onto this path when the container is started.

Start the container:

docker run -d --name filebeat --user=root \
  -v $(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro \
  -v /mnt/logs/elk/:/var/log/elk/ \
  docker.elastic.co/beats/filebeat:7.3.1 filebeat -e --strict.perms=false

 

This mounts both the configuration file and the actual log directory into the container.

Once started, logs in the mounted directory are shipped through filebeat and logstash to es; you can then go into kibana, create an index pattern on the log data, and query it.
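To confirm data is flowing end to end, you can list the indices on the ELK server; with the pipeline's index pattern above, filebeat indices are named like filebeat-7.3.1-YYYY.MM.dd:

```shell
# list indices; replace <elk-server-ip> with your ELK server address
curl -u elastic:Passw0rd 'http://<elk-server-ip>:9200/_cat/indices?v'
```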

 

3. Summary

The last time I built a log analysis system with ELK, it was version 5.1; two years later it has reached 7.4, and the configuration, including the UI style, has changed a great deal. It really feels like a year in the real world is a decade in the tech world.
This article documents the whole process of building an ELK stack with Docker, for reference.


—————————————————————————————
Author: 空山新雨
Welcome to follow my WeChat official account: jboost-ksxy (an account with more than just technical content)

 

Origin: www.cnblogs.com/spec-dog/p/11489838.html