Setting up ELK with Docker on Linux

Reference document: https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html

1. Install Elasticsearch with Docker

1.1 Pull the Elasticsearch image

docker pull registry.docker-cn.com/library/elasticsearch:latest

1.2 Start Elasticsearch from the command line

docker run -d --name es-zulan -p 9400:9200 -p 9500:9300 -v /yourpath/ELK/elasticsearch/data:/usr/share/elasticsearch/data -e "discovery.type=single-node" registry.docker-cn.com/library/elasticsearch:latest

1.3 Access http://10.50.40.226:9400/ ; if the cluster information JSON (cluster_name, version, tagline) is returned, Elasticsearch started successfully.

10.50.40.226 is the IP of the Linux server running Docker.
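Startup can also be checked from the command line. A minimal sketch, assuming the port mapping above; the `check_es` helper (hypothetical, not part of the stack) simply greps the root response for a field Elasticsearch always returns:

```shell
# check_es: reads an HTTP response on stdin and reports whether it
# looks like the Elasticsearch root endpoint's JSON
check_es() {
  if grep -q '"cluster_name"'; then
    echo "elasticsearch is up"
  else
    echo "unexpected response"
  fi
}

# usage on the Docker host (commented out here):
#   curl -s http://10.50.40.226:9400/ | check_es
```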

2. Install Kibana with Docker

2.1 Pull the Kibana image

docker pull docker.elastic.co/kibana/kibana:5.6.14

2.2 Start Kibana

docker run -d --name kb-zulan -p 9600:5601 --link es-zulan:registry.docker-cn.com/library/elasticsearch -e ELASTICSEARCH_URL=http://10.50.40.226:9400 docker.elastic.co/kibana/kibana:5.6.14

2.3 Modify the Kibana configuration

Enter the running Kibana container interactively:

docker exec -it kb-zulan /bin/sh

Edit the config file and comment out the default Elasticsearch setting:

vi config/kibana.yml
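For reference, the relevant part of config/kibana.yml would look roughly like this after the edit (a sketch; the stock file in the 5.6 image contains other settings as well):

```yaml
server.name: kibana
server.host: "0"
# commented out so the ELASTICSEARCH_URL environment variable
# passed to `docker run` takes effect instead:
#elasticsearch.url: http://elasticsearch:9200
```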

 

Restart the Kibana container:

docker restart kb-zulan

2.4 Access http://10.50.40.226:9600 in a browser

A login page appears, but you cannot log in.

The reason is that the Elasticsearch image was started without X-Pack, while the Kibana image ships with X-Pack.

There are two solutions:

(1) Uninstall X-Pack from Kibana

Docker commands:

docker exec -it kb-zulan /bin/sh

cd bin

./kibana-plugin remove x-pack

exit

docker restart kb-zulan

(2) Install X-Pack on Elasticsearch

Docker commands:

docker exec -it es-zulan /bin/bash

cd bin

./elasticsearch-plugin install x-pack

exit

docker restart es-zulan

X-Pack comes with a 30-day trial by default; after the trial expires, you need to purchase a license.

Note: after restarting Kibana or Elasticsearch, wait a while for them to come back up.

2.5 After entering Kibana, first create a new index pattern.

However, an error pops up: "Unable to fetch mapping. Do you have indices matching the pattern?".

This is because no data has been written to Elasticsearch yet; install Logstash to write data into it.

 

3. Install Logstash with Docker

3.1 Pull the Logstash image

docker pull docker.elastic.co/logstash/logstash:5.6.14

3.2 Start Logstash

docker run -d --name ls-zulan -p 9700:5044 -p 9800:9600 -v /yourpath/ELK/logstash/eventslogfiles:/usr/share/logstash/eventslogfiles docker.elastic.co/logstash/logstash:5.6.14

Log files go under /yourpath/ELK/logstash/eventslogfiles on the host.
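The pipeline's file input (shown below) globs /usr/share/logstash/eventslogfiles/*/*/*, i.e. it only picks up files sitting exactly two directory levels below the mount point. A sketch of a matching layout (directory and file names are hypothetical):

```shell
# create a layout the file input's */*/* glob would match
base=$(mktemp -d)                     # stands in for .../eventslogfiles
mkdir -p "$base/2019/08"
printf 'sample log line\n' > "$base/2019/08/events.log"

# files exactly three path components below the base are picked up
find "$base" -mindepth 3 -maxdepth 3 -type f
```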

3.3 Access http://10.50.40.226:9800 in a browser; if the node information JSON is returned, Logstash started successfully.

3.4 Modify the Logstash configuration

Enter the Logstash container interactively:

docker exec -it ls-zulan /bin/sh

Modify config/logstash.yml and pipeline/logstash.conf.
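The post does not show the logstash.yml change; with the setup above, a plausible edit (an assumption, adapt to your environment) is pointing X-Pack monitoring at the Elasticsearch started in step 1 and supplying its credentials:

```yaml
http.host: "0.0.0.0"
# point X-Pack monitoring at the Elasticsearch from step 1
xpack.monitoring.elasticsearch.url: http://10.50.40.226:9400
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
```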

Edit pipeline/logstash.conf as follows:

input {
  file{
    path => "/usr/share/logstash/eventslogfiles/*/*/*"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{YEAR:ma_year}-%{WORD:ma_month}-%{WORD:ma_day},%{TIME:ma_time},%{WORD:ma_label} ,\[%{WORD:ma_title}\],%{WORD:ma_event}, ip: %{IP:ma_ip},%{WORD:ma_basePcn},%{WORD:ma_baseSN},%{WORD:ma_psdPcn},%{WORD:ma_psdSN},%{WORD:ma_bpn},\"%{DATA:ma_info}\"" }
    add_field => { "full_time" => "%{ma_year}%{ma_month}%{ma_day} %{ma_time}" }
  }
  date {
    match => ["full_time", "yyyyMMdd HH:mm:ss,SSS"]
  }
}

output {
  elasticsearch {
    hosts => ["10.50.40.226:9400"]
    index => "logstash-ls-%{+YYYY-MM-dd}"
    user => "elastic"
    password => "changeme"
  }
  stdout { codec => rubydebug }
}
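The grok filter's add_field stitches full_time together as yyyyMMdd HH:mm:ss,SSS for the date filter. A quick local sketch of that assembly (the sample line is hypothetical, shaped like the grok pattern above):

```shell
# hypothetical log line in the shape the grok pattern expects
line='2019-08-05,10:23:45,123,INFO ,[Startup],PowerOn, ip: 10.0.0.1,pcn1,sn1,pcn2,sn2,bpn,"ok"'

year=${line:0:4}; month=${line:5:2}; day=${line:8:2}
time=$(printf '%s' "$line" | cut -d, -f2-3)   # HH:mm:ss,SSS spans two comma-separated fields

full_time="$year$month$day $time"
echo "$full_time"   # → 20190805 10:23:45,123
```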

 

docker restart ls-zulan

Note: the index name in Logstash cannot contain uppercase letters.
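Two details about that index name: Elasticsearch rejects index names containing uppercase letters, and in the Joda-style date format dd is day-of-month while DD is day-of-year, so %{+YYYY-MM-dd} is normally what is intended. A sketch of the expansion and the lowercase check:

```shell
# what logstash-ls-%{+YYYY-MM-dd} expands to for today's date
idx="logstash-ls-$(date +%Y-%m-%d)"
echo "$idx"

# Elasticsearch would reject the name if it contained uppercase letters
if printf '%s' "$idx" | grep -q '[A-Z]'; then
  echo "invalid index name"
else
  echo "index name ok"
fi
```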

 

4. Access Kibana at http://10.50.40.226:9600 in a browser

(1) In Kibana, create an index pattern matching the Logstash index.

(2) On the Linux server, copy the log files into the folder mounted by Logstash, /yourpath/ELK/logstash/eventslogfiles. Logstash will automatically write the log data into Elasticsearch.

(3) In Kibana, refresh the index pattern's field list under Management; it will pick up the fields configured in pipeline/logstash.conf.

(4) In Kibana, click Discover to view the data.

 

Origin: www.cnblogs.com/4-army/p/11277831.html