ELK setup (Docker environment)

ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core of the suite, but not the whole of it.

Elasticsearch is a real-time full-text search and analytics engine that provides three functions: collecting, analyzing, and storing data. It offers an open REST API, a Java API, and other interfaces to deliver efficient search capabilities as a scalable, distributed system. It is built on top of the Apache Lucene search engine library.

Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (such as RabbitMQ), and JMX, and it can output data in a variety of ways, including email, WebSockets, and Elasticsearch.

Kibana is a web-based graphical interface for analyzing and visualizing the log data stored in Elasticsearch indices. It uses the Elasticsearch REST interface to retrieve the data, and it not only lets users create customized dashboard views of their data, but also lets them query and filter the data in ad hoc ways.
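To give a feel for that REST interface, here is a hedged example of a search issued with plain curl; it assumes Elasticsearch is reachable on localhost:9200 and that a logstash-* index already exists (as it will once the setup below is complete), so treat it as a sketch rather than part of the installation steps:

# Search all logstash-* indices for events whose message field contains "error"
curl -X GET "http://localhost:9200/logstash-*/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d '{ "query": { "match": { "message": "error" } } }'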

1. Reference materials

2. Download the Docker images

docker pull elasticsearch:7.6.0
docker pull kibana:7.6.0
docker pull logstash:7.6.0
docker pull docker.elastic.co/beats/filebeat:7.6.0
docker pull mobz/elasticsearch-head:5

3. Build the ELK log system

Create a folder named elk; all of the configuration files below will live inside it.

mkdir /home/elk

3.1 Install Elasticsearch

Create an elasticsearch.yml file

vi /home/elk/elasticsearch.yml

Add the following content to it:

cluster.name: "docker-cluster"
network.host: 0.0.0.0
# Access restriction; 0.0.0.0 means unrestricted. In production, set this to a fixed IP.
transport.host: 0.0.0.0
# Elasticsearch node name
node.name: node-1
# Initial master-eligible nodes for cluster bootstrapping
cluster.initial_master_nodes: ["node-1"]
# The settings below disable cross-origin (CORS) restrictions
http.cors.enabled: true
http.cors.allow-origin: "*"

Create and start the Elasticsearch container

docker run -di -p 9200:9200 -p 9300:9300 --name=elasticsearch -v /home/elk/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml elasticsearch:7.6.0

After it starts, visit port 9200 in a browser; if startup succeeded, Elasticsearch returns its basic information.
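The same check can be done from the command line; a minimal sketch, assuming you are on the Docker host and used the port mapping above:

# Should return a JSON document with the node name, cluster name and version 7.6.0
curl http://localhost:9200

# Basic cluster health (green or yellow is fine for a single node)
curl http://localhost:9200/_cluster/health?pretty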

Note: if you need to add plugins, either map the container's plugin directory to a host path, or copy the plugin into the container with a command (for example, to install the ik analyzer: docker cp ik elasticsearch:/usr/share/elasticsearch/plugins/).

Problems you may encounter

1. Elasticsearch starts successfully but stops after a while

This is related to the configuration we just changed: at startup, Elasticsearch runs a set of checks, such as the maximum number of virtual memory areas and the maximum number of open files. Opening the configuration up as above implies that more open files and more virtual memory are needed, so the host system also has to be tuned.

  • Modify /etc/security/limits.conf and append the following:

      * soft nofile 65536
      * hard nofile 65536

nofile is the maximum number of files a single process is allowed to open; soft nofile is the soft limit and hard nofile is the hard limit.

  • Modify /etc/sysctl.conf and append the following:

      vm.max_map_count=655360

    This limits the number of VMAs (virtual memory areas) a single process can have.

Run the following command so the modified kernel parameters take effect immediately, and then restart the Docker service.

sysctl -p
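After logging in again (the limits.conf change only applies to new sessions), both limits can be verified; a quick check:

# Maximum number of open files for the current shell (expect 65536)
ulimit -n

# Kernel limit on memory map areas per process (expect 655360)
sysctl vm.max_map_count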
2. Startup fails and the following log message is displayed:

ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

This is because Elasticsearch 7 expects cluster discovery settings by default; you need to add the following configuration to elasticsearch.yml:

# Elasticsearch node name
node.name: node-1
# Initial master-eligible nodes for cluster bootstrapping
cluster.initial_master_nodes: ["node-1"]

3. External machines can push and query data through port 9200, but components installed afterwards, such as Kibana, cannot connect

Looking at the logs reveals the following error:

error=>"Elasticsearch Unreachable: [http://192.168.6.128:9200/][Manticore::...

This problem is usually caused by the firewall being enabled on the machine that runs the Docker containers: a container cannot reach services on its host (while other machines on the LAN can). Solutions:

    1. Configure firewall rules: firewall-cmd --zone=public --add-port={port}/tcp --permanent, then reload them with firewall-cmd --reload (see the example after this list)
    2. Use --net host mode when starting the container (Docker's four network modes: https://www.jianshu.com/p/22a7032bb7bd )
    3. Turn off the firewall
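As a sketch of option 1 for the ports used in this article (assuming firewalld is the firewall in use; add further ports such as 5044 and 5601 the same way):

firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=9300/tcp --permanent
firewall-cmd --reload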

Install the elasticsearch-head plugin for debugging (optional)

docker run -di --name=es-head -p 9100:9100 mobz/elasticsearch-head:5

After it starts, you can access the Elasticsearch management interface on port 9100.

Alternatively, install it on your local machine:
  1. Download the head plugin: https://github.com/mobz/elasticsearch-head
  2. Install grunt as a global command. Grunt is a Node.js-based project build tool that can automatically run the tasks you define: npm install -g grunt-cli

  3. Install the dependencies:

     npm install
  4. Start it up:

     grunt server

    Open your browser and go to http://localhost:9100

3.2 Install Kibana

Kibana is mainly used to view and analyze the data in Elasticsearch. Its version must be the same as or lower than the Elasticsearch version (the same version is recommended); otherwise Kibana will not work.

Create a kibana.yml configuration file and write the following configuration into it:

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://<Elasticsearch IP>:9200"]
# UI language setting (zh-CN = Simplified Chinese)
i18n.locale: "zh-CN"

Create and start the Kibana container

docker run -di --name kibana -p 5601:5601 -v /home/elk/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.6.0

After it starts successfully, visit port 5601 to open the Kibana management interface. (On first entry you are asked to pick a starting option; you can simply choose to explore on your own.)
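Kibana can also be checked from the command line; a quick sketch, assuming the default port mapping above:

# Returns Kibana's status as JSON once it has connected to Elasticsearch
curl http://localhost:5601/api/status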

Add an index pattern

Install and run Logstash first (next section), then come back here: enter log* as the index pattern and you will see the log index created by Logstash; select it and the index pattern is created successfully.

3.3 Install Logstash

Create a logstash.conf configuration file and add the following configuration to it:

input {
    tcp {
        port => 5044
        codec => "plain"
    }
}
filter{

}
output {
    # Logstash console output (enabled for installation and debugging; remove this once everything works)
    stdout {
        codec => rubydebug
    }
    # Elasticsearch output configuration
    elasticsearch {
        hosts => ["<Elasticsearch IP>:9200"]
    }
}

Create and start the Logstash container

docker run -di -p 5044:5044 -v /home/elk/logstash.conf:/usr/share/logstash/pipeline/logstash.conf --name logstash logstash:7.6.0
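Before wiring a real application in, you can push a test line into the TCP input and confirm that it arrives; a minimal sketch, assuming nc (netcat) is installed and 192.168.6.128 is the Docker host used elsewhere in this article:

# Send one test event to the tcp input declared in logstash.conf
echo 'hello from netcat' | nc -w 1 192.168.6.128 5044

# The stdout/rubydebug output in the container log should show the event
docker logs -f logstash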

Push logs to Logstash from a microservice

Add the Maven dependency

    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>6.3</version>
    </dependency>

The following assumes a Spring Boot application using Logback for logging; the configuration file (logback-spring.xml) is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Console output -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %highlight(%-5level) %cyan(%logger{50}.%M.%L) - %highlight(%msg) %n</pattern>
        </layout>
    </appender>
    <!-- Logstash output -->
    <appender name="STASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.6.128:5044</destination>
        <includeCallerData>true</includeCallerData>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerData>true</includeCallerData>
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{80}.%M.%L - %msg %n</pattern>
            </providers>
        </encoder>
    </appender>

    <root level="INFO">
        <!-- For local development and debugging, enable console output and disable file output to improve logging performance; for production deployments, be sure to disable console output -->
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="STASH"/>
    </root>
</configuration>
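Once the application is running and writing logs, you can confirm that events flow all the way through, using the hosts from this setup:

# Logstash's console (rubydebug) output should print each incoming event
docker logs -f logstash

# Elasticsearch should now contain a logstash-* index, which the log* pattern in Kibana will match
curl "http://192.168.6.128:9200/_cat/indices?v"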
