Building a single-node ELK log system, based on version 7.1.1

What is ELK?

In general, to improve availability, a service is deployed as multiple instances, with each request forwarded to one of them by a load balancer. The old-fashioned way of troubleshooting is to log in to a server and run tail -f xxx.log, but the error is quite likely not on the server you happen to be on, so you repeat the process machine after machine until you find the right log. That is tolerable with two or three machines, but what about hundreds of instances? There are limits to human patience. This is where ELK makes its entrance. So what is ELK?

ELK is a distributed full-text log search system made up of Elasticsearch (the index store), Logstash (filters logs and saves them into the index store), and Kibana (the front-end tool for viewing logs).

About this article

This article goes with the single-node architecture: every component is deployed as a single instance. The components are the ELK stack (Elasticsearch, Logstash, Kibana) plus Zookeeper and Kafka, together with a log producer based on logback (which implements the Kafka client interface); wiring these components together completes the single-node log system. Everything here is installed from downloaded tar.gz archives rather than through a package manager, so the configuration steps can serve as a reference.

The system architecture is shown below

Kafka serves as a buffer layer here: it absorbs bursts of log writes and prevents the data loss caused when Logstash and Elasticsearch cannot keep up with the write rate.

A follow-up post will bring a demo-grade, cluster-level ELK built with docker-compose, so stay tuned!

Preparing the Environment

  • Ubuntu Server 18.04.2 (CentOS 7 works just as well, since both use systemd); make sure you have enough memory and disk space
  • ELK 7.1.1
  • Kafka_2.12-2.2.0
  • Zookeeper: the one bundled with Kafka
  • Log-generation demo code: see https://github.com/HellxZ/LogDemo

Problem summary

For the problems that came up while building ELK and their solutions, please refer to the problem summary; I had basically resolved all of the ELK issues during my own testing.

For the problem of Kafka being unreachable from outside the network, please refer to the Kafka troubleshooting post mentioned in the test section below.

Building process

Note: the paths below are relative to the directory holding the downloaded files. Processes are put in the background with nohup throughout; after each command is launched, pressing ctrl + c to get the prompt back will not affect the process, and you can follow its log with tail -f xxx.out.

Configuring Elasticsearch
tar zxvf elasticsearch-7.1.1-linux-x86_64.tar.gz # extract
mkdir single-elk # create a directory to hold everything
mv elasticsearch-7.1.1  single-elk/elasticsearch # move and rename
cd single-elk/elasticsearch && vim config/elasticsearch.yml # enter the config directory and edit the es config file

# The changes to make:
# 1. Uncomment node.name; pick any node name you like
node.name: node-1
# 2. Uncomment network.host and set it to this server's ip, or 0.0.0.0, to allow external access
network.host: 0.0.0.0
# 3. Uncomment cluster.initial_master_nodes and list the node.name chosen above
cluster.initial_master_nodes: ["node-1"] # keep consistent with node.name
bootstrap.memory_lock: true # lock memory to keep usage bounded; official advice is half of system RAM, but never as much as 32GB
# save and quit

ES_JAVA_OPTS="-Xms512m -Xmx512m" bin/elasticsearch -d # start es in the background; it binds ports 9200 and 9300 by default — test against 9200 (the HTTP API), 9300 is for communication between es cluster nodes

If the log shows:

ERROR: [1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked

If the current host cannot give es more than 1G, you can simply not set bootstrap.memory_lock: true — this is, after all, a test environment. A production environment should still cap the memory, e.g. via -Xms/-Xmx; on top of that, the official recommendation is 50% of system memory, but no more than 32G. The heap can be capped through ES_JAVA_OPTS as above. If this does not solve the problem, see https://www.cnblogs.com/hellxz/p/11009634.html
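For reference, the usual fix for the memory-locking error is to raise the locked-memory limit of the account that runs es. A minimal sketch, assuming es runs under a user named elk (the user name is an assumption — the linked post has the details):

echo 'elk soft memlock unlimited' | sudo tee -a /etc/security/limits.conf # allow the es user to lock memory ("elk" is a placeholder user name)
echo 'elk hard memlock unlimited' | sudo tee -a /etc/security/limits.conf
# log in again so the new limits take effect, then restart es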

Open http://192.168.87.133:9200/ in a browser (replace the ip with your own).
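The same check works from the shell; a healthy node answers with a small JSON document that includes the cluster name, "number" : "7.1.1", and the "You Know, for Search" tagline (ip assumed to be 192.168.87.133 — replace with yours):

curl http://192.168.87.133:9200/ # prints the node's name, version and tagline as JSON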

Optionally, use elasticsearch-head to inspect the cluster. Looking at the project on GitHub, the latest release still only supports Elasticsearch 5; there is nothing yet for 6 or 7. In the official issues the maintainers said they have no plans to upgrade for now, since it remains generally usable — except that details such as the per-shard information of each es node do not match up. Given how little interest they show, do not install it as an es plugin; use the Google Chrome extension instead.

Configuring Zookeeper and Kafka

tar -zxvf kafka_2.12-2.2.0.tgz 
mv kafka_2.12-2.2.0 single-elk/kafka
cd single-elk/kafka; vim config/zookeeper.properties # only dataDir needs changing; save and quit
vim config/server.properties # edit the kafka config

# The changes to make are as follows
listeners=PLAINTEXT://kafka:9092
advertised.listeners=PLAINTEXT://kafka:9092
# save and quit

# Edit /etc/hosts and append a mapping for kafka; note this is the internal ip and needs replacing
192.168.87.133 kafka  # actually 127.0.0.1 would work too

nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper.out & # start zookeeper; once the log shows the bound port it has succeeded
nohup bin/kafka-server-start.sh config/server.properties > kafka.out & # start kafka
# "started" in the log means success

The hostname kafka is written into the listeners because access from outside the network has to go through a domain name rather than an ip: the key kafka registers in Zookeeper contains that name. The approach used here is to modify /etc/hosts; I do not know how other people solve this problem — if you have a better way, please leave a comment below!
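As a quick sanity check that the advertised listener is reachable, you can pre-create the topic used later in logstash.conf and attach a console consumer to it. A sketch using the tools shipped in the Kafka 2.2 distribution (the topic name all_logs is carried over from the Logstash section below; run these from single-elk/kafka):

bin/kafka-topics.sh --bootstrap-server kafka:9092 --create --topic all_logs --partitions 1 --replication-factor 1 # create the topic (kafka would also auto-create it on first use by default)
bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic all_logs # if this connects without errors, the kafka hostname resolves and the broker is reachable; ctrl + c to quit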

Configuring Logstash

tar zxvf logstash-7.1.1.tar.gz # extract
mv logstash-7.1.1 single-elk/logstash
cd single-elk/logstash
vim config/logstash.conf # create the config file; for the available settings see logstash-sample.conf and the official docs

A custom configuration file has to be specified at startup.

# logstash.conf
# I configured kafka as a buffer layer, which needs no further explanation by now; configure to your needs
input {
  kafka {
    bootstrap_servers => "192.168.87.133:9092"
    topics => ["all_logs"]
    group_id => "logstash"
    codec => json
  }
}
# No filter configured for now; I just want to see the effect first
filter {
}

output {
  elasticsearch {
    hosts => ["192.168.87.133:9200"]
    index => "all-logs-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}

Save and exit

nohup bin/logstash -f config/logstash.conf > logstash.out & # start Logstash in the background

Once the log shows Successfully started Logstash API endpoint {:port=>9600}, the start-up succeeded.
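At this point a message can be pushed through the pipeline by hand. A minimal check, assuming the directory layout from earlier (kafka sits in ../kafka relative to the logstash directory) and the all_logs topic from logstash.conf:

echo '{"message":"hello elk"}' | ../kafka/bin/kafka-console-producer.sh --broker-list kafka:9092 --topic all_logs # publish one JSON line
tail -f logstash.out # the event should show up here in rubydebug form, and shortly afterwards in an all-logs-* index in es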

Configuring Kibana

# Kibana is a front-end project based on node.js, yet it started fine on my test server without node.js installed — apparently the ELK packages bundle their own runtime dependencies
tar zxvf kibana-7.1.1-linux-x86_64.tar.gz
mv kibana-7.1.1-linux-x86_64 single-elk/kibana
cd single-elk/kibana && vim config/kibana.yml
#kibana.yml — every setting below is made by uncommenting a line and editing it; search the file to find them
# the port kibana listens on; if no other port is needed, the default 5601 can stay
server.port: 5601
# the ip address kibana binds to; just use the ip of the host kibana runs on
server.host: "192.168.87.133"
# ip addresses of the es nodes
elasticsearch.hosts: ["http://192.168.87.133:9200"]
# kibana.index is the index kibana creates in es to hold saved searches; no need to change it
kibana.index: ".kibana"
# timeout for queries against es, in milliseconds; must be an integer
elasticsearch.requestTimeout: 30000
# configure the rest as needed
# switch the UI to Chinese
i18n.locale: "zh-CN"
# save and quit
nohup bin/kibana > kibana.out & # start kibana in the background

When something like Server running at http://192.168.87.133:5601 appears, start-up is complete; go visit the page and have a look.
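Kibana can also be checked from the shell — its status API answers with JSON once the server is up (ip assumed to be 192.168.87.133, replace with yours):

curl http://192.168.87.133:5601/api/status # JSON describing kibana's state and its connection to es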

Configuring the test code

The machine running the demo code also needs a hosts-file entry mapping kafka to the ip used above; mine is 192.168.87.133 kafka.

The project is a Spring Boot app, so simply start DemoApplication; you can also produce logs by calling the methods in TestController.

If Connection to node -1 could not be established. Broker may not be available. appears, refer to [Kafka problem solving] Connection to xxx could not be established. Broker may not be available. and double-check the configuration files.

If you change the hostname mapped to Kafka, remember to update <producerConfig>bootstrap.servers=your-kafka-domain:kafka-port</producerConfig> in logback-spring.xml.
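For orientation, here is a minimal sketch of what that part of logback-spring.xml can look like. Assumptions to note: the <producerConfig> tag comes from the logback-kafka-appender library, a JSON encoder (net.logstash.logback's LogstashEncoder) is chosen here to match the codec => json of the Logstash kafka input, and the topic all_logs matches logstash.conf — the LogDemo repo holds the authoritative file:

<!-- sketch only; see logback-spring.xml in the LogDemo repo for the real file -->
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/> <!-- emits JSON, matching codec => json -->
  <topic>all_logs</topic>
  <producerConfig>bootstrap.servers=kafka:9092</producerConfig> <!-- keep in sync with your hosts mapping -->
</appender>
<root level="INFO">
  <appender-ref ref="kafkaAppender"/>
</root>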

Checking the results

Using Elasticsearch-head

Open the Elasticsearch-head extension in google-chrome and connect to the cluster; put simply, we can see a few extra green 0s have appeared.

Of course, since the plugin has not been officially adapted to Elasticsearch 7.X, the diagram it draws is not accurate; the good news is that we can still browse the indices and the data.
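If you would rather not rely on the plugin at all, es's built-in _cat APIs give the same overview:

curl 'http://192.168.87.133:9200/_cat/indices?v' # list indices — the all-logs-* index should show up here
curl 'http://192.168.87.133:9200/_cat/health?v' # cluster health: green / yellow / red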

My VM ran out of memory, so the cluster state turned yellow and became read-only. You can limit es's memory in the configuration; the default is 1G. The official advice is to set the memory to half of system memory, but no more than 32GB, and it is best to make the maximum and minimum heap sizes equal to avoid frequent GC — taken from config/elasticsearch.yml.

The memory parameters have to be given when starting es, e.g. ES_JAVA_OPTS="-Xms512m -Xmx512m" bin/elasticsearch -d

Using Kibana

By this point I had fixed the out-of-memory problem and updated the configuration above accordingly, so feel free to test.

Visit <your-kibana-ip:5601>; mine is <192.168.87.133:5601>.

By default Kibana does not add any index for display; we have to match one ourselves, so let's go add it.

Next, choose @timestamp or another field — I pick @timestamp here — then create the index pattern.

With the index pattern created, let's go to the dashboard and take a look: click Discover.

Wrapping up

I have been working on this ELK logging business lately and also built a demo-grade docker-compose ELK cluster, since in production we would never put several es and kafka instances on the same machine. I will tidy it up and post it later for everyone's reference.

Origin: www.cnblogs.com/hellxz/p/11059360.html