ELK Deployment Process and Common Problems
ELK is an abbreviation for three pieces of open source software: Elasticsearch, Logstash, and Kibana. A fourth tool, Filebeat, is often added: it is a lightweight log-collection agent with a small footprint, suitable for installing on each server to collect logs and ship them to Logstash, and it is also the officially recommended tool.
Elasticsearch is an open source distributed search engine that provides three functions: collecting, analyzing, and storing data. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replication mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is mainly a tool for collecting, parsing, and filtering logs, and it supports a large number of data acquisition methods. It generally works in a client/server architecture: the client is installed on the hosts whose logs need to be collected, while the server filters and modifies the logs received from each node and forwards them concurrently to Elasticsearch.
Kibana is also an open source and free tool. It provides a friendly web interface for the logs collected by Logstash and stored in Elasticsearch, helping you aggregate, analyze, and search important log data.
Reference articles: https://blog.csdn.net/embbls/article/details/81388439 and https://www.cnblogs.com/kevingrace/p/5919021.html
We deploy starting from an empty server; the operating system is 64-bit Linux CentOS 7.
Step 1
First, download the various installation packages from the official website: Elasticsearch, Logstash, and Kibana. For easy management, put them all in the same directory; here I put the files under /home/elk. Official website: https://www.elastic.co/downloads
Then configure the environment: install the JDK, Node.js, and git.
JDK Configuration
1. First download the tar.gz archive from the official website: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html and select the Linux x64 tar.gz download.
2. After downloading, open a command line, switch to /usr/local/, create a /usr/local/java folder (skip this if it already exists), move the downloaded file into that folder, and extract it with tar -zxvf plus the file name:
cd /usr/local
mkdir java
tar -zxvf jdk-8u201-linux-x64.tar.gz
3. After extraction completes, add the environment variables:
vim /etc/profile
With the file open, press i to enter edit mode,
and add the following at the end of the file:
export JAVA_HOME=/usr/local/java/jdk1.8.0_201   # use the name of the folder you extracted
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin
Then press Esc, type :wq (which means save and quit), and press Enter to exit.
Enter source /etc/profile
to execute the profile and make the environment variables take effect.
Finally, verify with java -version.
Configuring Node.js
Download the latest Linux package from the Node.js official website: https://nodejs.org/en/
Once downloaded, copy it to the server and extract it (the package is .tar.xz, so decompress it first, then untar):
xz -d node10.tar.xz
tar -xvf node10.tar
After extraction completes, add the environment variable:
vim /etc/profile
With the file open, press i to enter edit mode,
and add the following at the end of the file:
export PATH=$PATH:/home/elk/node10/bin   # fill in the path where your files were extracted
Then press Esc, type :wq (which means save and quit), and press Enter to exit.
Enter source /etc/profile
to make the environment variable in the configuration file take effect.
Finally, verify with node -v.
Install git
On the command line, simply enter yum install git and wait for the installation to complete.
Download the elasticsearch-head plugin
Download address: https://github.com/mobz/elasticsearch-head — enter git clone https://github.com/mobz/elasticsearch-head.git (in the same folder as the other packages, for easy management).
Download Elasticsearch, Logstash, and Kibana
Official website: https://www.elastic.co/downloads
After the downloads complete, extract them all in the same folder:
tar -zxvf elasticsearch-6.7.0.tar.gz
tar -zxvf kibana-6.7.0-linux-x86_64.tar.gz
tar -zxvf logstash-6.7.0.tar.gz
Open elasticsearch-6.7.0 and go into the config folder. Set the JVM memory (in jvm.options) according to the current machine's memory;
by default it is 1G — if Elasticsearch fails to start, change it a little smaller.
Then open the elasticsearch.yml file:
vim elasticsearch.yml
Enter the following parameters, or find them and remove the # comment symbol in front of them:
cluster.name: huanqiu                  # cluster name (all nodes in the same cluster must use the same name)
node.name: elk-node1                   # node name; keeping it consistent with the host name is recommended
path.data: /data/es-data               # data storage path; can be customized, but create the directory first
path.logs: /var/log/elasticsearch/     # log storage path; can be customized, but create the directory first
bootstrap.memory_lock: true            # lock memory so it is not swapped out
network.host: 0.0.0.0                  # network binding
http.port: 9200                        # port
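Putting these settings together, a minimal single-node elasticsearch.yml might look like the following sketch (the cluster name, node name, and paths are just the sample values above — adjust them to your own environment; note that in 6.x the memory-lock setting is named bootstrap.memory_lock):

```yaml
cluster.name: huanqiu
node.name: elk-node1
path.data: /data/es-data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
```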
Usually you also need to modify the virtual memory limit, as follows:
vim /etc/sysctl.conf and add this line:
vm.max_map_count = 262144
Then run: sysctl -p
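To confirm the kernel setting took effect, you can read the value back (a quick check; on most Linux systems either command below works):

```shell
# Read the current value of vm.max_map_count back from the kernel;
# it should print 262144 after running sysctl -p with the line above.
sysctl -n vm.max_map_count 2>/dev/null || cat /proc/sys/vm/max_map_count
```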
Step 2
After all the previous configuration is done, open the folder containing the elastichsearch-head plugin files.
Enter npm install; some errors and warnings in the middle can be ignored — wait for the installation to complete.
After that, enter npm install [email protected] --ignore-scripts and wait for it to install.
Once installation is complete, enter npm run start.
If startup output appears, the head plugin has started successfully.
In the browser, open http://119.3.89.10:9100/ (replace the IP here with your own server's IP).
If the page loads, the plugin has started normally.
Since Elasticsearch can receive user input and execute scripts, for system security reasons it is not allowed to start under the root account; it is recommended to create a separate user to run Elasticsearch.
If you are currently on the root account, an error message will appear at startup.
Create a new user.
Command format: useradd ymq (user name) -g ymq (group it belongs to) -p ymq (password)
Enter groupadd elk (group name) to create a new user group.
Enter useradd elk1 (user name) -g elk (group it belongs to) -p 123456 (password) to add the user.
You also need to give this user permissions, otherwise it has no right to execute Elasticsearch
and will be refused with a java.nio.file.AccessDeniedException permission error.
Authorization command format: chown -R elk1 (owner user):elk (user group) /elk/elasticsearch-6.7.0 (the path whose ownership you want to change)
What I entered here was: chown -R elk1:elk /elk/elasticsearch-6.7.0
Note: the ordinary user also needs permissions on the two directories configured in elasticsearch.yml:
path.data: /data/es-data               # data storage path; customizable, create the directory first
path.logs: /var/log/elasticsearch/     # log storage path; customizable, create the directory first
Grant the same ownership on these two paths as on the elasticsearch folder.
Then switch to the ordinary account: su elk1
Go to the folder containing elasticsearch-6.7.0 and into its bin directory,
enter ./elasticsearch,
and if the search engine starts successfully
you can verify it.
A screenshot of a successful start is as follows:
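One simple way to verify is with curl from the server itself (a sketch, assuming the default port 9200 from the configuration above; it just reports whether the service is reachable):

```shell
# Hypothetical check: query the cluster health endpoint and report the result
ES_URL="http://localhost:9200"
if curl -s --max-time 5 "$ES_URL/_cluster/health?pretty"; then
    echo "elasticsearch responded at $ES_URL"
else
    echo "elasticsearch not reachable at $ES_URL"
fi
```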
During startup you may encounter the following problems:
Pit 1:
[***] failed to send join request to master
Reason: you copied the elasticsearch folder from another node, and its data directory contains that node's data.
Solution: delete the contents of the data directory in the elasticsearch folder.
Pit 2:
ERROR: bootstrap checks failed
memory locking requested for elasticsearch process but memory is not locked
Reason: locking memory failed.
Solution: vim /etc/security/limits.conf and add the following two lines:
* soft memlock unlimited
* hard memlock unlimited
Tip: in Linux, * represents all users.
Pit 3:
ERROR: bootstrap checks failed
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
Reason: the maximum number of files the user can open is not enough.
Solution: vim /etc/security/limits.conf and add:
* hard nofile 65536
* soft nofile 65536
* soft nproc 65536
* hard nproc 65536
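After editing limits.conf, log in again as the elasticsearch user and check that the new limits are actually in effect — limits.conf changes only apply to new sessions (a quick check; the values shown depend on your session):

```shell
# Show the current per-user limits for this shell session;
# after the change above, both should report 65536 for the elasticsearch user.
ulimit -n   # max open file descriptors
ulimit -u   # max user processes/threads
```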
Pit 4:
max number of threads [1024] for user [es] is too low, increase to at least [2048]
Reason: the maximum number of threads the user can create is too small.
Solution: vim /etc/security/limits.d/90-nproc.conf
* soft nproc 2048   (this was originally 1024)
Pit 5:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Reason: the maximum virtual memory is too small.
Solution: vim /etc/sysctl.conf and add the following line:
vm.max_map_count = 262144
Then execute the command: sysctl -p
Pit 6:
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Reason: CentOS 6 does not support SecComp.
Solution: vim elasticsearch.yml and remember to add the following along with the memory lock setting:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
Pit 7:
uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
Reason: elasticsearch cannot be started with the root user.
Solution: create a new user (useradd elasticsearch) and run chown -R elasticsearch:elasticsearch elasticsearch-5.2.2 (change the user and group the folder belongs to).
Pit 8:
unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
at org.elasticsearch.bootstrap.SystemCallFilter.linuxImpl(SystemCallFilter.java:350) ~[elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.SystemCallFilter.init(SystemCallFilter.java:638) ~[elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.JNANatives.tryInstallSystemCallFilter(JNANatives.java:215) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.Natives.tryInstallSystemCallFilter(Natives.java:99) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:111) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:204) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.4.0.jar:5.4.0]
Reason: the Linux kernel version is too low (seccomp requires kernel 3.5+).
Solution: nothing for it — switch to a newer version of Linux.
Pit 9:
Unsupported major.minor version 52.0
Reason: the Java version is too low.
Solution: replace the JDK version; Elasticsearch 5.0.0 and above require JDK 1.8.0 or later.
Pit 10:
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: Property [elasticsearch.version] is missing for plugin [head]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.2.2.jar:5.2.2]
Caused by: java.lang.IllegalArgumentException: Property [elasticsearch.version] is missing for plugin [head]
at org.elasticsearch.plugins.PluginInfo.readFromProperties(PluginInfo.java:104) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.plugins.PluginsService.getPluginBundles(PluginsService.java:292) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.plugins.PluginsService.&lt;init&gt;(PluginsService.java:131) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.node.Node.&lt;init&gt;(Node.java:297) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.node.Node.&lt;init&gt;(Node.java:232) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Bootstrap$6.&lt;init&gt;(Bootstrap.java:241) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:241) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.2.2.jar:5.2.2]
... 6 more
Reason: the new version of elasticsearch no longer allows this plugin to be installed in the plugins directory.
Reference: https://blog.csdn.net/u013641234/article/details/80792416
PS: if after rebooting the server the head connector still cannot be found, or cannot connect to the elasticsearch service properly, the cause may be a cross-origin problem between the elasticsearch service and elasticsearch-head. Modify the elasticsearch configuration by adding the following to elasticsearch.yml:
http.cors.enabled: true
http.cors.allow-origin: "*"
While configuring this, also check whether the settings from step 1 were written correctly,
and note that the ordinary user needs permissions on the paths configured in step 1.
Step 3
Start Logstash
Configure the logstash environment variable: vim /etc/profile, add export PATH=$PATH:/home/elk/logstash-6.7.0/bin at the bottom of the file, then save and exit as before (Esc → :wq → Enter) and enter source /etc/profile to make it take effect.
Open the logstash directory, go into the config folder, and edit the configuration file logstash.yml: remove the comments in front of http.host, http.port, and log.level, save, and go back to the bin directory.
Other configuration options:
node.name: node name, for easy identification
path.data: folder for persistent data storage; defaults to the data directory under the logstash home directory
path.config: directory of pipeline configuration files (if a folder is specified, by default all .conf files in it are concatenated into one document in alphabetical order)
path.log: directory for the log files
pipeline.workers: number of pipeline (filter + output) threads; a common tuning point
pipeline.batch.size / delay: batch size and delay used when batching data
queue.type: queue type, default memory
queue.max_bytes: total queue capacity, default 1g
The configuration parameters are covered in detail in this blog post; have a look if you want to learn more: https://blog.csdn.net/len9596/article/details/82884507
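As an illustration, a logstash.yml using the options above might look like the following (the node name, paths, and tuning values here are placeholders for this walkthrough — set them to match your own install):

```yaml
node.name: logstash-node1
path.data: /home/elk/logstash-6.7.0/data
path.config: /home/elk/logstash-6.7.0/config/conf.d
http.host: "0.0.0.0"
http.port: 9600
log.level: info
pipeline.workers: 2
pipeline.batch.size: 125
queue.type: memory
```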
Startup modes:
-e: specify the logstash configuration on the command line; can be used for quick tests;
-f: specify a logstash configuration file; intended for production.
Configuration passed with the -e parameter is for rapid testing; output goes directly to the screen. --quiet: quiet mode for log output.
$ logstash -e "input { stdin {} } output { stdout {} }" --quiet
Specifying a logstash configuration file with the -f parameter is what you use in the production environment.
Create a new log.conf file.
Contents of log.conf:
input {
file {
type => "hadoop-yarnlog"
path => "/usr/local/hadoop/logs/yarn-hadoop-resourcemanager-m000.log"
}
}
output {
elasticsearch {
hosts => "m000:9200"
index => "logstash-%{type}-%{+YYYY.MM.dd}"
template_overwrite => true
}
}
logstash -f log.conf
Finally, I recommend a detailed guide to using logstash; interested readers can check it out: https://blog.csdn.net/wfs1994/article/details/80862225
Starting Kibana
After all this, Kibana is relatively simple and requires no extra configuration; open port 5601 to the outside, go to the bin directory, and enter ./kibana to run it.
Tips
I found during this process that a service shuts down when the shell window that started it is closed, so as a friendly recommendation, here is an article in which the author explains the use of screen; it will help you a lot: https://www.cnblogs.com/cute/p/5015852.html
Testing the setup
After everything above is built, create a new file log.conf in the logstash folder (the name can be arbitrary), and edit the file:
input{
stdin{
}
}
filter{
}
output{
elasticsearch {
index => "flow-%{+YYYY.MM.dd}"
hosts => ["localhost:9200"]
}
stdout{
codec => rubydebug
}
}
The above means: logstash takes input from the terminal and outputs it to port 9200 locally, into a flow index with the current date appended.
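The %{+YYYY.MM.dd} in the index name is a date pattern that logstash expands per event; you can reproduce the resulting daily index name in the shell to see what will be created (a sketch using the system date):

```shell
# Build the same daily index name that logstash's "flow-%{+YYYY.MM.dd}" produces
index="flow-$(date +%Y.%m.%d)"
echo "$index"   # e.g. flow-2019.04.01
```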