Article Directory
- Environmental preparation
- Start deployment
- 1. Configure the elasticsearch environment
- 2. Deploy elasticsearch software
- 3. Check the health and status of the cluster
- 4. Install the phantomjs front-end framework
- 5. Install elasticsearch-head
- 6. Modify the main configuration file
- 8. Start the elasticsearch-head server
- 9. Create indexes and types
- 10. Install logstash and do some log collection and output to elasticsearch
- 11. Use logstash to write information into elasticsearch (input/output integration)
- 12. Log in to the 192.168.162.60 Apache host for connection configuration
- Kibana
Environmental preparation
| Host | OS | Hostname | IP address | Main software |
|---|---|---|---|---|
| Server | CentOS 7.4 | node1 | 192.168.162.40 | Elasticsearch, Kibana |
| Server | CentOS 7.4 | node2 | 192.168.162.50 | Elasticsearch |
| Server | CentOS 7.4 | apache | 192.168.162.60 | Logstash, Apache |
Start deployment
1. Configure the elasticsearch environment
Log in to 192.168.162.40, change the host name, configure domain name resolution, and view the Java environment
[root@node1 ~]# hostnamectl set-hostname node1
[root@node1 ~]# vi /etc/hosts
192.168.162.40 node1
192.168.162.50 node2
[root@node1 ~]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_91
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
source /etc/profile
Log in to 192.168.162.50, change the host name, configure domain name resolution, and check the Java environment (same as node1 above; screenshots omitted)
[root@node2 ~]# hostnamectl set-hostname node2
[root@node2 ~]# vi /etc/hosts
192.168.162.40 node1
192.168.162.50 node2
[root@node2 ~]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
2. Deploy elasticsearch software
Log in to 192.168.162.40
2.1 Install the elasticsearch RPM package
Upload elasticsearch-5.5.0.rpm to the /opt directory
[root@node1 ~]# cd /opt
[root@node1 opt]# rpm -ivh elasticsearch-5.5.0.rpm
[root@node1 opt]# systemctl daemon-reload
[root@node1 opt]# systemctl enable elasticsearch.service
2.2 Modify the elasticsearch main configuration file
[root@node1 opt]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
[root@node1 opt]# vi /etc/elasticsearch/elasticsearch.yml
17/ cluster.name: my-elk-cluster ####cluster name
23/ node.name: node1 ####node name
33/ path.data: /data/elk_data ####data storage path
37/ path.logs: /var/log/elasticsearch/ ####log storage path
43/ bootstrap.memory_lock: false ####do not lock memory at startup (relates to front-end caching and IOPS, i.e. reads/writes per second)
55/ network.host: 0.0.0.0 ####IP address the service binds to; 0.0.0.0 means all addresses
59/ http.port: 9200 ####listen on port 9200
68/ discovery.zen.ping.unicast.hosts: ["node1", "node2"] ####cluster discovery via unicast
[root@node1 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
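Since `discovery.zen.ping.unicast.hosts` lists the nodes by name, it is worth confirming that those names actually resolve. A minimal check (the node names assume the /etc/hosts entries added in step 1):

```shell
# Confirm the unicast hosts resolve; discovery fails if these lookups fail.
getent hosts node1 node2 || echo "hosts entries missing - check /etc/hosts"
```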
2.3 Create the data storage path and grant permissions
[root@node1 opt]# mkdir -p /data/elk_data
[root@node1 opt]# chown elasticsearch:elasticsearch /data/elk_data/
2.4 Start elasticsearch and check that it started successfully
[root@node1 elasticsearch]# systemctl start elasticsearch.service
[root@node1 elasticsearch]# netstat -antp |grep 9200
tcp6 0 0 :::9200 :::* LISTEN 64463/java
To view the node information, open http://192.168.162.40:9200 in a browser on the host machine (192.168.162.129). The node information is returned as JSON, similar to the node2 output shown below.
Log in to 192.168.162.50 (the operations on node2 are the same as on node1; screenshots omitted)
1. Install the elasticsearch RPM package
Upload elasticsearch-5.5.0.rpm to the /opt directory
[root@node2 ~]# cd /opt
[root@node2 opt]# rpm -ivh elasticsearch-5.5.0.rpm
2. Load the system service
[root@node2 opt]# systemctl daemon-reload
[root@node2 opt]# systemctl enable elasticsearch.service
3. Modify the elasticsearch main configuration file
[root@node2 opt]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
[root@node2 opt]# vi /etc/elasticsearch/elasticsearch.yml
17/ cluster.name: my-elk-cluster ####cluster name
23/ node.name: node2 ####node name
33/ path.data: /data/elk_data ####data storage path
37/ path.logs: /var/log/elasticsearch/ ####log storage path
43/ bootstrap.memory_lock: false ####do not lock memory at startup
55/ network.host: 0.0.0.0 ####IP address the service binds to; 0.0.0.0 means all addresses
59/ http.port: 9200 ####listen on port 9200
68/ discovery.zen.ping.unicast.hosts: ["node1", "node2"] ####cluster discovery via unicast
[root@node2 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk-cluster
node.name: node2
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
4. Create the data storage path and grant permissions
[root@node2 opt]# mkdir -p /data/elk_data
[root@node2 opt]# chown elasticsearch:elasticsearch /data/elk_data/
5. Start elasticsearch and check that it started successfully
[root@node2 elasticsearch]# systemctl start elasticsearch.service
[root@node2 elasticsearch]# netstat -antp |grep 9200
tcp6 0 0 :::9200 :::* LISTEN 64463/java
6. View the node information: open http://192.168.162.50:9200 in a browser on the host machine (192.168.162.129). The node information is shown below:
{
"name" : "node2",
"cluster_name" : "my-elk-cluster",
"cluster_uuid" : "kWji5N02SvmMjKRzvKoMrw",
"version" : {
"number" : "5.5.0",
"build_hash" : "260387d",
"build_date" : "2017-06-30T23:16:05.735Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
"tagline" : "You Know, for Search"
}
3. Check the health and status of the cluster
Open http://192.168.162.40:9200/_cluster/health?pretty in the host machine's browser (192.168.162.129) to check the cluster health.
Open http://192.168.162.40:9200/_cluster/state?pretty to check the cluster state information.
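The same health check can be run from the shell with curl. A hedged sketch (the curl line assumes node1 is reachable at 192.168.162.40; the parsing is demonstrated against a sample response so it runs anywhere):

```shell
# On the live cluster:
#   response=$(curl -s 'http://192.168.162.40:9200/_cluster/health')
# Sample response used here for illustration:
response='{"cluster_name":"my-elk-cluster","status":"green","number_of_nodes":2}'
# Extract the overall status; "green" means all shards and replicas are allocated.
status=$(echo "$response" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"
```

On a healthy two-node cluster the health endpoint should also report `"number_of_nodes" : 2`.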
Log in to the 192.168.162.40 node1 host
Upload node-v8.2.1.tar.gz to /opt
yum install gcc gcc-c++ make -y
###Compile and install the node dependency package ##this takes a while, about 20 minutes
[root@localhost opt]# cd /opt
[root@node1 opt]# tar xzvf node-v8.2.1.tar.gz
[root@node1 opt]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure
[root@node1 node-v8.2.1]# make -j3
[root@node1 node-v8.2.1]# make install
4. Install the phantomjs front-end framework
Upload the package to /usr/local/src/
[root@localhost node-v8.2.1]# cd /usr/local/src/
[root@localhost src]# tar xjvf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@localhost src]# cd phantomjs-2.1.1-linux-x86_64/bin
[root@localhost bin]# cp phantomjs /usr/local/bin
I forgot to take screenshots on node1 for this step; the node2 screenshots show the same operations.
5. Install elasticsearch-head
[root@localhost bin]# cd /usr/local/src/
[root@localhost src]# tar xzvf elasticsearch-head.tar.gz
[root@localhost src]# cd elasticsearch-head/
[root@localhost elasticsearch-head]# npm install
6. Modify the main configuration file
[root@localhost ~]# cd ~
[root@localhost ~]# vi /etc/elasticsearch/elasticsearch.yml ####append the following at the end of the config file##
http.cors.enabled: true ##enable cross-origin access support; the default is false
http.cors.allow-origin: "*" ##domains allowed for cross-origin access
[root@localhost ~]# systemctl restart elasticsearch
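Whether the CORS settings took effect can be verified from the response headers. A sketch (the curl line assumes the node is reachable locally; the grep is demonstrated against sample headers so it runs anywhere):

```shell
# On the live node:
#   curl -s -I -H 'Origin: http://example.com' http://localhost:9200
# Sample response headers used here for illustration:
headers='HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
content-type: application/json'
# The allow-origin header mirrors http.cors.allow-origin in elasticsearch.yml.
echo "$headers" | grep -i 'access-control-allow-origin'
```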
8. Start the elasticsearch-head server
[root@localhost ~]# cd /usr/local/src/elasticsearch-head/
[root@localhost elasticsearch-head]# npm run start & ####run in the background
[1] 114729
[root@localhost elasticsearch-head]#
> [email protected] start /usr/local/src/elasticsearch-head
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
[root@localhost elasticsearch-head]# netstat -lnupt |grep 9100
tcp 0 0 0.0.0.0:9100 0.0.0.0:* LISTEN 114739/grunt
[root@localhost elasticsearch-head]# netstat -lnupt |grep 9200
tcp6 0 0 :::9200 :::* LISTEN 114626/java
Open a browser on the host machine and go to http://192.168.162.40:9100/ ; the cluster shows as healthy in green#####
Enter http://192.168.162.40:9200 in the field after Elasticsearch,
then click Connect; the cluster health value shows: green (0 of 0)
●node1 information/actions
★node2 information/actions
9. Create indexes and types
####Log in to the 192.168.162.40 node1 host##### The index is index-demo with type test; the output shows it was created successfully
[root@node1 ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
{
"_index" : "index-demo",
"_type" : "test",
"_id" : "1",
"_version" : 1,
"result" : "created",
"_shards" : {
"total" : 2,
"successful" : 2,
"failed" : 0
},
"created" : true
}
####Open the browser on the real machine and enter http://192.168.162.40:9100/ to view index information###
node1 information/actions 01234
node2 information/actions 01234
●The figure above shows that the index is split into 5 shards by default, with one replica
Click Data Browse; you will see the index index-demo with type test created on node1, along with its related information
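The document written above can be read back to confirm the write succeeded. A hedged sketch (the curl line assumes the node from this step is on localhost:9200; the check is demonstrated against a sample response so it runs anywhere):

```shell
# On the live node:
#   response=$(curl -s 'localhost:9200/index-demo/test/1?pretty')
# Sample GET response used here for illustration:
response='{"_index":"index-demo","_type":"test","_id":"1","_version":1,"found":true,"_source":{"user":"zhangsan","mesg":"hello world"}}'
# "found":true confirms the document is retrievable by its id.
echo "$response" | grep -o '"found":true' && echo "document is retrievable"
```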
Log in to the 192.168.162.50 node2 host (the same operation as node1, no screenshots will be taken)
Upload node-v8.2.1.tar.gz to /opt
###Compile and install the node dependency package ##this takes a while, about 47 minutes
yum install gcc gcc-c++ make -y
[root@localhost opt]# cd /opt
[root@node2 opt]# tar xzvf node-v8.2.1.tar.gz
[root@node2 opt]# cd node-v8.2.1/
[root@node2 node-v8.2.1]# ./configure
[root@node2 node-v8.2.1]# make -j3
[root@node2 node-v8.2.1]# make install
####Install phantomjs####
Upload the package to /usr/local/src/
[root@node2 node-v8.2.1]# cd /usr/local/src/
[root@node2 src]# tar xjvf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node2 src]# cd phantomjs-2.1.1-linux-x86_64/bin
[root@node2 bin]# cp phantomjs /usr/local/bin
###Install elasticsearch-head###
[root@node2 bin]# cd /usr/local/src/
[root@node2 src]# tar xzvf elasticsearch-head.tar.gz
[root@node2 src]# cd elasticsearch-head/
[root@node2 elasticsearch-head]# npm install
#####Modify the main configuration file###
[root@node2 ~]# cd ~
[root@node2 ~]# vi /etc/elasticsearch/elasticsearch.yml ####append the following at the end of the config file##
http.cors.enabled: true ##enable cross-origin access support; the default is false
http.cors.allow-origin: "*" ##domains allowed for cross-origin access
[root@localhost ~]# systemctl restart elasticsearch
####Start the elasticsearch-head server####
[root@node2 ~]# cd /usr/local/src/elasticsearch-head/
[root@node2 elasticsearch-head]# npm run start & ####run in the background
[1] 114729
[root@localhost elasticsearch-head]#
> [email protected] start /usr/local/src/elasticsearch-head
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
[root@node2 elasticsearch-head]# netstat -lnupt |grep 9100
tcp 0 0 0.0.0.0:9100 0.0.0.0:* LISTEN 114739/grunt
[root@node2 elasticsearch-head]# netstat -lnupt |grep 9200
tcp6 0 0 :::9200 :::* LISTEN 114626/java
####Open a browser on the host machine and go to http://192.168.162.50:9100/ ; the cluster shows as healthy in green#####
Enter http://192.168.162.50:9200 in the field after Elasticsearch,
then click Connect; the cluster health value shows: green (0 of 0)
●node1 information/actions
★node2 information/actions
10. Install logstash and do some log collection and output to elasticsearch
Log in to the host 192.168.162.60
1. Change the host name
hostnamectl set-hostname apache
2. Install the Apache service (httpd)
[root@apache ~]# yum -y install httpd
[root@apache ~]# systemctl start httpd
3. Install the Java environment
[root@apache ~]# java -version ###if not installed, run yum -y install java
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
4. Install logstash
Upload logstash-5.5.1.rpm to the /opt directory
[root@apache ~]# cd /opt
[root@apache opt]# rpm -ivh logstash-5.5.1.rpm ##install logstash
[root@apache opt]# systemctl start logstash.service ##start logstash
[root@apache opt]# systemctl enable logstash.service
[root@apache opt]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/ ##create a symlink for logstash
5. Test whether logstash (on the Apache host) and elasticsearch (on the nodes) work together properly ####
Logstash command test
Field descriptions:
● -f specifies a logstash configuration file; logstash is configured according to that file
● -e takes a string that is used as the logstash configuration (if "" is given, stdin is used as input and stdout as output by default)
● -t tests whether the configuration file is correct, then exits
6. Use standard input for input and standard output for output. Log in to 192.168.162.60 on the Apache server
[root@apache opt]# logstash -e 'input { stdin{} } output { stdout{} }'
......output omitted......
10:08:54.060 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
2018-10-12T02:08:54.116Z apache
10:08:54.164 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com ####type www.baidu.com here
2018-10-12T02:10:11.313Z apache www.baidu.com
www.sina.com.cn ####type www.sina.com.cn here
2018-10-12T02:10:29.778Z apache www.sina.com.cn
7. Use rubydebug to display detailed output; codec is a codec plugin
[root@apache opt]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug } }'
......output omitted......
10:15:07.665 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
The stdin plugin is now waiting for input:
10:15:07.693 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
10:15:07.804 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com
{
"@timestamp" => 2018-10-12T02:15:39.136Z,
"@version" => "1",
"host" => "apache",
"message" => "www.baidu.com"
}
11. Use logstash to write information into elasticsearch (input/output integration)
Log in to the host machine 192.168.162.129,
open a browser and enter http://192.168.162.40:9100/ to view the index information###
A new index logstash-2019.04.16 appears
Browse the content to see the corresponding hits
12. Log in to the 192.168.162.60 Apache host for connection configuration
The logstash configuration file
A Logstash configuration file is mainly composed of three parts: input, output, and filter (optional, used as needed)
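No filter section is used in this walkthrough; purely for illustration, a hedged sketch of what one could look like (the pattern shown is Logstash's built-in COMBINEDAPACHELOG grok pattern):

```
filter {
  grok {
    # Illustrative only: parse Apache combined-format log lines into fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```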
[root@apache opt]# chmod o+r /var/log/messages
[root@apache opt]# ll /var/log/messages
-rw----r--. 1 root root 572555 Apr 16 23:50 /var/log/messages
The configuration file below is defined to collect system logs (type system)
[root@apache opt]# vi /etc/logstash/conf.d/system.conf
input {
file{
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["192.168.162.40:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
[root@apache opt]# systemctl restart logstash.service
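Two quick checks before trusting the pipeline: the configuration can be validated with the -t flag described earlier, and the daily index name produced by the %{+YYYY.MM.dd} pattern can be previewed. A sketch (the logstash line must run on the apache host with the package installed above):

```shell
# Validate the pipeline definition without running it (on the apache host):
#   logstash -f /etc/logstash/conf.d/system.conf -t
# Preview today's index name as produced by the %{+YYYY.MM.dd} pattern:
date '+system-%Y.%m.%d'
```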
Log in to the real machine 192.168.162.129
Open the browser and enter http://192.168.162.40:9100/ to view index information###
A new index system-2019.04.16 appears
Kibana
Log in to the 192.168.162.40 node1 host
Upload kibana-5.5.1-x86_64.rpm to the /usr/local/src directory
[root@node1 ~]# cd /usr/local/src/
[root@node1 src]# rpm -ivh kibana-5.5.1-x86_64.rpm
[root@node1 src]# cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak
[root@node1 kibana]# vi kibana.yml
2/ server.port: 5601 ####the port kibana listens on
7/ server.host: "0.0.0.0" ####the address kibana listens on
21/ elasticsearch.url: "http://192.168.162.40:9200" ###connect to elasticsearch
30/ kibana.index: ".kibana" ####add the .kibana index in elasticsearch
[root@node1 kibana]# systemctl start kibana.service ###start the kibana service
[root@node1 kibana]# systemctl enable kibana.service ###enable kibana at boot
Log in to the real machine 192.168.162.129
Enter 192.168.162.40:5601 in a browser
On first login, create an index pattern named system-* ##this connects to the system log files
Index name or pattern ###enter system-* below
Then click the Create button at the bottom
Then click the Discover button at the top left; the system-* information appears
Then click add next to host below; the chart on the right now shows only the Time and host fields, which is much easier to read
Connect to the Apache host to collect the Apache log files (access and error logs)
[root@apache opt]# cd /etc/logstash/conf.d/
[root@apache conf.d]# touch apache_log.conf
[root@apache conf.d]# vi apache_log.conf
input {
file{
path => "/etc/httpd/logs/access_log"
type => "access"
start_position => "beginning"
}
file{
path => "/etc/httpd/logs/error_log"
type => "error"
start_position => "beginning"
}
}
output {
if [type] == "access" {
elasticsearch {
hosts => ["192.168.162.40:9200"]
index => "apache_access-%{+YYYY.MM.dd}"
}
}
if [type] == "error" {
elasticsearch {
hosts => ["192.168.162.40:9200"]
index => "apache_error-%{+YYYY.MM.dd}"
}
}
}
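Before starting the pipeline it is worth confirming that the Apache log files referenced above exist and are readable (paths as in apache_log.conf):

```shell
# Check readability of the log files referenced in apache_log.conf.
for f in /etc/httpd/logs/access_log /etc/httpd/logs/error_log; do
    if [ -r "$f" ]; then echo "readable: $f"; else echo "missing or unreadable: $f"; fi
done
```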
[root@apache conf.d]# /usr/share/logstash/bin/logstash -f apache_log.conf
Log in to the real machine 192.168.162.129
Open http://192.168.162.60 to generate access log entries
Open a browser and enter http://192.168.162.40:9100/ to view the index information###
You will find
apache_error-2019.04.16 apache_access-2019.04.16
Open a browser and enter http://192.168.162.40:5601
Click the management option at the bottom left---index patterns---create index pattern
----create the apache_error-* and apache_access-* index patterns respectively