Please credit the source when reposting: https://blog.csdn.net/Amor_Leo/article/details/83144739 — thank you.
Building an ELK 6.4.1 Stack and an Elasticsearch 6.4.1 Cluster with Docker
Building the Elasticsearch cluster (three nodes)
docker pull elasticsearch:6.4.1
Adjust system settings
- Edit sysctl.conf
vi /etc/sysctl.conf
- Add the following line:
vm.max_map_count=655360
- Apply it:
sysctl -p
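This step is needed because Elasticsearch 6.x runs a bootstrap check that refuses to start a production node when `vm.max_map_count` is below 262144; the 655360 set above clears it comfortably. A minimal sketch of that check against a sysctl.conf-style line:

```shell
# Sketch: verify a sysctl.conf-style line against ES's minimum of 262144.
required=262144
line='vm.max_map_count=655360'   # the line added to /etc/sysctl.conf above
value=${line#*=}                 # strip everything up to '='
if [ "$value" -ge "$required" ]; then
  echo "ok: $value >= $required"
else
  echo "too low: $value < $required"
fi
```

On the real host, `sysctl -n vm.max_map_count` prints the active value after running `sysctl -p`.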
Prepare the configuration files
- es-1.yml
# Cluster name
cluster.name: ESCluster
# Node name
node.name: node-128-1
# Bind address, IPv4 or IPv6; the default 0.0.0.0
# binds every interface on this machine
network.bind_host: 0.0.0.0
# Address other nodes use to reach this node; auto-detected
# if unset, but must be a real, reachable IP when set
network.publish_host: 192.168.0.128
# HTTP port for client traffic (default 9200)
http.port: 9200
# TCP port for node-to-node transport (default 9300)
transport.tcp.port: 9300
# Allow cross-origin REST requests (needed by the head plugin)
http.cors.enabled: true
# Which origins may issue REST requests
http.cors.allow-origin: "*"
# Node roles
node.master: true
node.data: true
# Seed list of master-eligible nodes
#discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300","0.0.0.0:9301","0.0.0.0:9302"]
discovery.zen.ping.unicast.hosts: ["192.168.0.128:9300","192.168.0.128:9301","192.168.0.128:9302"]
# Minimum number of master-eligible nodes that must be running (default 1);
# set it to (total number of master-eligible nodes / 2 + 1)
discovery.zen.minimum_master_nodes: 2
- es-2.yml
# Cluster name
cluster.name: ESCluster
# Node name
node.name: node-128-2
# Bind address, IPv4 or IPv6; the default 0.0.0.0
# binds every interface on this machine
network.bind_host: 0.0.0.0
# Address other nodes use to reach this node; auto-detected
# if unset, but must be a real, reachable IP when set
network.publish_host: 192.168.0.128
# HTTP port for client traffic (default 9200)
http.port: 9201
# TCP port for node-to-node transport (default 9300)
transport.tcp.port: 9301
# Allow cross-origin REST requests (needed by the head plugin)
http.cors.enabled: true
# Which origins may issue REST requests
http.cors.allow-origin: "*"
# Node roles
node.master: true
node.data: true
# Seed list of master-eligible nodes
#discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300","0.0.0.0:9301","0.0.0.0:9302"]
discovery.zen.ping.unicast.hosts: ["192.168.0.128:9300","192.168.0.128:9301","192.168.0.128:9302"]
# Minimum number of master-eligible nodes that must be running (default 1);
# set it to (total number of master-eligible nodes / 2 + 1)
discovery.zen.minimum_master_nodes: 2
- es-3.yml
# Cluster name
cluster.name: ESCluster
# Node name
node.name: node-128-3
# Bind address, IPv4 or IPv6; the default 0.0.0.0
# binds every interface on this machine
network.bind_host: 0.0.0.0
# Address other nodes use to reach this node; auto-detected
# if unset, but must be a real, reachable IP when set
network.publish_host: 192.168.0.128
# HTTP port for client traffic (default 9200)
http.port: 9202
# TCP port for node-to-node transport (default 9300)
transport.tcp.port: 9302
# Allow cross-origin REST requests (needed by the head plugin)
http.cors.enabled: true
# Which origins may issue REST requests
http.cors.allow-origin: "*"
# Node roles
node.master: true
node.data: true
# Seed list of master-eligible nodes
#discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300","0.0.0.0:9301","0.0.0.0:9302"]
discovery.zen.ping.unicast.hosts: ["192.168.0.128:9300","192.168.0.128:9301","192.168.0.128:9302"]
# Minimum number of master-eligible nodes that must be running (default 1);
# set it to (total number of master-eligible nodes / 2 + 1)
discovery.zen.minimum_master_nodes: 2
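The three files above differ only in `node.name`, `http.port`, and `transport.tcp.port`, so they can be generated from one template instead of maintained by hand. A sketch (the file names and the 192.168.0.128 address follow this tutorial; adjust for your own host):

```shell
# Generate es-1.yml, es-2.yml, es-3.yml in the current directory.
for i in 1 2 3; do
  cat > "es-$i.yml" <<EOF
cluster.name: ESCluster
node.name: node-128-$i
network.bind_host: 0.0.0.0
network.publish_host: 192.168.0.128
http.port: $((9199 + i))
transport.tcp.port: $((9299 + i))
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.0.128:9300","192.168.0.128:9301","192.168.0.128:9302"]
discovery.zen.minimum_master_nodes: 2
EOF
done
```

Node 1 gets ports 9200/9300, node 2 gets 9201/9301, and node 3 gets 9202/9302, matching the listings above.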
Install the ik Chinese analysis plugin
- Download the ik plugin zip matching your ES version from its GitHub releases page
- Copy the zip to /usr/conf/elasticsearch/ on your Linux host (I used Xftp)
unzip -d /usr/conf/elasticsearch/elasticsearch1/ik elasticsearch-analysis-ik-6.4.1.zip
unzip -d /usr/conf/elasticsearch/elasticsearch2/ik elasticsearch-analysis-ik-6.4.1.zip
unzip -d /usr/conf/elasticsearch/elasticsearch3/ik elasticsearch-analysis-ik-6.4.1.zip
- Finally, delete elasticsearch-analysis-ik-6.4.1.zip:
rm -rf /usr/conf/elasticsearch/elasticsearch-analysis-ik-6.4.1.zip
- Grant permissions: chmod 777 /usr/conf/elasticsearch/elasticsearch1/ik, and likewise for the other two
Create and run the containers
docker run -d --name ES1 -p 9200:9200 -p 9300:9300 -v /usr/conf/es-1.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /usr/conf/data1:/usr/share/elasticsearch/data -v /usr/conf/elasticsearch/elasticsearch1:/usr/share/elasticsearch/plugins --privileged=true elasticsearch:6.4.1
docker run -d --name ES2 -p 9201:9201 -p 9301:9301 -v /usr/conf/es-2.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /usr/conf/data2:/usr/share/elasticsearch/data -v /usr/conf/elasticsearch/elasticsearch2:/usr/share/elasticsearch/plugins --privileged=true elasticsearch:6.4.1
docker run -d --name ES3 -p 9202:9202 -p 9302:9302 -v /usr/conf/es-3.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /usr/conf/data3:/usr/share/elasticsearch/data -v /usr/conf/elasticsearch/elasticsearch3:/usr/share/elasticsearch/plugins --privileged=true elasticsearch:6.4.1
- The /usr/conf path and subdirectories after each -v are directories you create yourself (mkdir <name>) and grant permissions to: chmod 777 /usr/conf/
- data1, data2, and data3 are empty directories: chmod 777 /usr/conf/data1; chmod 777 /usr/conf/data2; chmod 777 /usr/conf/data3
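Putting the notes above together, the host side needs this directory tree before the `docker run` commands. A sketch using a stand-in root (`$PWD/demo` instead of `/usr/conf`) so it can be tried anywhere; swap in /usr/conf on the real host:

```shell
root="$PWD/demo"   # stand-in for /usr/conf on the real host
mkdir -p "$root/data1" "$root/data2" "$root/data3" \
         "$root/elasticsearch/elasticsearch1/ik" \
         "$root/elasticsearch/elasticsearch2/ik" \
         "$root/elasticsearch/elasticsearch3/ik"
chmod -R 777 "$root"   # matches the tutorial; a narrower mode also works
find "$root" -type d | sort
```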
- Check whether the ik plugin loaded:
docker logs ES1
If the following line appears, it loaded successfully:
[node-128-1] loaded plugin [analysis-ik]
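To check for that line non-interactively, grep the container logs. A sketch run here against a sample log line (on the real host, pipe `docker logs ES1` into the same grep):

```shell
# Sample line standing in for real `docker logs ES1` output.
sample='[INFO ][o.e.p.PluginsService] [node-128-1] loaded plugin [analysis-ik]'
if echo "$sample" | grep -q 'loaded plugin \[analysis-ik\]'; then
  echo "ik plugin loaded"
fi
```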
Install the head plugin
docker pull mobz/elasticsearch-head:5
docker run --name es-head -p 9100:9100 -d docker.io/mobz/elasticsearch-head:5
- To verify that Elasticsearch started, open http://192.168.0.128:9202 (use your own VM's IP); the page shows something like:
{
  "name" : "node-128-3",
  "cluster_name" : "ESCluster",
  "cluster_uuid" : "fHM6eysXQsqYEjyddfzamg",
  "version" : {
    "number" : "6.4.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "e36acdb",
    "build_date" : "2018-09-13T22:18:07.696808Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
- head web UI: http://192.168.0.128:9100/ (use your own VM's IP)
Install Kibana
- Pull the image
docker pull kibana:6.4.1
- Create and run the container
docker run -d --name kibana1 -e "ELASTICSEARCH_URL=http://192.168.0.128:9200" -p 5601:5601 kibana:6.4.1
Here 192.168.0.128 is the IP of one of the ES cluster nodes.
Install Logstash
- Pull the image
docker pull docker.elastic.co/logstash/logstash:6.4.1
- Configuration files
- /usr/conf/logstash/conf.d/logstash.conf
input {
  file {
    path => "/tmp/access_log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.0.128:9200"]  ## IP of one ES cluster node
    user => "root"                   ## username
    password => "root"               ## password
  }
}
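If /tmp/access_log holds Apache-style access logs, an optional filter block between input and output can parse each line into structured fields before indexing. This is a sketch using Logstash's built-in COMBINEDAPACHELOG grok pattern; it assumes that log format, so drop it for plain-text logs:

```
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```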
- /usr/conf/logstash/logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.url: http://192.168.0.128:9200
xpack.monitoring.elasticsearch.username: root
xpack.monitoring.elasticsearch.password: root
- Create and run the container
docker run -v /usr/conf/logstash/conf.d:/usr/share/logstash/pipeline -v /usr/conf/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml -p 5000:5000 -p 5044:5044 -p 9600:9600 --name logstash --privileged=true -d docker.elastic.co/logstash/logstash:6.4.1