- ELK runs on a single server, and two further servers each run an Elasticsearch node, giving a 3-node ES cluster.
- You can also build just the ES cluster on its own, or run a standalone single-node ELK.
- https://www.cnblogs.com/lz0925/p/12018209.html // single-node ELK
- https://www.cnblogs.com/lz0925/p/12011026.html // three-node ES cluster
Important notes
- Server memory: no less than 8 GB is recommended; 4 GB is workable if the machine runs nothing else, but anything below 4 GB is not worth considering.
- My system: Alibaba Cloud CentOS 7.1. Run cat /proc/version to check the Linux kernel version; it must be no lower than 3.10.
- Server list: 172.168.50.41, 172.168.50.40, 172.168.50.240 (the full ELK stack is built on 50.41; the other two servers run ES nodes only).
1. Install docker and docker-compose
# update yum
yum update
# remove old docker versions (if any)
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
# install system dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# add the docker repo (downloads are faster from this mirror)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# refresh the yum cache
yum makecache fast
# install docker-ce
yum -y install docker-ce
# start the docker daemon
sudo systemctl start docker
# configure the Aliyun registry mirror (recommended; the accelerator address below is only an example and has no effect -- use your own free Aliyun accelerator, tutorials are easy to find online)
mkdir /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{"registry-mirrors": ["https://6y4h812t.mirror.aliyuncs.com"]}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
# install docker-compose
cd /usr/local/src
sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
# docker and docker-compose are now installed; if anything fails, leave a comment -- these lines were copied from an install script I wrote earlier.
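As an optional sanity check (not part of the original script), you can confirm the daemon responds and the compose binary is on the PATH before continuing:
# optional: verify that docker and docker-compose work
docker version
docker run --rm hello-world
docker-compose version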
2. Set up a three-node ES cluster (the following steps must be executed on all three servers)
- 1. Create a directory to hold the docker-compose yml file
Create a folder, e.g. /root/elk (any path will do)
mkdir /root/elk
- 2. Create a docker-compose.yml file
cd /root/elk
touch docker-compose.yml
- 3. The contents of docker-compose.yml are as follows
version: '3'
services:
  elasticsearch:                     # service name
    image: elasticsearch:7.3.1       # image to use
    container_name: elasticsearch    # container name
    restart: always                  # restart automatically on failure
    environment:
      - node.name=41                            # node name, must be unique within the cluster
      - network.publish_host=172.168.50.41      # address published to other cluster members so they can reach this node's es service
      - network.host=0.0.0.0                    # bind address (IPv4 or IPv6), default 0.0.0.0
      - discovery.seed_hosts=172.168.50.40,172.168.50.240,172.168.50.41          # new in es7.x: addresses of master-eligible nodes, any of which may be elected master after startup
      - cluster.initial_master_nodes=172.168.50.40,172.168.50.240,172.168.50.41  # new in es7.x: required to elect a master when bootstrapping a new cluster
      - cluster.name=es-cluster                 # cluster name; nodes with the same name form one cluster
      # - http.cors.enabled=true       # enable CORS -- setting it here has no effect; instead add it to elasticsearch.yml inside the container, or map that file to the host, edit it and restart
      # - http.cors.allow-origin="*"   # allow all origins -- same caveat as above
      - bootstrap.memory_lock=true     # lock memory to avoid swapping, recommended by the official docs
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"  # JVM heap size
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /root/elk/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml  # map the es config file to the host to set CORS, otherwise the head plugin cannot connect to this node
      - esdata:/usr/share/elasticsearch/data   # data directory; note that esdata is an entry under the top-level volumes below
    ports:
      - 9200:9200   # http port
      - 9300:9300   # transport port for node-to-node communication, not http
volumes:
  esdata:
    driver: local   # a matching directory and files are created on the host; how to find them is explained below
- 4. The other two servers use this same configuration, but the IPs and the node name must be changed for each machine (see the snippet below).
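For example, on 172.168.50.40 only these environment lines would differ from the file above (naming the node after the host IP here is just an illustration; any unique name works):
    environment:
      - node.name=40                         # must be unique within the cluster
      - network.publish_host=172.168.50.40   # this machine's own address
      # every other setting stays identical to the 50.41 file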
- 5. Kernel memory setting, required on all three servers
There are two ways:
1. Takes effect immediately, but must be set again after the machine reboots:
sysctl -w vm.max_map_count=262144
2. Edit the config file directly: add one line to /etc/sysctl.conf (fixes the container's vm.max_map_count being too small):
vi /etc/sysctl.conf
vm.max_map_count=262144
sysctl -p   # apply immediately
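Either way, you can verify the value actually in effect (a quick check, not one of the original steps):
sysctl vm.max_map_count   # should print: vm.max_map_count = 262144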
- 6. The es configuration file
- In the /root/elk directory, run vim elasticsearch.yml
- elasticsearch.yml reads as follows:
network.host: 0.0.0.0
http.cors.enabled: true   # enable CORS, mainly so the head plugin can access es
http.cors.allow-origin: "*"
- 7. Start the es cluster: run docker-compose up -d on each of the three servers in turn.
- 8. Then visit http://172.168.50.41:9200/_cluster/health?pretty to check whether the cluster is healthy; if it is running normally it returns something like the following:
{
"cluster_name" : "es-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
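To see which node was elected master, the _cat API gives a quick per-node view; the columns may differ slightly between versions, and the * under the master column marks the elected master:
curl http://172.168.50.41:9200/_cat/nodes?v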
- 9. Use the head plugin to monitor the es cluster
// pull the image
docker pull mobz/elasticsearch-head:5
// start it
docker run -d --name es_admin -p 9100:9100 mobz/elasticsearch-head:5
- 10. In the browser, open http://IP:9100
Note: use the address of the server your es node is running on, then click Connect. If you see a display like this, everything is fine; here there are three nodes, and the star marks the master.
If clicking Connect does nothing, and F12 shows the network request returning 403, or 200 but with a cross-origin error in the console, then the CORS settings are wrong. Enter the container with docker exec -it <container ID> /bin/bash and check whether /usr/share/elasticsearch/config/elasticsearch.yml contains the following two lines:
http.cors.enabled: true
http.cors.allow-origin: "*"
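Since the compose file above already maps elasticsearch.yml into /root/elk on the host, a simpler route is to edit the host copy and restart just that service (a suggestion, not one of the original steps):
vim /root/elk/elasticsearch.yml        # add the two http.cors.* lines shown above
cd /root/elk && docker-compose restart elasticsearch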
- About the top-level volumes mentioned above: how do you find where a volume is mounted on the host?
# docker volume create elk_data    // create a named volume
# docker volume ls                 // list all volumes
# docker volume inspect elk_data   // show the details of a volume, including the real directory on the host
Note: if you want to delete a volume or recreate it, run the volume delete command:
# docker volume rm edc-nginx-vol   // this command also deletes the underlying files; if you delete the files first you must still run it, otherwise the node may fail to rejoin the cluster.
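For reference, docker volume inspect output looks roughly like this (the exact Mountpoint depends on your docker data root, /var/lib/docker by default):
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/elk_data/_data",
        "Name": "elk_data",
        "Scope": "local"
    }
]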
Once all of the above is done, the three-node es cluster is complete. Next, modify the /root/elk/docker-compose.yml file on server 172.168.50.41: it previously ran only es, and will now run the full ELK stack.
Preliminary work for ELK: upgrade pip (used earlier to pull images), then pull the images.
yum -y install epel-release
yum -y install python-pip
// upgrade pip
pip install --upgrade pip
// pull elasticsearch, logstash and kibana; since es5 the three products generally keep their version numbers in sync.
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.1.1 && docker pull docker.elastic.co/logstash/logstash:7.1.1 && docker pull docker.elastic.co/kibana/kibana:7.1.1
3. ELK setup
- 1. Go to the directory holding the elk configuration files
cd /root/elk
vim docker-compose.yml   // the file docker-compose uses to start this group of elk containers
- 2. The contents of docker-compose.yml are as follows:
version: '3'
services:
  elasticsearch:                                                # service name
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1  # image to use
    container_name: elasticsearch7.1.1                          # container name
    environment:                                                # environment variables
      - node.name=node-41                       # node name, must be unique within the cluster
      - network.publish_host=172.168.50.41      # address published to other cluster members so they can reach this node's es service
      - network.host=0.0.0.0                    # bind address (IPv4 or IPv6), default 0.0.0.0
      - discovery.seed_hosts=172.168.50.40,172.168.50.240,172.168.50.41          # new in es7.x: addresses of master-eligible nodes, any of which may be elected master after startup
      - cluster.initial_master_nodes=172.168.50.40,172.168.50.240,172.168.50.41  # new in es7.x: required to elect a master when bootstrapping a new cluster
      - cluster.name=es-cluster                 # cluster name; nodes with the same name form one cluster
      #- http.cors.enabled=true        # enable CORS, mainly for the head plugin; has no effect here (reason unknown), so the es config file is mapped to the host and edited there instead
      #- http.cors.allow-origin="*"    # allow all origins; same caveat as above
      - bootstrap.memory_lock=true     # lock memory to avoid swapping, recommended by the official docs
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"  # JVM heap size
    volumes:
      - esdata:/usr/share/elasticsearch/data   # es data directory
      - /root/elk/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml  # map the es config file in the container to the host
    hostname: elasticsearch   # service hostname
    ulimits:                  # memory lock limits
      memlock:
        soft: -1
        hard: -1
    restart: always           # restart policy
    ports:
      - 9200:9200   # http port
      - 9300:9300   # transport port for node-to-node communication, not http
  kibana:
    image: docker.elastic.co/kibana/kibana:7.1.1
    container_name: kibana7.1.1
    environment:
      - elasticsearch.hosts=http://elasticsearch:9200   # es node to connect to
    hostname: kibana
    depends_on:
      - elasticsearch   # depends on the es service: the es container starts before kibana
    restart: always
    ports:
      - 5601:5601   # externally accessible port
  logstash:
    image: docker.elastic.co/logstash/logstash:7.1.1
    container_name: logstash7.1.1
    hostname: logstash
    restart: always
    depends_on:
      - elasticsearch
    ports:
      - 9600:9600
      - 5044:5044
volumes:            # top-level volumes
  esdata:
    driver: local   # a matching directory and files are created on the host; how to find them is explained in the volumes section above
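One thing the compose file above does not do is give logstash a pipeline, so the official image falls back to its default one. If you want logstash to receive beats data on the 5044 port already exposed and forward it to the cluster, a minimal pipeline sketch could look like this (the host path /root/elk/logstash.conf is my own choice; you would also add a volumes entry in the logstash service mapping it to /usr/share/logstash/pipeline/logstash.conf):
# /root/elk/logstash.conf -- minimal beats -> elasticsearch pipeline (sketch)
input {
  beats {
    port => 5044                              # matches the 5044:5044 mapping above
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]    # the service name resolves inside the compose network
    index => "logstash-%{+YYYY.MM.dd}"        # one index per day
  }
}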
- In the current directory, run docker-compose up -d to start the ELK services.
When "done" is printed the startup succeeded. Run docker-compose logs to view the logs (it prints the logs of all three elk services) and docker ps -a to see the running status of the three containers.
Open http://IP:5601/ in the browser to access kibana.
- Also connect to the es node on the ELK server through the head plugin to confirm it connects normally. If all three es nodes are healthy and http://IP:5601 can be visited normally, the ELK cluster setup is complete.
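As a final quick check, the logstash monitoring API on the 9600 port exposed above should answer once the container is up (a generic check, not in the original text):
curl http://IP:9600/?pretty    # returns logstash host and version info as JSON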