Docker + ELK 7.8 in Practice: Building a Single-Host ES Cluster

Configuration

1. Install and configure Docker

# Update yum packages to the latest versions
yum update
# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# List all available Docker versions
yum list docker-ce --showduplicates | sort -r
# Install version 17.12.0
yum install docker-ce-17.12.0.ce
# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker

2. System prerequisites for running ES in Docker

  vi /etc/sysctl.conf
  vm.max_map_count=262144
  # Apply the change
  sysctl -p
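As a quick sanity check before starting the containers, the sketch below (not part of the original article) verifies that the kernel setting actually meets the 262144 threshold Elasticsearch 7.x requires for production-mode bootstrap checks. The `check_max_map_count` function name is my own.

```shell
#!/bin/sh
# Verify vm.max_map_count is high enough for Elasticsearch.
REQUIRED=262144

# Accepts an explicit value as $1 (handy for testing); otherwise
# reads the live kernel setting from /proc.
check_max_map_count() {
    current="${1:-$(cat /proc/sys/vm/max_map_count)}"
    if [ "$current" -ge "$REQUIRED" ]; then
        echo "vm.max_map_count=$current OK"
        return 0
    else
        echo "vm.max_map_count=$current too low (need >= $REQUIRED)" >&2
        return 1
    fi
}
```

Run `check_max_map_count` with no arguments on the Docker host; if it fails, revisit the sysctl step above.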

3. Create the required directories and set permissions

# Create the top-level directories
mkdir -p /opt/elk7/es/es01
mkdir -p /opt/elk7/es/es02
mkdir -p /opt/elk7/es/es03

# Node 01: create the config, data, and log directories
cd /opt/elk7/es/es01
mkdir conf data01 logs
chmod 777 -R conf
chmod 777 -R data01
chmod 777 -R logs

# Node 02: create the config, data, and log directories
cd /opt/elk7/es/es02
mkdir conf data02 logs
chmod 777 -R conf
chmod 777 -R data02
chmod 777 -R logs

# Node 03: create the config, data, and log directories
cd /opt/elk7/es/es03
mkdir conf data03 logs
chmod 777 -R conf
chmod 777 -R data03
chmod 777 -R logs
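The three nearly identical blocks above can be condensed into one function. This is a sketch of my own, not from the article; `make_node_dirs` is a hypothetical helper, and the `chmod 777` matches the article's choice (in production, `chown 1000:1000` for the container's elasticsearch user is the safer option).

```shell
#!/bin/sh
# Create conf/, dataNN/, and logs/ for the three ES nodes under $1.
make_node_dirs() {
    base="$1"
    for i in 01 02 03; do
        node_dir="$base/es$i"
        mkdir -p "$node_dir/conf" "$node_dir/data$i" "$node_dir/logs"
        # World-writable, matching the article's chmod 777 approach.
        chmod -R 777 "$node_dir"
    done
}

# Usage matching the article's layout:
# make_node_dirs /opt/elk7/es
```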

4. Prepare the configuration files

/opt/elk7/es/es0x/conf/elasticsearch.yml (the file must sit inside the conf directory, which is later mounted as the container's config directory)

# Cluster name; must be identical across all three config files
cluster.name: elk202
# Node name; must be unique per node, here es01 / es02 / es03
node.name: es01
# Master eligibility; in this example all three nodes are both
# master-eligible and data nodes
node.master: true
# Whether this node stores data
node.data: true

# Newer versions no longer allow index.* settings in the config file
# Number of shards (default 5 in older versions)
# index.number_of_shards: 5
# Number of replicas
# index.number_of_replicas: 3

# Storage paths (inside the container)
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
# Log level; use trace or debug when troubleshooting
logger.org.elasticsearch.transport: debug

# Network settings; publish_host is the host's actual IP address
network.publish_host: 172.16.10.202
# Bind to 0.0.0.0 so the node is reachable via localhost or the host IP
network.host: 0.0.0.0
# Inter-node transport port; 9300 / 9302 / 9303 across the three files
transport.tcp.port: 9300
# Compress inter-node traffic; this setting is deprecated in later versions
transport.tcp.compress: true
# HTTP port for external access; 9200 / 9202 / 9203 in this example
http.port: 9200
http.max_content_length: 100mb

# Number of concurrent recovery threads during initial primary recovery (default 4)
#cluster.routing.allocation.node_initial_primaries_recoveries: 4
# Number of concurrent recovery threads when adding/removing nodes or rebalancing (default 4)
#cluster.routing.allocation.node_concurrent_recoveries: 2
# Bandwidth limit during recovery; 0 means unlimited (older versions used max_size_per_sec)
indices.recovery.max_bytes_per_sec: 0
# Minimum number of master-eligible nodes a node must see (default 1)
discovery.zen.minimum_master_nodes: 1
# Ping timeout when discovering other nodes (default 3s)
discovery.zen.ping_timeout: 3s
# Renamed in newer versions to discovery.zen.ping_timeout above
#discovery.zen.ping.timeout: 3s
# Whether multicast node discovery is enabled (default true)
#discovery.zen.ping.multicast.enabled: false
# Seed list of master-eligible nodes contacted on startup; note these are
# the transport ports (9300 / 9302 / 9303), identical in all three files
discovery.zen.ping.unicast.hosts: ['172.16.10.202:9300','172.16.10.202:9302','172.16.10.202:9303']
# 7.x adds an initial master nodes setting; node names can be used
cluster.initial_master_nodes: ['es01']
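Since the three files differ only in node.name and the two ports, they can be generated from one template so the values stay consistent. The sketch below is my own (the `gen_configs` helper is hypothetical) and emits only the node-specific core of the config; the remaining shared settings from the listing above would be appended the same way. Note the article's port sequence deliberately skips 9201/9301.

```shell
#!/bin/sh
# Generate per-node elasticsearch.yml files under $1 (e.g. /opt/elk7/es).
gen_configs() {
    base="$1"
    ip="172.16.10.202"   # publish host used throughout the article
    while read -r name http transport; do
        mkdir -p "$base/$name/conf"
        cat > "$base/$name/conf/elasticsearch.yml" <<YML
cluster.name: elk202
node.name: $name
node.master: true
node.data: true
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
network.publish_host: $ip
network.host: 0.0.0.0
transport.tcp.port: $transport
http.port: $http
discovery.zen.ping.unicast.hosts: ['$ip:9300','$ip:9302','$ip:9303']
cluster.initial_master_nodes: ['es01']
YML
    done <<NODES
es01 9200 9300
es02 9202 9302
es03 9203 9303
NODES
}
```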

/opt/elk7/es/es0x/conf/jvm.options (minimum and maximum heap should be set to the same value)

-Xmx2g
-Xms2g

/opt/elk7/es/es0x/conf/log4j2.properties — taken from the official reference configuration; it currently produces errors in the logs, to be investigated later.

appender.rolling.type = RollingFile 
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json 
appender.rolling.layout.type = ESJsonLayout 
appender.rolling.layout.type_name = server 
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz 
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy 
appender.rolling.policies.time.interval = 1 
appender.rolling.policies.time.modulate = true 
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy 
appender.rolling.policies.size.size = 256MB 
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete 
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName 
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* 
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize 
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB

That completes the configuration changes.

Startup and Testing

1. Pull the image

[root@kf202 conf]# docker search elasticsearch
NAME                                 DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
elasticsearch                        Elasticsearch is a powerful open source sear…   4516                [OK]                
nshou/elasticsearch-kibana           Elasticsearch-7.7.0 Kibana-7.7.0                119                                     [OK]
itzg/elasticsearch                   Provides an easily configurable Elasticsearc…   70                                      [OK]
mobz/elasticsearch-head              elasticsearch-head front-end and standalone …   65                                      
elastichq/elasticsearch-hq           Official Docker image for ElasticHQ: Elastic…   60                                      [OK]
elastic/elasticsearch                The Elasticsearch Docker image maintained by…   36                                                                   
[root@kf202 conf]# docker pull elasticsearch:7.8.1
7.8.1: Pulling from library/elasticsearch
Digest: sha256:2d9bedec9e41ab6deac52fa478ba6aae749b8e9dea4172f402013d788047368e
Status: Image is up to date for elasticsearch:7.8.1

Note: docker pull xxx:yyy pins the ES version via the yyy tag. Available tags are listed in the "Supported tags and respective Dockerfile links" section at https://hub.docker.com/_/elasticsearch.

2. Start the containers

docker run -p 9200:9200 -p 9300:9300 --name es-01 --restart always \
-v /opt/elk7/es/es01/data01:/usr/share/elasticsearch/data \
-v /opt/elk7/es/es01/conf:/usr/share/elasticsearch/config  \
-v /opt/elk7/es/es01/logs:/usr/share/elasticsearch/logs \
-d elasticsearch:7.8.1

docker run --name es-02 -p 9202:9202 -p 9302:9302 --restart always \
-v /opt/elk7/es/es02/data02:/usr/share/elasticsearch/data \
-v /opt/elk7/es/es02/conf:/usr/share/elasticsearch/config  \
-v /opt/elk7/es/es02/logs:/usr/share/elasticsearch/logs \
-d elasticsearch:7.8.1

docker run -p 9203:9203 -p 9303:9303 --name es-03 --restart always \
-v /opt/elk7/es/es03/data03:/usr/share/elasticsearch/data \
-v /opt/elk7/es/es03/conf:/usr/share/elasticsearch/config  \
-v /opt/elk7/es/es03/logs:/usr/share/elasticsearch/logs \
-d elasticsearch:7.8.1
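The three docker run commands differ only in node number and port pair, so they can be generated from a loop. This sketch of mine prints each command (a dry run via echo); remove the echo to actually execute them. The `run_nodes` name is hypothetical, and the port pairs match the article: 9200/9300, 9202/9302, 9203/9303.

```shell
#!/bin/sh
# Print the docker run command for each of the three ES nodes.
run_nodes() {
    while read -r n http transport; do
        echo docker run -d --name "es-$n" --restart always \
            -p "$http:$http" -p "$transport:$transport" \
            -v "/opt/elk7/es/es$n/data$n:/usr/share/elasticsearch/data" \
            -v "/opt/elk7/es/es$n/conf:/usr/share/elasticsearch/config" \
            -v "/opt/elk7/es/es$n/logs:/usr/share/elasticsearch/logs" \
            elasticsearch:7.8.1
    done <<NODES
01 9200 9300
02 9202 9302
03 9203 9303
NODES
}
```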

3. Check that the containers are running

[root@kf202 conf]# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                                NAMES
2853271dcdd6        elasticsearch:7.8.1   "/tini -- /usr/local…"   44 minutes ago      Up 44 minutes       9200/tcp, 0.0.0.0:9203->9203/tcp, 9300/tcp, 0.0.0.0:9303->9303/tcp   es-03
09e704c4c8dd        elasticsearch:7.8.1   "/tini -- /usr/local…"   44 minutes ago      Up 44 minutes       9200/tcp, 0.0.0.0:9202->9202/tcp, 9300/tcp, 0.0.0.0:9302->9302/tcp   es-02
e5742fcd401f        elasticsearch:7.8.1   "/tini -- /usr/local…"   44 minutes ago      Up 44 minutes       0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp                       es-01
b9498898048f        mysql:5.7.30          "docker-entrypoint.s…"   26 hours ago        Up 26 hours         33060/tcp, 0.0.0.0:3308->3306/tcp                                    mysql3308
31d7c8b09767        mysql:5.7.30          "docker-entrypoint.s…"   28 hours ago        Up 27 hours         33060/tcp, 0.0.0.0:3307->3306/tcp                                    mysql3307
b96fb0a12cd4        mysql:5.7.30          "docker-entrypoint.s…"   28 hours ago        Up 27 hours         0.0.0.0:3306->3306/tcp, 33060/tcp                                    mysql3306
c8daec0202a4        registry              "/entrypoint.sh /etc…"   10 months ago       Up 3 weeks                                                                               registry

es-01, es-02, and es-03 are all up and running.

4. Check the cluster status

[root@kf202 conf]# curl http://172.16.10.202:9200/_cluster/health?pretty
{
  "cluster_name" : "elk202",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[root@kf202 conf]# curl http://172.16.10.202:9200
{
  "name" : "es01",
  "cluster_name" : "elk202",
  "cluster_uuid" : "aK2Cx0cPTB-7FyLA6CCIIQ",
  "version" : {
    "number" : "7.8.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89",
    "build_date" : "2020-07-21T16:40:44.668009Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[root@kf202 conf]# curl http://172.16.10.202:9202
{
  "name" : "es02",
  "cluster_name" : "elk202",
  "cluster_uuid" : "aK2Cx0cPTB-7FyLA6CCIIQ",
  "version" : {
    "number" : "7.8.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89",
    "build_date" : "2020-07-21T16:40:44.668009Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[root@kf202 conf]# curl http://172.16.10.202:9203
{
  "name" : "es03",
  "cluster_name" : "elk202",
  "cluster_uuid" : "aK2Cx0cPTB-7FyLA6CCIIQ",
  "version" : {
    "number" : "7.8.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89",
    "build_date" : "2020-07-21T16:40:44.668009Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

1) The first command shows the cluster's node health: all 3 nodes are present and the status is green, as expected.

2) The following commands return per-node information (the nodes are distinguished by port here); the identical cluster_uuid values confirm that the nodes joined the same cluster.
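The health check above is easy to turn into a script, e.g. a wait-until-green loop before running smoke tests. The small sketch below (mine, not the article's; `cluster_is_green` is a hypothetical helper) reads the _cluster/health JSON on stdin and succeeds only when the status is green.

```shell
#!/bin/sh
# Succeed only if the piped _cluster/health JSON reports status green.
cluster_is_green() {
    grep -q '"status" : "green"'
}

# Usage with the host/port from this article:
# until curl -s http://172.16.10.202:9200/_cluster/health?pretty | cluster_is_green; do
#     echo "waiting for green..."; sleep 2
# done
```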

Installing a Visualization Tool

1. Pull the image

[root@kf202 conf]# docker pull mobz/elasticsearch-head:5
5: Pulling from mobz/elasticsearch-head
75a822cd7888: Pull complete 
57de64c72267: Pull complete 
4306be1e8943: Pull complete 
871436ab7225: Pull complete 
0110c26a367a: Pull complete 
1f04fe713f1b: Pull complete 
723bac39028e: Pull complete 
7d8cb47f1c60: Pull complete 
7328dcf65c42: Pull complete 
b451f2ccfb9a: Pull complete 
304d5c28a4cf: Pull complete 
4cf804850db1: Pull complete 
Digest: sha256:55a3c82dd4ba776e304b09308411edd85de0dc9719f9d97a2f33baa320223f34
Status: Downloaded newer image for mobz/elasticsearch-head:5

Note:

docker pull may be slow or fail outright; in that case, configure a registry mirror as described in guides such as https://www.cnblogs.com/jakaBlog/p/11756015.html.

2. Start the container

[root@kf202 conf]# docker run -it --name elasticsearch-head --restart always -p 9100:9100 -d mobz/elasticsearch-head:5
ee42ca5cb48cfaab82573387c3fb2d5077529b67ef2a7faba9012b87aa8517e5
[root@kf202 conf]# docker ps|grep head
ee42ca5cb48c        mobz/elasticsearch-head:5   "/bin/sh -c 'grunt s…"   2 minutes ago       Up 2 minutes        0.0.0.0:9100->9100/tcp                                               elasticsearch-head

The container started successfully.

3. Verify in the browser

(elasticsearch-head screenshot)
Three nodes are visible; es03 is currently the master (marked with a star), and the cluster status is green.

Note: browser access will hit a cross-origin (CORS) problem, so add the settings below to elasticsearch.yml. It is enough to add them to the node that elasticsearch-head queries, though you can add them to all nodes; whichever node's IP carries the settings is the one elasticsearch-head (at that ip:9100) can talk to.

# CORS settings
http.cors.enabled: true
http.cors.allow-origin: '*'

This completes the single-host ES cluster setup.


Reprinted from blog.csdn.net/u010361276/article/details/107688127