Upgrading an Elasticsearch 5.6.15 cluster to 6.8.4 on CentOS 7.5

Node roles:
node01 eus_mp_web01    : master=false, data=false, ingest=true
node02 eus_mp_es01     : master=true,  data=true,  ingest=true
node03 eus_mp_spider01 : master=false, data=true,  ingest=true


Background:
    Production runs a large number of Elasticsearch clusters, with different businesses on different Elasticsearch versions.
    Serious Elasticsearch vulnerabilities are disclosed regularly, so the version needs to be upgraded. At the same time, the basic authentication features of the free X-Pack tier will be enabled to prevent user data leaks.


Features of the free X-Pack tier:

Basic TLS, for encrypting communications
File and native realms, for creating and managing users
Role-based access control, for controlling user access to cluster APIs and indices
Security for Kibana Spaces, which also enables multi-tenancy within Kibana


Two upgrade strategies:
1. Rolling upgrade: upgrade nodes one at a time without interrupting service
2. Deploy a brand-new cluster on the new version, then migrate the data into it

Here we upgrade from Elasticsearch 5.6.15 to 6.8.4. One option is to configure the new cluster (pointing its data and logs paths at the old cluster's directories), shut down the old cluster, start the new one, and wait for it to recover.

Either way the data has to end up in the new-version cluster, so take a snapshot backup first so the data can be restored if the upgrade fails.


1. Before upgrading, back up the old Elasticsearch data using a snapshot

Principle: take a snapshot of the old cluster's data into the /data/esback directory, which is shared over NFS. Both the old and the new cluster reference this path in their config files via path.repo: ["/data/esback/"],
so the new cluster can also operate on this directory. Once the new cluster is built, simply restore the files in /data/esback into the new cluster's indices.

Share the directory with an NFS mount (accessible from every node in the cluster):

Goal: mount the local backup directory /data/esback onto the NFS export /data/es_snapshot, so that every node can reach the shared directory when restoring.

# Note: put the NFS export under /data rather than /opt, because /opt sits on the root filesystem by default and the disk could fill up


// On the NFS server 172.16.0.230, create the export directory
Create the directory that will be exported over NFS:
mkdir /data/es_snapshot

# On every cluster node, create /data/esback, the target directory for the Elasticsearch backup
Create the local backup directory:
mkdir /data/esback

# Use one of the Elasticsearch nodes as the NFS server
# On the NFS server:
# vim /etc/exports
/data/es_snapshot *(insecure,rw,no_root_squash,sync,anonuid=1018,anongid=1018)

# Note: anonuid and anongid here must match the user that runs Elasticsearch. If they do not match, proceed as follows
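To check the current uid/gid of the Elasticsearch user (the user name is assumed to be elasticsearch here; adjust if yours differs):

id elasticsearch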

# Add a user with a specific uid and gid (example)
groupadd -g 1000 elastic
useradd -u 1000 -g elastic elastic

# Stop the elasticsearch process

# Example commands to change the uid and gid to 1018:
usermod -u 1018 elasticsearch
groupmod -g 1018 elasticsearch

# Add the backup repository path to the config
# vim config/elasticsearch.yml

path.repo: ["/data/esback"]

# Re-apply ownership

chown -R elasticsearch.elasticsearch /data/esback
chown -R elasticsearch.elasticsearch /data/es_snapshot
chown -R elasticsearch.elasticsearch /data/es
chown -R elasticsearch.elasticsearch /opt/elasticsearch-5.6.15

# Start elasticsearch again

# Wait for the cluster to recover, then take the backup

curl 172.16.0.230:9200/_cat/indices
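A simple check to confirm the cluster has recovered before snapshotting (same node address as above):

curl 172.16.0.230:9200/_cluster/health?pretty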


// Check the exported directories (exportfs is provided by the nfs-utils package)
yum install -y nfs-utils
exportfs -rv

// On the NFS server, adjust the NFS protocol settings
vim /etc/sysconfig/nfs
# Uncomment (enable) the following lines:
RPCNFSDARGS="-N 2 -N 3"
# Turn off v4 protocol support
RPCNFSDARGS="-N 4"
# Restart for the change to take effect
systemctl restart nfs

// On the client nodes
yum install -y nfs-utils
  

// Restart the NFS service on the new cluster's machines
systemctl restart nfs

// Run the NFS mount on every Elasticsearch node
mount -t nfs 172.16.0.230:/data/es_snapshot /data/esback -o proto=tcp -o nolock
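To make the mount persistent across reboots, an /etc/fstab entry along these lines could be added (a hedged example mirroring the mount options above):

172.16.0.230:/data/es_snapshot  /data/esback  nfs  proto=tcp,nolock  0 0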

# To unmount the NFS share:

umount /data/esback

List the directories exported by the NFS server:
[root@eus_mp_web01:/opt/elasticsearch-5.6.15]# showmount -e 172.16.0.230
Export list for 172.16.0.230:
/data/es_snapshot *


[root@eus_mp_web01:/opt/elasticsearch-5.6.15]# df -Th
Filesystem                     Type      Size  Used Avail Use% Mounted on
/dev/vda1                      ext4       40G   21G   17G  55% /
devtmpfs                       devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs                          tmpfs     7.8G   45M  7.8G   1% /dev/shm
tmpfs                          tmpfs     7.8G  1.1M  7.8G   1% /run
tmpfs                          tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
tmpfs                          tmpfs     1.6G     0  1.6G   0% /run/user/0
/dev/mapper/myvg-mylv          ext4      493G  265G  207G  57% /data
tmpfs                          tmpfs     1.6G     0  1.6G   0% /run/user/1002
172.16.0.230:/data/es_snapshot nfs       493G   98G  370G  21% /data/esback


// On the old machines, give ownership of the shared directory to the user that runs Elasticsearch
chown elastic:elastic -R /data/esback

2. Create the Elasticsearch snapshot repository my_backup

Edit the config file:
vim elasticsearch.yml
# Add the following line (on every node of the old cluster), then restart the cluster
path.repo: ["/data/esback"]


Create the snapshot repository my_backup:
curl -H "Content-Type: application/json" -v -XPUT http://172.16.0.230:9200/_snapshot/my_backup -d '
{
    "type": "fs",
    "settings": {
        "location": "/data/esback",
    "compress": true
    }
}
'
# Response
{"acknowledged":true}


# Actual output from running the command
[root@eus_mp_es01:/opt/elasticsearch-5.6.15]# curl -H "Content-Type: application/json" -v -XPUT http://172.16.0.230:9200/_snapshot/my_backup -d '
> {
>     "type": "fs",
>     "settings": {
>         "location": "/data/esback",
> "compress": true
>     }
> }
> '
* Hostname was NOT found in DNS cache
*   Trying 172.16.0.230...
* Connected to 172.16.0.230 (172.16.0.230) port 9200 (#0)
> PUT /_snapshot/my_backup HTTP/1.1
> User-Agent: curl/7.36.0
> Host: 172.16.0.230:9200
> Accept: */*
> Content-Type: application/json
> Content-Length: 100
> 
* upload completely sent off: 100 out of 100 bytes
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 21
< 
* Connection #0 to host 172.16.0.230 left intact
{"acknowledged":true}


# Handling an error returned during repository verification
'RemoteTransportException[[ictr_node1][10.10.18.93:9300][internal:admin/repository/verify]]

# Insufficient permissions; fix ownership of the repository directories
chown -R es.es /opt/es_snapshot/
chown -R es.es /opt/esback/

# Delete an in-progress snapshot

curl -H "Content-Type: application/json" -v -XDELETE http://172.16.0.230:9200/_snapshot/my_backup/snapshot20191113

# Get information about a snapshot
curl -H "Content-Type: application/json" -v -XGET http://172.16.0.230:9200/_snapshot/my_backup/snapshot20191113

# Create a snapshot of all indices

# curl -H "Content-Type: application/json" -v -XPUT http://172.16.0.230:9200/_snapshot/my_backup/snapshot20191113
{"accepted":true}


Check the snapshot:

[root@eus_mp_es01:/opt/elasticsearch-5.6.15]#  curl -XGET http://172.16.0.230:9200/_snapshot/my_backup/snapshot20191113?pretty
{
  "snapshots" : [
    {
      "snapshot" : "snapshot20191113",
      "uuid" : "ggiQeJdnT-CvC9OXxnPGgw",
      "version_id" : 5061599,
      "version" : "5.6.15",
      "indices" : [
        "channel",
        "channel_rel",
        ".kibana",
        "influecer",
        "channel_list",
        "video"
      ],
      "state" : "IN_PROGRESS",
      "start_time" : "2019-11-13T03:48:23.245Z",
      "start_time_in_millis" : 1573616903245,
      "end_time" : "1970-01-01T00:00:00.000Z",
      "end_time_in_millis" : 0,
      "duration_in_millis" : -1573616903245,
      "failures" : [ ],
      "shards" : {
        "total" : 0,
        "failed" : 0,
        "successful" : 0
      }
    }
  ]
}
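For reference, once the new cluster is up with the same path.repo and the my_backup repository registered, restoring this snapshot could look like the following (a hedged sketch; indices that already exist with the same names may need to be closed or deleted first):

curl -H "Content-Type: application/json" -XPOST http://172.16.0.230:9200/_snapshot/my_backup/snapshot20191113/_restore -d '
{
    "indices": "channel,channel_rel,channel_list,influecer,video"
}
'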


Upgrade approach 1: rolling upgrade, Elasticsearch 5.6.15 --> 6.8.4

1. Back up the data so you can roll back if anything goes wrong
2. Upgrade to the new version and install X-Pack first, then ask the development team to adapt their code
a. Download the new 6.8.4 release


① Disable shard allocation (on 5.6 this still works without a Content-Type header, but note the deprecation warning in the output below)
curl -v -XPUT http://172.16.0.230:9200/_cluster/settings -d '{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}'

[root@eus_mp_es01:/data]# curl -v -XPUT http://172.16.0.230:9200/_cluster/settings -d '{
>   "persistent": {
>     "cluster.routing.allocation.enable": "none"
>   }
> }'
* Hostname was NOT found in DNS cache
*   Trying 172.16.0.230...
* Connected to 172.16.0.230 (172.16.0.230) port 9200 (#0)
> PUT /_cluster/settings HTTP/1.1
> User-Agent: curl/7.36.0
> Host: 172.16.0.230:9200
> Accept: */*
> Content-Length: 73
> Content-Type: application/x-www-form-urlencoded
> 
* upload completely sent off: 73 out of 73 bytes
< HTTP/1.1 200 OK
< Warning: 299 Elasticsearch-5.6.15-fe7575a "Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header." "Wed, 13 Nov 2019 07:29:24 GMT"
< content-type: application/json; charset=UTF-8
< content-length: 106
< 
* Connection #0 to host 172.16.0.230 left intact
{"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"none"}}}},"transient":{}}


② Temporarily stop non-essential indexing and perform a synced flush

curl -XPOST http://172.16.0.230:9200/_flush/synced

[root@eus_mp_es01:/data]# curl -XPOST http://172.16.0.230:9200/_flush/synced
{"_shards":{"total":22,"successful":22,"failed":0},"channel_rel":{"total":4,"successful":4,"failed":0},".kibana":{"total":2,"successful":2,"failed":0},"video":{"total":4,"successful":4,"failed":0},"influecer":{"total":6,"successful":6,"failed":0},"channel_list":{"total":6,"successful":6,"failed":0}}

Note: if upgrading from a version earlier than 6.3, the X-Pack plugin must be removed before upgrading: bin/elasticsearch-plugin remove x-pack


a. Back up the old elasticsearch directory, then unpack the new elasticsearch.
b. If an external config path is used, point the ES_PATH_CONF environment variable at it. Otherwise, copy the old config directory into the new elasticsearch directory.
c. Check that path.data points at the correct data directory.
d. Check that path.logs points at the correct log directory (see the command sketch below).
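A minimal command sketch of steps a–d (paths follow this document's layout and are only an example):

cd /opt
cp -a elasticsearch-5.6.15 elasticsearch-5.6.15.bak                            # a. keep a backup of the old directory
tar -xzf elasticsearch-6.8.4.tar.gz                                            # a. unpack the new version
cp elasticsearch-5.6.15/config/elasticsearch.yml elasticsearch-6.8.4/config/   # b. reuse the old config as a starting point
grep -E '^path\.(data|logs)' elasticsearch-6.8.4/config/elasticsearch.yml      # c/d. verify the data and log paths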

Config files for the new cluster

node01

[root@eus_mp_web01:/opt/elasticsearch-6.8.4]# cat config/elasticsearch.yml
# Cluster name
cluster.name: influenex-es
# Unicast hosts for cluster discovery
discovery.zen.ping.unicast.hosts: ["eus_mp_web01","eus_mp_es01","eus_mp_spider01"]
node.name: eus_mp_web01
node.master: false
node.data: false
path.data: /data/es/data
path.logs: /data/es/logs
#action.auto_create_index: false
indices.fielddata.cache.size: 1g
bootstrap.memory_lock: false
# Bind to the internal network address for better performance
network.host: 172.16.0.231
http.port: 9200

# Extra settings so the head plugin can access Elasticsearch
http.cors.enabled: true
http.cors.allow-origin: "*"

gateway.recover_after_time: 8m

# The following settings reduce wasted disk I/O from shard re-allocation when a node is briefly down or restarted
discovery.zen.fd.ping_timeout: 300s
discovery.zen.fd.ping_retries: 8
discovery.zen.fd.ping_interval: 30s
discovery.zen.ping_timeout: 180s

indices.query.bool.max_clause_count: 10240
path.repo: ["/data/esback"]

transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 1

# Enable security authentication
xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12 
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12


node02

[root@eus_mp_es01:/opt/elasticsearch-6.8.4/plugins]# cat ../config/elasticsearch.yml
# Cluster name
cluster.name: influenex-es
# Unicast hosts for cluster discovery
discovery.zen.ping.unicast.hosts: ["eus_mp_web01","eus_mp_es01","eus_mp_spider01"]
node.name: eus_mp_es01
node.master: true
node.data: true
path.data: /data/es/data
path.logs: /data/es/logs
#action.auto_create_index: false
indices.fielddata.cache.size: 1g
# Bind to the internal network address for better performance
#network.host: 192.168.254.37
network.host: 172.16.0.230
http.port: 9200
# Extra settings so the head plugin can access Elasticsearch
http.cors.enabled: true
http.cors.allow-origin: "*"

gateway.recover_after_time: 8m

# The following settings reduce wasted disk I/O from shard re-allocation when a node is briefly down or restarted
discovery.zen.fd.ping_timeout: 300s
discovery.zen.fd.ping_retries: 8
discovery.zen.fd.ping_interval: 30s
discovery.zen.ping_timeout: 180s

indices.query.bool.max_clause_count: 10240
path.repo: ["/data/esback"]

transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 1

# Enable security authentication
xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12 
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

node03
[root@eus_mp_spider01:/opt/elasticsearch-6.8.4]# cat config/elasticsearch.yml
# Cluster name
cluster.name: influenex-es
# Unicast hosts for cluster discovery
discovery.zen.ping.unicast.hosts: ["eus_mp_web01","eus_mp_es01","eus_mp_spider01"]
node.name: eus_mp_spider01
node.master: false
node.data: true
path.data: /data/es/data
path.logs: /data/es/logs
#action.auto_create_index: false
indices.fielddata.cache.size: 1g
bootstrap.memory_lock: false
# Bind to the internal network address for better performance
network.host: 172.16.0.232
http.port: 9200
# Extra settings so the head plugin can access Elasticsearch
http.cors.enabled: true
http.cors.allow-origin: "*"

gateway.recover_after_time: 8m

# The following settings reduce wasted disk I/O from shard re-allocation when a node is briefly down or restarted
discovery.zen.fd.ping_timeout: 300s
discovery.zen.fd.ping_retries: 8
discovery.zen.fd.ping_interval: 30s
discovery.zen.ping_timeout: 180s

indices.query.bool.max_clause_count: 10240

path.repo: ["/data/esback"]

transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 1

# Enable security authentication
xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12 
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

③ Shut down the node
④ Start the new-version node; be sure to switch to the es user, do not run it as root
chown -R es.es elasticsearch-6.8.4

[es@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-6.8.4]$ bin/elasticsearch -d


Repeat the above process on the other nodes


Start the upgraded node, then check the logs and use the commands below to confirm that the node has joined the cluster correctly
[root@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-5.6.15]# curl http://172.16.0.230:9200/_cat/nodes
10.10.18.93 16 98 56 1.22 0.50 0.29 di  - ictr_node1
10.10.18.92 16 88  8 0.08 0.26 0.31 mdi * ictr_node2
[root@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-5.6.15]# curl http://10.10.18.92:9200/_cat/indices
yellow open channel      vRFQoIhmT8WmSbDCfph0ag 3 1   53374      0  44.2mb  44.2mb
yellow open channel_rel  ZeeBbkogT5KtxzziUYtu_Q 2 1  459528      0 168.8mb 168.8mb
yellow open channel_list 1dk8uH8bTeikez0lFR2mJg 3 1 5509390  78630     7gb     7gb
yellow open video        HNhyt9ioSEayAotGVXRCVg 2 1  798369 228155   1.6gb   1.6gb
yellow open .kibana      lY82G_-XSniyd_bnMOLuQg 1 1      15      1 146.3kb 146.3kb
yellow open influecer    RQtQWXKIRE2UYyZlCvv7bA 3 1  148526  48641 272.8mb 272.8mb


# Note: once the nodes have joined the cluster, remove the cluster.routing.allocation.enable restriction so that shard allocation resumes and the nodes start serving data. This must be re-enabled, otherwise the cluster will stay yellow


curl -H "Content-Type: application/json" -v -XPUT http://172.16.0.230:9200/_cluster/settings -d '{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}'



# Verify the setting took effect
curl -H "Content-Type: application/json" -v -XGET http://172.16.0.230:9200/_cluster/settings

Error when re-enabling shard allocation:
[root@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-5.6.15]# curl -v -XPUT http://10.10.18.92:9200/_cluster/settings -d '{
>   "persistent": {
>     "cluster.routing.allocation.enable": "true"
>   }
> }'

* Hostname was NOT found in DNS cache
*   Trying 10.10.18.92...
* Connected to 10.10.18.92 (10.10.18.92) port 9200 (#0)
> PUT /_cluster/settings HTTP/1.1
> User-Agent: curl/7.36.0
> Host: 10.10.18.92:9200
> Accept: */*
> Content-Length: 73
> Content-Type: application/x-www-form-urlencoded
> 
* upload completely sent off: 73 out of 73 bytes
< HTTP/1.1 406 Not Acceptable
< content-type: application/json; charset=UTF-8
< content-length: 97
< 
* Connection #0 to host 10.10.18.92 left intact
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}


[root@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-5.6.15]# curl http://10.10.18.92:9200/_cluster/health?pretty
{
  "cluster_name" : "kp-dev-application",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 14,
  "active_shards" : 28,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}


Install the new version of the IK Chinese analysis plugin

https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.8.4/elasticsearch-analysis-ik-6.8.4.zip

# Unzip it into the plugins directory and restart Elasticsearch
cd /opt/es-node/elasticsearch-6.8.4/plugins
unzip -d elasticsearch-analysis-ik elasticsearch-analysis-ik-6.8.4.zip
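After the restart, the installation can be confirmed with the standard plugin listing command (run from the Elasticsearch home directory):

bin/elasticsearch-plugin list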

3. Enable X-Pack password authentication

# Generate certificates

[root@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-6.8.4]# bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: 
Enter password for elastic-stack-ca.p12 : 
[root@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-6.8.4]# ls
bin  config  elastic-stack-ca.p12  lib  LICENSE.txt  logs  modules  NOTICE.txt  plugins  README.textile
[root@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-6.8.4]# bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files

Enter password for CA (elastic-stack-ca.p12) : 
Please enter the desired output file [elastic-certificates.p12]: 
Enter password for elastic-certificates.p12 : 

Certificates written to /opt/es-node/elasticsearch-6.8.4/elastic-certificates.p12

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.


# Update the config/elasticsearch.yml configuration

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/local/elasticsearch/config/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/local/elasticsearch/config/elastic-certificates.p12
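The node configs shown earlier reference certs/elastic-certificates.p12, so the generated file needs to be copied into a certs directory under each node's config path, with ownership matching the Elasticsearch user. If a password was set on the PKCS#12 file, it should also be stored in the Elasticsearch keystore. A hedged sketch, using this document's paths:

mkdir -p /opt/elasticsearch-6.8.4/config/certs
cp elastic-certificates.p12 /opt/elasticsearch-6.8.4/config/certs/
chown -R elasticsearch:elasticsearch /opt/elasticsearch-6.8.4/config/certs
# Only needed if the certificate file was created with a password:
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password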

# Set the built-in user passwords
[elasticsearch@eus_mp_es01:/opt/elasticsearch-6.8.4]$ bin/elasticsearch-setup-passwords interactive --verbose
Running with configuration path: /opt/elasticsearch-6.8.4/config

Testing if bootstrap password is valid for http://172.16.0.230:9200/_xpack/security/_authenticate?pretty
{
  "username" : "elastic",
  "roles" : [
    "superuser"
  ],
  "full_name" : null,
  "email" : null,
  "metadata" : {
    "_reserved" : true
  },
  "enabled" : true,
  "authentication_realm" : {
    "name" : "reserved",
    "type" : "reserved"
  },
  "lookup_realm" : {
    "name" : "reserved",
    "type" : "reserved"
  }
}


# Pay close attention to the cluster status here. On a large cluster it may stay red for quite a while; a cluster of roughly 150 GB took about an hour to turn green, with active_shards_percent_as_number stuck at 0 for most of that time. As long as no errors appear, do not restart the cluster, otherwise node discovery and recovery start all over again
Checking cluster health: http://172.16.0.230:9200/_cluster/health?pretty
{
  "cluster_name" : "influenex-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 12,
  "active_shards" : 24,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana]: 
Reenter password for [kibana]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 

Trying user password change call http://172.16.0.230:9200/_xpack/security/user/apm_system/_password?pretty
{ }

Changed password for user [apm_system]

Trying user password change call http://172.16.0.230:9200/_xpack/security/user/kibana/_password?pretty
{ }

Changed password for user [kibana]

Trying user password change call http://172.16.0.230:9200/_xpack/security/user/logstash_system/_password?pretty
{ }

Changed password for user [logstash_system]

Trying user password change call http://172.16.0.230:9200/_xpack/security/user/beats_system/_password?pretty
{ }

Changed password for user [beats_system]

Trying user password change call http://172.16.0.230:9200/_xpack/security/user/remote_monitoring_user/_password?pretty
{ }

Changed password for user [remote_monitoring_user]

Trying user password change call http://172.16.0.230:9200/_xpack/security/user/elastic/_password?pretty
{ }

Changed password for user [elastic]


[es@sz_kp_wanghong_dev01_18_92:/opt/es-node/elasticsearch-6.8.4]$ curl --user elastic:pass -XGET 'http://10.10.18.92:9200/_cat/indices'
green open channel_rel  ZeeBbkogT5KtxzziUYtu_Q 2 1  459528      0 337.7mb 168.8mb
green open .security-6  iQHndFBqRe2Ss2o7KMxyFg 1 1       6      0  38.3kb  19.1kb
green open .kibana      lY82G_-XSniyd_bnMOLuQg 1 1      15      1 292.6kb 146.3kb
green open influecer    RQtQWXKIRE2UYyZlCvv7bA 3 1  148526  48641 545.6mb 272.8mb
green open channel      vRFQoIhmT8WmSbDCfph0ag 3 1   53374      0  88.4mb  44.2mb
green open channel_list 1dk8uH8bTeikez0lFR2mJg 3 1 5522172  78630    14gb     7gb
green open video        HNhyt9ioSEayAotGVXRCVg 2 1  798369 228155   3.3gb   1.6gb


Reprinted from www.cnblogs.com/reblue520/p/11859254.html