2 Settings -- 6 Updating Elasticsearch

Be sure to back up your data before upgrading. Potential upgrade problems can also be detected in advance with migration-check plugins.
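As a minimal sketch of such a backup, assuming a shared-filesystem snapshot repository (the repository name my_backup, the snapshot name pre_upgrade, and the location are placeholders; the location must be listed under path.repo in elasticsearch.yml):

PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es_backups"
  }
}

PUT _snapshot/my_backup/pre_upgrade?wait_for_completion=true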
Rolling upgrades
A rolling upgrade lets you upgrade the nodes of an es cluster one by one without interrupting service to users. Not every pair of versions supports this, because data shards written by the new version cannot be read by nodes still running the old version.
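You can check which version a node is running from the root endpoint; the response contains a version.number field:

GET /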
The first step is to disable the allocation of data shards.
When you shut down a node, the master waits one minute and then starts copying that node's shards to the rest of the cluster, which generates a lot of unnecessary I/O. To avoid this, execute the following before shutting down the node:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
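You can read the setting back to confirm it took effect:

GET _cluster/settings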

The second step is to stop non-essential indexing and perform a synced flush (optional).
You can keep indexing data during the upgrade and it will work fine, but recovery is faster if indexing is paused temporarily and a synced flush is issued first:
POST _flush/synced

If documents were changed while the flush was running, the request fails for the affected shards; it is safe to retry it repeatedly until it succeeds.
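A synced flush can also be limited to specific indices (the index names here are placeholders):

POST logs-2016,metrics-2016/_flush/synced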
Step 3: Stop one node and upgrade it
Stop one of the nodes and upgrade it. It is recommended to keep the config, data, log, and plugins directories outside the es installation directory, so that they are not touched when the installation is replaced during the upgrade; see the sketch below.
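As a sketch, elasticsearch.yml can point the data and log directories outside the installation (the paths here are placeholders):

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch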
Step 4: Update the plugins
Use the elasticsearch-plugin script to install the version of each plugin that matches the new es version.
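Plugins are versioned together with es, so the usual pattern is to remove and reinstall each one (analysis-icu is just an example plugin name):

bin/elasticsearch-plugin remove analysis-icu
bin/elasticsearch-plugin install analysis-icu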
Step 5: Start the updated node
Start the node and confirm that it has joined the cluster, either by watching its log or with

GET _cat/nodes
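During a rolling upgrade it is handy to include the version column, so you can see at a glance which nodes have already been upgraded:

GET _cat/nodes?v&h=name,version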
Step 6: Re-enable shard allocation
Once the node has successfully rejoined the cluster, run the following to re-enable shard allocation:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}

Step 7: Wait for the node to recover
Confirm that data synchronization is complete with

GET _cat/health?v

The status may stay yellow until the next node is updated. Shards on an upgraded node will not be replicated back to nodes still running the old version, because the new version of es may use an updated data format that the old version cannot read; those replicas simply remain unassigned, so the init and relo columns both show 0 even while the cluster is yellow. When the next node is updated, the status should change back to green. Nodes take some time to recover; watch the progress with

GET _cat/recovery
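To list only the recoveries that are still in progress rather than the full history, _cat/recovery accepts an active_only flag:

GET _cat/recovery?v&active_only=true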
Step 8: Repeat
If all of the above succeeded, repeat the same steps for each remaining node.

Full cluster restart upgrade
When es has a major version upgrade, the only option is to restart the entire cluster.

The first step is to disable the allocation of data shards:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}

The second step is to perform a synced flush:
POST _flush/synced

Step 3: Stop es and upgrade all nodes
Step 4: Update the plugins
Use the elasticsearch-plugin script to update the plugins on every node.
Step 5: Start the cluster
Start the nodes and check the cluster status:
GET _cat/health

GET _cat/nodes
Step 6: Wait for yellow status
Once all nodes have joined the cluster, use

_cat/health

to watch the shard allocation status. When it shows yellow, all primary shards have been allocated; replicas are still held back by the allocation setting.
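Instead of polling, the cluster health API can block until a given status is reached (the 60s timeout here is arbitrary):

GET _cluster/health?wait_for_status=yellow&timeout=60s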
Step 7: Re-enable shard allocation
Turn shard allocation back on so that replicas can be assigned:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}
You can then watch the data synchronization status:
GET _cat/health
GET _cat/recovery

The upgrade has succeeded once the health status turns green.
