Elasticsearch 5.6.5 Docker deployment


Modify system configuration

  1. Modify the limits.conf configuration: /etc/security/limits.conf
   ## Add or change the entries to:
    * soft nofile 65536 # at least 65536
    * hard nofile 65536 # at least 65536
    * soft nproc 2048 # at least 2048
    * hard nproc 4096 # at least 4096
    * soft memlock unlimited
    * hard memlock unlimited
  2. Modify the sysctl.conf configuration: /etc/sysctl.conf
   ## Add or change the entry below, then run sysctl -p to apply it
   vm.max_map_count=262144 # at least 262144
  3. Modify the 90-nproc.conf configuration: /etc/security/limits.d/90-nproc.conf
   ## Add or change the entry to:
   * soft nproc 2048 # at least 2048
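To confirm these changes took effect, a quick read-only check from a new shell session might look like this (the expected values mirror the thresholds above):

ulimit -n   # open files, expect >= 65536
ulimit -u   # max user processes, expect >= 2048
ulimit -l   # max locked memory, expect unlimited

# Kernel parameter applied by sysctl -p
sysctl vm.max_map_count   # expect vm.max_map_count = 262144 (or higher)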

Run Elasticsearch

Pull the image: docker pull elasticsearch:5.6.5

Run the startup command:

docker run --name=es-1 --net=host --restart on-failure:10 --log-opt max-size=128m --ulimit nofile=65536:65536 --ulimit nproc=4096:4096 --ulimit memlock=-1:-1 -m 9000m \
        -v "$PWD/elasticsearch/esdata":/usr/share/elasticsearch/data \
        -v "$PWD/elasticsearch/logs":/usr/share/elasticsearch/logs \
        -e ES_JAVA_OPTS="-Xms4000m -Xmx4000m -Xss16m" \
        -d elasticsearch:5.6.5 \
        -Ediscovery.zen.minimum_master_nodes=1 \
        -Ediscovery.zen.ping.unicast.hosts=ip1:9300,ip2:9300 \
        -Ecluster.name=es-log \
        -Enetwork.host=<internal-IP>
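For repeatable deployments, the same settings can also be expressed as a compose file. The following is only a minimal sketch under the same assumptions as the command above (same paths, placeholder IPs, and container name; adapt before use):

# docker-compose.yml (version 2 syntax) — sketch equivalent to the docker run above
version: "2"
services:
  es-1:
    image: elasticsearch:5.6.5
    network_mode: host
    restart: on-failure   # the retry cap (:10) from the run command is omitted here
    mem_limit: 9000m
    logging:
      driver: json-file
      options:
        max-size: "128m"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
      nproc: 4096
      memlock: -1
    volumes:
      - ./elasticsearch/esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
    environment:
      - ES_JAVA_OPTS=-Xms4000m -Xmx4000m -Xss16m
    command: >
      elasticsearch
      -Ediscovery.zen.minimum_master_nodes=1
      -Ediscovery.zen.ping.unicast.hosts=ip1:9300,ip2:9300
      -Ecluster.name=es-log
      -Enetwork.host=<internal-IP>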

Note: the startup command limits the memory available to the docker container (-m 9000m) and sets the JVM heap size, the minimum number of master-eligible nodes in the cluster, the IPs of the cluster nodes, and the cluster name. These settings are critical and must be adjusted to your actual environment.

The node name needs no extra configuration; it defaults to the machine name of the current host.
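If an explicit name is preferred over the hostname default, one more -E flag can be appended to the docker run command above (the value here is just an example):

# Appended after -Enetwork.host=... in the startup command
-Enode.name=es-node-1   # example value; without it, the hostname is used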

Verify the Elasticsearch cluster

  1. Run curl <internal-IP>:9200/_cat/nodes?v on any server in the cluster. If it returns information for all nodes, the cluster has been built successfully. In the node.role column, m/d/i mark the master-eligible, data, and ingest roles, and * in the master column marks the elected master.
   ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
   ip1           32          93   1    0.66    0.48     0.45 mdi       *      node2
   ip2           68          92   3    0.72    0.65     0.61 mdi       -      node1
   ip3           35          92   1    0.77    0.72     0.53 mdi       -      node3
  2. Check the health of the cluster with curl -XGET <internal-IP>:9200/_cluster/health?pretty, which returns key information similar to:
   {
     "cluster_name" : "es-log",
     "status" : "green",
     "timed_out" : false,
     "number_of_nodes" : 3,
     "number_of_data_nodes" : 3,
     "active_primary_shards" : 10,
     "active_shards" : 20,
     "relocating_shards" : 0,
     "initializing_shards" : 0,
     "unassigned_shards" : 0,
     "delayed_unassigned_shards" : 0,
     "number_of_pending_tasks" : 0,
     "number_of_in_flight_fetch" : 0,
     "task_max_waiting_in_queue_millis" : 0,
     "active_shards_percent_as_number" : 100.0
   }

The most important piece of response information is the status field. Status may be one of three values:

  • green All primary and replica shards are allocated. Your cluster is 100% available.
  • yellow All primary shards are allocated, but at least one replica is missing. No data is lost, so search results are still complete; however, high availability is weakened, and if more shards disappear you can lose data. Treat yellow as a warning that should be investigated promptly.
  • red At least one primary shard (and all of its replicas) is missing. This means you are missing data: searches will return only partial results, and write requests routed to that shard will return an exception.
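When scripting against the cluster, the health endpoint can also block until a target status is reached, which avoids hand-written polling loops. For example:

# Wait up to 50s for at least yellow; the timed_out field in the response
# reports whether the status was actually reached within the timeout
curl -XGET '<internal-IP>:9200/_cluster/health?wait_for_status=yellow&timeout=50s&pretty'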

Create an index template

After the cluster starts successfully, run the following command on any Elasticsearch node to create an index template.

curl -XPUT '<internal-IP>:9200/_template/test_template' -H 'Content-Type: application/json' -d'
{
	"template": "ctu-*",
	"settings": {
		"index": {
			"refresh_interval": "120s",
			"number_of_shards": "5",
			"max_result_window": "50000",
			"number_of_replicas": "1"
		}
	},
	"mappings": {
		"engineLog": {
			"dynamic_templates": [{
				"strings": {
					"match_mapping_type": "string",
					"mapping": {
						"type": "keyword"
					}
				}
			}]
		}
	},
	"aliases": {
		"es-log": {}
	}
}
'

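To confirm the template was stored, it can be read back from the same endpoint:

# Returns the template body registered above
curl -XGET '<internal-IP>:9200/_template/test_template?pretty'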

In the template, max_result_window is the maximum number of results (from + size) that an Elasticsearch query can return. When setting it, keep the actual memory situation in mind and do not make it too large. To check the settings of an index, run curl <internal-IP>:9200/indexName/_settings?pretty
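Because max_result_window is a dynamic index setting, it can also be changed on an existing index without recreating it. A sketch, where indexName is a placeholder:

curl -XPUT '<internal-IP>:9200/indexName/_settings' -H 'Content-Type: application/json' -d'
{
  "index": {
    "max_result_window": 50000
  }
}
'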


Possible problems

  1. After modifying limits.conf, verify the change with ulimit -a. If it has not taken effect, open a new session window or restart the sshd service (service sshd restart).
  2. If the elasticsearch container still fails to start after limits.conf has taken effect, and the container log shows max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536], try restarting the docker service.
  3. If neither of the above works, try adding these parameters when starting the elasticsearch container (and verify them inside the container as shown below): --ulimit nofile=65536:65536 --ulimit nproc=2048:4096 --ulimit memlock=-1:-1
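To verify what limits the container actually received, the values can be read from inside it (es-1 is the container name used above):

# Expect 65536 / 4096 / unlimited, matching the --ulimit flags
docker exec es-1 sh -c 'ulimit -n; ulimit -u; ulimit -l'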

Origin: juejin.im/post/7084964583013613581