Elasticsearch configuration file (elasticsearch.yml) explained


1. Cluster (cluster)
# The cluster name identifies your cluster; automatic discovery uses it. The default value is elasticsearch.
# If you run multiple clusters on the same network, make sure your cluster name is unique.
#
# cluster.name: my-application

2. Node (node)
# The node name is generated automatically at boot time, so you do not have to configure it manually. You can
# also give the node a specific name.
#
# node.name: "Franz Kafka"

# A node can serve several roles. By default it is both master-eligible and a data node. If you want a node that
# is never a master and is only used to store data, it becomes a "workhorse" of your cluster. If you want a node
# that can become master but stores no data and keeps resources free, it becomes a "coordinator" of your cluster.
# If you want neither role, the node acts as a "search load balancer" that only routes requests.
#
# Allow this node to be elected as a master node (enabled by default):
# node.master: true

# Allow this node to store data (enabled by default):
# node.data: true

# By default, multiple nodes are allowed to start on a single system. To disable this, configure the following:
# node.max_local_storage_nodes: 1
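The role combinations described above can be sketched as a config fragment (a minimal sketch; which combination you pick depends on the node's intended job):

```yaml
# A dedicated data node ("workhorse"): stores data but is never elected master.
node.master: false
node.data: true
# Other combinations, set the same way:
#   coordinator (master-only):        node.master: true,  node.data: false
#   search load balancer (no roles):  node.master: false, node.data: false
```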

3. Index (index)
# Set the number of shards for an index (default: 5)
# index.number_of_shards: 5

# Set the number of replicas for an index (default: 1)
# index.number_of_replicas: 1
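With the defaults above, the total shard count per index is number_of_shards × (1 + number_of_replicas); a minimal sketch:

```yaml
# 5 primary shards, each with 1 replica:
# 5 * (1 + 1) = 10 shards spread across the cluster.
index.number_of_shards: 5
index.number_of_replicas: 1
```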

4. Paths (path)
# Path to the directory containing configuration (this file and logging.yml)
# path.conf: /path/to/conf

# Path to the directory where this node stores index data (may list several locations; the location with the
# most free space is preferred)
# path.data: /path/to/data
# path.data: /path/to/data1,/path/to/data2

# Path for temporary files
# path.work: /path/to/work

# Path for log files
# path.logs: /path/to/logs

# Plugin installation path
# path.plugins: /path/to/plugins

5. Plugin (plugin)
# If the plugins listed below are not installed on the current node, the node will not start
# plugin.mandatory: mapper-attachments,lang-groovy

6. Memory (memory)
# Set this property to true to lock the process memory (preventing it from being swapped out)
# bootstrap.mlockall: true

7. Network (network)
# By default Elasticsearch binds itself to 0.0.0.0, listening for HTTP transport on a port in [9200-9300] and
# for node-to-node communication on a port in [9300-9400]. (A range means that if a port is already occupied,
# the next port is tried automatically.)
# Set a specific bind address (IPv4 or IPv6):
# network.bind_host: 192.168.0.1

# Set the address other nodes use to communicate with this node. If not set, it is determined automatically.
# network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host' at once:
# network.host: 192.168.0.1

# Set a custom port for communication between nodes (default is 9300)
# transport.tcp.port: 9300

# Enable compression for communication between all nodes (disabled by default)
# transport.tcp.compress: true

# Set a custom port for HTTP transport
# http.port: 9200

# Set a custom maximum allowed content length
# http.max_content_length: 100mb

# Disable HTTP
# http.enabled: false
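A minimal sketch combining the network settings above into one fragment (192.168.0.1 is the document's placeholder address):

```yaml
# Bind and publish on the same address, and pin both ports explicitly.
network.host: 192.168.0.1
transport.tcp.port: 9300
http.port: 9200
# Optionally compress node-to-node traffic (off by default):
transport.tcp.compress: true
```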

8. Gateway (gateway)
# The default gateway type is the "local" gateway (recommended)
# gateway.type: local

# Allow the recovery process to begin after N nodes in the cluster have started
# gateway.recover_after_nodes: 1

# Set the timeout for the initial recovery process; the clock starts once the configured number of nodes
# (recover_after_nodes) have started
# gateway.recover_after_time: 5m

# Set the number of nodes expected in the cluster. Once N nodes have started (and recover_after_nodes is also
# satisfied), the recovery process begins immediately (without waiting for recover_after_time to elapse)
# gateway.expected_nodes: 2
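The three gateway settings above work together; a sketch for a three-node cluster (the node counts here are illustrative, not defaults):

```yaml
# Begin recovery once at least 2 nodes are up and 5 minutes have passed,
# or immediately as soon as all 3 expected nodes have joined.
gateway.recover_after_nodes: 2
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
```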

 

Reference document: http://www.linuxidc.com/Linux/2015-02/114244.htm


Origin: www.cnblogs.com/gavinYang/p/11200218.html