Elasticsearch performance optimization

When using Elasticsearch, CPU spikes and memory exhaustion sometimes occur, and additional tuning is needed. The write-path optimizations below help with bulk-indexing workloads.

 

1. Use Elasticsearch's own ID generation strategy. With auto-generated IDs, Elasticsearch can skip the lookup that checks whether a document with that ID already exists, which speeds up indexing.
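Letting Elasticsearch generate the ID means indexing with POST and no document ID in the URL, instead of PUT with an explicit one. For example (the index name and body here are illustrative):

post localhost:9200/myindex/_doc

{ "field": "value" }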

 

2. Set the number of replicas to 0 during the bulk load, and restore it after writing completes

put localhost:9200/_settings

{ "number_of_replicas": 0 }
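Once the load finishes, the replicas can be restored; the value to set back depends on the index's original configuration, one replica being a common choice:

put localhost:9200/_settings

{ "number_of_replicas": 1 }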

 

3. Disable refresh during the bulk load, and re-enable it afterwards

put localhost:9200/_settings

{ "refresh_interval": "-1" }
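When writing is done, refresh can be turned back on; the default interval is 1s:

put localhost:9200/_settings

{ "refresh_interval": "1s" }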

 

4. Write the translog to disk asynchronously to increase write speed. Note the trade-off: with async durability, a node crash can lose up to one sync_interval's worth of acknowledged operations.

put localhost:9200/_settings

{ "translog.durability": "async" }



The fully qualified setting names, together with an explicit fsync interval, are:

"index.translog.durability": "async",
"index.translog.sync_interval": "30s"

 

5. Test the number of documents sent per bulk request; increase or decrease the batch size gradually to find the optimum.
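One way to run this test is to index the same sample documents at several batch sizes and time each run. Below is a minimal Python sketch; the 8.x `elasticsearch` client, the `es` connection, and the `myindex` index name are assumptions, while the `chunks` helper is self-contained:

```python
import time

def chunks(docs, size):
    """Split a list of documents into bulk batches of at most `size`."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

def time_bulk(es, docs, size):
    """Time indexing `docs` in batches of `size` documents.

    `es` is assumed to be an elasticsearch.Elasticsearch client (8.x);
    the index name below is illustrative.
    """
    start = time.monotonic()
    for batch in chunks(docs, size):
        body = []
        for doc in batch:
            body.append({"index": {"_index": "myindex"}})  # auto-generated _id
            body.append(doc)
        es.bulk(operations=body)
    return time.monotonic() - start

# Try progressively larger batch sizes and keep the fastest, e.g.:
# for size in (500, 1000, 2000, 5000, 10000):
#     print(size, time_bulk(es, sample_docs, size))
```

Plotting time against batch size usually shows throughput rising, flattening, then degrading once requests get large enough to pressure the heap.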

 

6. Increase the index buffer settings in the configuration file (elasticsearch.yml)

indices.memory.index_buffer_size: 20%
indices.memory.min_index_buffer_size: 96mb



Newly indexed documents are first held in an in-memory buffer, waiting to be written out as a segment. When the buffer fills, a flush to a disk segment is triggered (costing I/O and CPU). By default the buffer's minimum size is 48MB and its maximum is 10% of the heap, which can be a bit small for write-heavy scenarios.
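To see what the percentage settings above mean in absolute terms, the effective buffer size can be computed directly; the 8GB heap below is purely illustrative:

```python
def index_buffer_bytes(heap_bytes, pct, min_bytes=48 * 1024**2):
    """Effective index buffer: pct of heap, but never below the minimum."""
    return max(int(heap_bytes * pct), min_bytes)

heap = 8 * 1024**3                           # an 8 GB heap, for illustration
default = index_buffer_bytes(heap, 0.10)     # default: 10% of heap
raised = index_buffer_bytes(heap, 0.20)      # with index_buffer_size: 20%
print(default // 1024**2, raised // 1024**2) # sizes in MB
```

On a small heap the 48MB floor dominates, which is why the original post also raises the minimum.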

 

7. Tune the fault-detection configuration between nodes, for example in elasticsearch.yml


Heavy bulk writes consume a lot of network bandwidth and can easily cause heartbeats between nodes to time out, and the default heartbeat interval is quite aggressive (a ping every 1s).
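The actual configuration block appears to have been lost from this copy of the post. A plausible example, using the legacy Zen discovery fault-detection settings that match the 1s default mentioned above (the values are illustrative; in Elasticsearch 7+ these were superseded by the cluster.fault_detection.* settings):

discovery.zen.fd.ping_interval: 10s
discovery.zen.fd.ping_timeout: 60s
discovery.zen.fd.ping_retries: 5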


Relaxing these fault-detection settings greatly eases timeout problems between nodes.

 


Origin www.cnblogs.com/xingxia/p/Elasticsearch_optimize.html