Why Elasticsearch can only be allocated a maximum of 32 GB of memory

    How Elasticsearch heap memory is allocated:

        1. Set an environment variable: export ES_HEAP_SIZE=32g (the preferred way)

        2. Pass JVM flags when starting es: -Xms32g -Xmx32g. Xms and Xmx should be set to the same value, so the heap is never resized while the program is running.
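As a minimal sketch (the class name and printed format are my own, not from the source), you can confirm from inside the JVM that the -Xms/-Xmx flags took effect by reading the maximum heap size at runtime:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Start with e.g.: java -Xms31g -Xmx31g HeapCheck
        // Setting Xms equal to Xmx prevents the heap from being
        // resized (and paused for resizing) while the process runs.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes >> 20) + " MB");
    }
}
```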

    

    Why es caps heap memory at 32 GB:

    1. Memory is absolutely critical for Elasticsearch: the more data that fits in memory, the faster operations are. And there is another large memory consumer: Lucene.

Lucene is designed to let the underlying OS cache its data in memory. Lucene segments are each stored in individual files, and those files never change, which makes them very cache-friendly: the operating system keeps hot segment files in its filesystem cache for faster access.

Lucene's performance depends on this interaction with the OS. If you allocate all of your memory to Elasticsearch and leave none for Lucene, your full-text search performance will be poor.

The standard recommendation is therefore to give 50% of memory to Elasticsearch and leave the other 50% free. That half will not go to waste: Lucene will quickly consume whatever is left. And do not exceed 32 GB.
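The sizing rule above can be sketched as a small helper; the method name and the 64 GB example machine are hypothetical, and the 31 GB cap is chosen to stay safely under the 32 GB limit discussed below:

```java
public class HeapSizing {
    // Rule of thumb from the text: give Elasticsearch at most 50% of
    // total RAM, and never cross the ~32 GB compressed-pointer boundary.
    static long recommendedHeapBytes(long totalRamBytes) {
        long half = totalRamBytes / 2;
        long cap = 31L << 30; // 31 GB: safely below the 32 GB limit
        return Math.min(half, cap);
    }

    public static void main(String[] args) {
        long ram = 64L << 30; // hypothetical machine with 64 GB of RAM
        // Half of 64 GB is 32 GB, so the cap applies: prints 31
        System.out.println(recommendedHeapBytes(ram) >> 30);
    }
}
```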

    2. Do not allocate a huge heap to Elasticsearch, because when the JVM heap is below 32 GB it can use compressed object pointers (compressed oops).

In Java, all objects live on the heap and are accessed through pointer references. The size of these pointers is usually the CPU's native word size, either 32 bits or 64 bits depending on your processor, and each pointer holds the exact address of the value it points to.

On a 32-bit system, the maximum addressable memory is 4 GB. A 64-bit system can use far more memory, but 64-bit pointers mean more waste, because the pointers themselves are twice as large. Worse than the wasted memory, those larger pointers also consume more bandwidth as they move between main memory and the CPU caches.

Java uses a technique called compressed object pointers to solve this problem. A pointer no longer stores the object's exact byte address in memory, but rather an offset. This means a 32-bit pointer can reference four billion objects rather than four billion bytes. As a result, the heap can grow to about 32 GB of physical memory while still being addressable with 32-bit pointers.
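The arithmetic behind that limit is simple: HotSpot aligns objects on 8-byte boundaries, so a 32-bit offset addresses 2^32 slots of 8 bytes each. A quick sketch:

```java
public class CompressedOops {
    public static void main(String[] args) {
        // A 32-bit compressed pointer stores an offset in units of the
        // object alignment (8 bytes by default), not a raw byte address.
        long slots = 1L << 32;            // 2^32 addressable objects
        long addressable = slots * 8;     // * 8-byte alignment
        System.out.println(addressable >> 30); // prints 32 (GB)
    }
}
```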

Once you cross that magic 30-32 GB boundary, the JVM switches back to ordinary object pointers. Every object pointer becomes longer, more CPU-to-memory bandwidth is used, and you effectively lose memory. In fact, the heap must reach roughly 40-50 GB before memory is used as effectively as a 32 GB heap running with compressed object pointers.
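On a HotSpot JVM you can check whether compressed pointers are actually in effect for the current heap; with -Xmx set above the ~32 GB boundary this flag flips to false automatically. This sketch is HotSpot-specific (it relies on the com.sun.management diagnostic bean):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class OopsCheck {
    public static void main(String[] args) {
        // Ask HotSpot whether compressed object pointers are in use.
        HotSpotDiagnosticMXBean bean = ManagementFactory
                .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        System.out.println("UseCompressedOops = "
                + bean.getVMOption("UseCompressedOops").getValue());
    }
}
```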

That is the point of this whole discussion: even if you have plenty of memory, try not to exceed a 32 GB heap. Beyond it you waste memory, reduce CPU performance, and force the GC to cope with a very large heap.



Origin blog.51cto.com/12182612/2429606