Comparing the JVM heap and local memory

Java objects are generally allocated on the JVM heap, and Java relies on JNI to call C code for tasks such as socket communication. So where is the memory that this C code needs at runtime allocated? And can C code manipulate the Java heap directly?

To answer these questions, let us first look at the relationship between the JVM and the user process. To run a compiled Java class, say MyClass, you use the java command:

java MyClass

The java in this command line is actually an executable program; running it creates a JVM, which then loads and runs your Java class.

The operating system creates a process to execute this java program, and every process has its own virtual address space. The memory the JVM uses (including the heap, the stacks, and the method area) is allocated from that process's virtual address space. Note that the JVM's memory is only one part of the process space; the process space also contains the code segment, the data segment, the memory-mapped region, kernel space, and so on. From the JVM's perspective, everything outside the JVM-managed memory is called local memory (native memory), and the memory used by C code at runtime is allocated from this local memory.
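The split is visible from plain Java code: a byte[] lives on the JVM heap, while a direct ByteBuffer's storage comes from local memory. A minimal sketch (the class name MemoryAreas is mine):

```java
import java.nio.ByteBuffer;

public class MemoryAreas {
    public static void main(String[] args) {
        // Allocated on the JVM heap; the GC manages (and may move) it.
        byte[] onHeap = new byte[1024];

        // Storage allocated from local (native) memory, outside the JVM
        // heap; only the small wrapper object itself lives on the heap.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(1024);

        System.out.println(offHeap.isDirect()); // prints: true
        System.out.println(onHeap.length);      // prints: 1024
    }
}
```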


What is the difference between HeapByteBuffer and DirectByteBuffer? A HeapByteBuffer object is itself allocated on the JVM heap, and the byte[] array it holds is allocated on the JVM heap as well.
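You can check this through the public API: ByteBuffer.allocate() returns a heap buffer, and its backing byte[] is reachable via array(). (The class name HeapBufferDemo is my own.)

```java
import java.nio.ByteBuffer;

public class HeapBufferDemo {
    public static void main(String[] args) {
        // allocate() returns a HeapByteBuffer: the object and the
        // byte[] backing it are both on the JVM heap.
        ByteBuffer heap = ByteBuffer.allocate(1024);

        System.out.println(heap.isDirect());     // prints: false
        System.out.println(heap.hasArray());     // prints: true
        System.out.println(heap.array().length); // prints: 1024
    }
}
```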

But if you use a HeapByteBuffer to receive network data, the data must first be copied from the kernel into a temporary piece of local memory, and only then copied from that local memory into the JVM heap; it is not copied from the kernel straight into the JVM heap. Why? Because while data is being copied from the kernel into the JVM heap, the JVM might run a GC, and GC can move objects. If the byte array on the JVM heap were moved mid-copy, the buffer address handed to the kernel would become invalid. With the local-memory staging step, the JVM can guarantee that no GC happens during the copy from local memory into the JVM heap.
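The staging step can be sketched in plain Java. This is only a simplified illustration of the idea, not the JDK's actual implementation (the real logic lives inside sun.nio.ch and reuses a pool of temporary direct buffers):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class HeapReadSketch {
    // Simplified sketch of the extra hop taken when reading into a
    // heap buffer.
    static int read(ReadableByteChannel ch, ByteBuffer heapBuf) throws IOException {
        // 1. Stage into a temporary direct buffer in local memory; its
        //    address is stable, so GC cannot invalidate it mid-read.
        ByteBuffer tmp = ByteBuffer.allocateDirect(heapBuf.remaining());
        int n = ch.read(tmp);
        tmp.flip();
        // 2. Copy from local memory into the heap byte[]; the JVM ensures
        //    no GC relocates the array during this second copy.
        heapBuf.put(tmp);
        return n;
    }

    public static void main(String[] args) throws IOException {
        ReadableByteChannel ch =
                Channels.newChannel(new ByteArrayInputStream(new byte[] {1, 2, 3}));
        ByteBuffer heap = ByteBuffer.allocate(8);
        System.out.println(read(ch, heap)); // prints: 3
    }
}
```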

So with HeapByteBuffer there is an extra staging hop between the kernel and the JVM heap, and DirectByteBuffer exists to eliminate it. The DirectByteBuffer object itself lives on the JVM heap, but the byte storage it holds is allocated from local memory, not from the JVM heap.
A DirectByteBuffer carries a long field named address that records the address of that local memory. When receiving data, this address is passed straight to the C code, which copies the network data from the kernel into the local memory, and the JVM can then read that local memory directly. This saves one copy compared with HeapByteBuffer, so in general it is noticeably faster for I/O.
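With a direct buffer the staging hop disappears: the channel reads straight into the native memory the buffer owns. A small runnable sketch using a temporary file (a FileChannel takes the same no-staging path as a socket channel when the buffer is direct; the file name prefix is arbitrary):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectReadDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("direct-demo", ".bin");
        Files.write(tmp, new byte[] {1, 2, 3, 4});

        // Native memory owned by the buffer; the kernel data is copied
        // into it directly, with no temporary staging buffer in between.
        ByteBuffer buf = ByteBuffer.allocateDirect(16);
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            System.out.println(ch.read(buf)); // prints: 4
        } finally {
            Files.delete(tmp);
        }
    }
}
```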

Then why is HeapByteBuffer still used, given that it performs worse than DirectByteBuffer? Because local memory is not managed by the JVM: it is easy to mismanage, and memory leaks in it are hard to locate. Where stability matters more than raw speed, HeapByteBuffer is the safer choice.

Source: blog.csdn.net/Erica_1230/article/details/106512688