Netty study notes 2: ByteBuf class structure

While working on my previous project, I often wondered how ByteBuf instances are actually created. Because I was pressed for time, I simply picked whichever method I found first that could create a ByteBuf and used it. Today I want to summarize the topic properly.

First, for the class hierarchy of ByteBuf, see the attached Netty class inheritance diagram.

From the perspective of memory allocation, ByteBuf can be divided into two categories:

(1) Heap ByteBuf (heap byte buffer): memory allocation and reclamation are fast, and the buffer is managed automatically by the JVM garbage collector. The drawback is that socket I/O requires an extra memory copy: the buffer in heap memory must first be copied into the kernel's Channel buffer, which degrades performance to some extent.

(2) Direct ByteBuf (direct-memory byte buffer): the memory is allocated off-heap. Compared with heap memory, allocation and reclamation are somewhat slower, but when writing to or reading from a Socket Channel there is one memory copy fewer, so it is faster than heap memory for I/O.
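Here is a minimal sketch of the two allocation styles using Netty's Unpooled helper (the class name HeapVsDirectDemo and the buffer sizes are just for illustration). hasArray() tells you whether the buffer is backed by an accessible byte[] on the heap:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class HeapVsDirectDemo {
    public static void main(String[] args) {
        // Heap ByteBuf: backed by a byte[] on the JVM heap
        ByteBuf heapBuf = Unpooled.buffer(256);
        System.out.println("heap:   hasArray=" + heapBuf.hasArray()
                + ", isDirect=" + heapBuf.isDirect());   // hasArray=true, isDirect=false

        // Direct ByteBuf: memory allocated outside the JVM heap
        ByteBuf directBuf = Unpooled.directBuffer(256);
        System.out.println("direct: hasArray=" + directBuf.hasArray()
                + ", isDirect=" + directBuf.isDirect()); // hasArray=false, isDirect=true

        // ByteBuf is reference-counted: release a buffer when you are done with it
        heapBuf.release();
        directBuf.release();
    }
}
```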

Experience shows that the best practice is to use DirectByteBuf for the read/write buffers in the I/O communication threads, and HeapByteBuf for the back-end encoding/decoding of business messages.
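A hedged sketch of what that split could look like inside a handler. The class name BestPracticeHandler is hypothetical; ctx.alloc() returns the channel's configured allocator, which on most platforms prefers direct buffers for I/O:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical handler: heap buffer for in-memory message processing,
// direct buffer (from the channel's allocator) for writing back to the socket.
public class BestPracticeHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf in = (ByteBuf) msg;
        try {
            // Heap buffer for "business" work: cheap to allocate, managed by the GC
            ByteBuf decoded = Unpooled.buffer(in.readableBytes());
            decoded.writeBytes(in);
            // ... decode / transform `decoded` here ...

            // Direct buffer for the outbound I/O path: avoids the extra heap-to-kernel copy
            ByteBuf out = ctx.alloc().directBuffer(decoded.readableBytes());
            out.writeBytes(decoded);
            decoded.release();
            ctx.writeAndFlush(out); // Netty releases `out` once it has been written
        } finally {
            in.release();
        }
    }
}
```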

From the perspective of memory reclamation, ByteBuf also falls into two categories: pooled ByteBuf (based on an object pool) and ordinary (unpooled) ByteBuf. The main difference is that a pooled ByteBuf can be reused: the allocator maintains a memory pool and takes back released buffers, which improves memory utilization and reduces the frequent GC caused by high load. It is therefore recommended to use pooled ByteBuf under high load and high concurrency.
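A small sketch contrasting the two allocator types (the class name and buffer sizes are illustrative; both allocators are part of Netty's public API):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

public class PooledVsUnpooledDemo {
    public static void main(String[] args) {
        // Pooled allocator: buffers come from an internal memory pool and are reused
        ByteBufAllocator pooled = PooledByteBufAllocator.DEFAULT;
        ByteBuf pooledBuf = pooled.directBuffer(1024);
        // ... use the buffer ...
        pooledBuf.release(); // returns the memory to the pool instead of freeing it

        // Unpooled allocator: every call allocates a fresh buffer
        ByteBufAllocator unpooled = UnpooledByteBufAllocator.DEFAULT;
        ByteBuf plainBuf = unpooled.heapBuffer(1024);
        // ... use the buffer ...
        plainBuf.release();
    }
}
```

On the channel side, the pooled allocator can be selected through the bootstrap option ChannelOption.ALLOCATOR, e.g. `.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)`.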

 
