Receiving Data: Adaptive Buffers and Capped Reads per Connection

Netty series catalog ( https://www.cnblogs.com/binarylei/p/10117436.html )

So far we have started the server and accepted a client connection, so the two sides can formally communicate. Next we walk through the request flow: receiving data, business processing, and sending data.

1. Main-line analysis

1.1 Key points of reading data

When receiving data we run into two problems:

  1. How big should the receive buffer be? Allocate too much and space is wasted; allocate too little and frequent re-allocation is needed. How can the buffer size adapt automatically?
  2. How do we stay responsive under high concurrency? If a single connection holds the read loop for too long, the number of requests we can serve concurrently drops sharply, so the time spent on any one connection must be bounded. In fact, a key factor in handling high concurrency is keeping the processing time of each request very short.

Let's look at how Netty solves these two problems; this is the core of this section. Sending data faces the same issues (for example, what to do when there is too much data to write), so it is worth comparing this section with the one on sending data.

  1. Adaptive buffer sizing (AdaptiveRecvByteBufAllocator) :

    Guess the next packet size from the sizes of recent reads. AdaptiveRecvByteBufAllocator sizes the ByteBuf by the rule: grow decisively, shrink cautiously (shrinking requires 2 consecutive small reads).

  2. Capped continuous reads (defaultMaxMessagesPerRead) :

    By default each connection reads at most 16 times per OP_READ event; even if some data is temporarily left unread, the event loop moves on to the next connection.
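
Both knobs can be tuned when bootstrapping a server. A minimal sketch, assuming Netty 4.x `ChannelOption`s; the min/initial/max values shown are just the defaults made explicit:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelOption;

// fragment: assumes the usual ServerBootstrap setup (group, channel, handler) around it
ServerBootstrap b = new ServerBootstrap();
// adaptive receive-buffer sizing: min 64 B, initial 1024 B, max 64 KB
b.childOption(ChannelOption.RCVBUF_ALLOCATOR,
        new AdaptiveRecvByteBufAllocator(64, 1024, 65536));
// read at most 16 times per OP_READ event before yielding to other connections
b.childOption(ChannelOption.MAX_MESSAGES_PER_READ, 16);
```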

1.2 Main line

NioEventLoop polls continuously and receives the OP_READ event; the data it reads is then propagated through pipeline.fireChannelRead(byteBuf).

  1. The multiplexer (Selector) receives the OP_READ event.
  2. The OP_READ event is handled in NioSocketChannel.NioSocketChannelUnsafe.read():
    • Allocate a byte buffer to receive the data (1024 bytes initially).
    • Read data from the channel into the byte buffer.
    • Record the actual number of bytes received and use it to adjust the size of the next buffer allocation.
    • Trigger pipeline.fireChannelRead(byteBuf) to propagate the data that was read.
    • Determine whether the byte buffer was read full: if yes, keep trying to read until there is no more data or 16 reads have been done; if no, end the current round of reading and wait for the next OP_READ event.

NioEventLoop#run
    -> processSelectedKeys
        -> AbstractNioByteChannel.NioByteUnsafe#read
            -> NioSocketChannel#doReadBytes
            -> pipeline#fireChannelRead
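
The control flow of this read loop can be sketched in plain Java (a simulation of the loop only, not Netty code; `readFromSocket` is a hypothetical stand-in for the actual channel read):

```java
public class ReadLoopDemo {
    static final int MAX_READS = 16;

    // hypothetical stand-in for the socket: hands out up to 'cap' bytes per call
    static int readFromSocket(int[] remaining, int cap) {
        if (remaining[0] <= 0) return 0;
        int n = Math.min(remaining[0], cap);
        remaining[0] -= n;
        return n;
    }

    // simulates one OP_READ round: read until the buffer is not filled,
    // there is no data, or MAX_READS reads have been done
    static int handleReadEvent(int[] remaining, int bufSize) {
        int reads = 0;
        while (reads < MAX_READS) {
            int n = readFromSocket(remaining, bufSize);
            if (n <= 0) break;        // nothing read: stop
            reads++;                  // pipeline.fireChannelRead would happen here
            if (n < bufSize) break;   // buffer not filled: no more data expected
        }
        return reads;                 // pipeline.fireChannelReadComplete would happen here
    }

    public static void main(String[] args) {
        System.out.println(handleReadEvent(new int[]{3000}, 1024));    // 3
        System.out.println(handleReadEvent(new int[]{100_000}, 1024)); // 16 (capped)
    }
}
```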

1.3 Knowledge points

(1) Where the data is actually read

  • sun.nio.ch.SocketChannelImpl#read(java.nio.ByteBuffer)

(2) The relationship between fireChannelReadComplete and fireChannelRead

  • pipeline.fireChannelReadComplete(): triggered once per read event.
  • pipeline.fireChannelRead(byteBuf): triggered once for each record parsed.

One read may fetch multiple records; each record triggers a fireChannelRead event, but fireChannelReadComplete is triggered only once per read.
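
As a rough illustration (plain Java, not Netty's actual classes), one read event that yields three fixed-length records fires channelRead three times and channelReadComplete once:

```java
import java.util.ArrayList;
import java.util.List;

public class ReadEventDemo {
    // hypothetical listener mirroring the two pipeline events discussed above
    public interface ReadListener {
        void channelRead(byte[] record);   // once per parsed record
        void channelReadComplete();        // once per read event
    }

    // split the bytes fetched by one read into fixed-length records,
    // firing channelRead per record and channelReadComplete at the end
    static void processOneRead(byte[] data, int recordLen, ReadListener l) {
        for (int off = 0; off + recordLen <= data.length; off += recordLen) {
            byte[] record = new byte[recordLen];
            System.arraycopy(data, off, record, 0, recordLen);
            l.channelRead(record);
        }
        l.channelReadComplete();
    }

    public static void main(String[] args) {
        List<String> events = new ArrayList<>();
        processOneRead(new byte[12], 4, new ReadListener() {
            public void channelRead(byte[] r) { events.add("channelRead"); }
            public void channelReadComplete() { events.add("channelReadComplete"); }
        });
        System.out.println(events); // [channelRead, channelRead, channelRead, channelReadComplete]
    }
}
```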

(3) Adaptive buffer sizing

AdaptiveRecvByteBufAllocator guesses the next byteBuf size: grow decisively, shrink cautiously (shrinking requires 2 consecutive small reads).

(4) Handling high concurrency

By default at most 16 reads per connection per event, so that every connection gets its fair share of the event loop.

2. Source code analysis

In the previous section we saw that Netty handles OP_READ and OP_ACCEPT events uniformly. The difference is that accepting a client connection uses NioMessageUnsafe#read, while reading data uses NioByteUnsafe#read.

2.1 Receiving data

We will focus on NioByteUnsafe#read. Each Netty read can be divided into the following steps:

  • Allocate a buffer : 1024 bytes by default; afterwards the next buffer size is guessed from recent packet sizes.
  • Read the data : nothing special here, a direct call into the underlying Java NIO code.
  • Trigger pipeline.fireChannelRead(byteBuf) : business processing.
  • Decide whether to keep reading : there are two criteria. First, the maximum read count (default 16) must not be exceeded; second, the buffer must have been read full each time, e.g. if a 2 KB ByteBuf was allocated, 2 KB must have been read.
@Override
public final void read() {
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
    // reset the counters each time we start reading
    allocHandle.reset(config);

    ByteBuf byteBuf = null;
    boolean close = false;
    try {
        do {
            // 1. allocate a buffer with an adaptive size
            byteBuf = allocHandle.allocate(allocator);
            // 2. receive data from the socket receive buffer
            allocHandle.lastBytesRead(doReadBytes(byteBuf));
            if (allocHandle.lastBytesRead() <= 0) {
                byteBuf.release();
                byteBuf = null;
                close = allocHandle.lastBytesRead() < 0;
                if (close) {
                    readPending = false;
                }
                break;
            }

            allocHandle.incMessagesRead(1);
            readPending = false;
            // 3. trigger event handling
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;
            // 4. decide whether to keep reading
        } while (allocHandle.continueReading());

        allocHandle.readComplete();
        pipeline.fireChannelReadComplete();

        if (close) {
            closeOnRead(pipeline);
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close, allocHandle);
    } finally {
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}

Note: as you can see, the two key jobs in receiving data, adaptive buffer sizing and deciding whether to keep reading, are both delegated to allocHandle. By default, Netty's allocHandle is created by AdaptiveRecvByteBufAllocator.

The doReadBytes method reads data from the socket receive buffer. Before each read it records the buffer's writable size, which is later used to determine whether the buffer was read full and hence whether to keep reading.

// NioSocketChannel
@Override
protected int doReadBytes(ByteBuf byteBuf) throws Exception {
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    // before each read, record the writable size of the buffer, used to tell whether it was read full
    allocHandle.attemptedBytesRead(byteBuf.writableBytes());
    return byteBuf.writeBytes(javaChannel(), allocHandle.attemptedBytesRead());
}

2.2 AdaptiveRecvByteBufAllocator

Before analyzing the code, let's compare ByteBufAllocator and RecvByteBufAllocator:

  • ByteBufAllocator: allocates buffers; these can be pooled or unpooled, direct or heap. The default is a pooled direct buffer (PooledDirectByteBuf).
  • AdaptiveRecvByteBufAllocator: decides the size of each receive buffer to allocate, and whether to continue reading.

AdaptiveRecvByteBufAllocator is only responsible for creating a Handle; the real work is delegated to the Handle. See DefaultChannelConfig for the related default configuration.

(1) Allocating the buffer

@Override
public ByteBuf allocate(ByteBufAllocator alloc) {
    return alloc.ioBuffer(guess());
}

Note: as you can see, buffer allocation is delegated directly to the ByteBufAllocator. AdaptiveRecvByteBufAllocator only decides the allocation size through its guess() method.

(2) Updating the adaptive buffer size

guess() simply returns the nextReceiveBufferSize variable, which defaults to 1024 bytes. Each read is at least 64 bytes and at most 64 KB.

static final int DEFAULT_MINIMUM = 64;
static final int DEFAULT_INITIAL = 1024;
static final int DEFAULT_MAXIMUM = 65536;

After each call to allocHandle.lastBytesRead(doReadBytes(byteBuf)), the size of the packet just read is compared with the buffer size to decide whether the next buffer should grow or shrink.

@Override
public void lastBytesRead(int bytes) {
    // attemptedBytesRead is the writable size of the buffer before the read; bytes is the packet size just read.
    // if they are equal, the buffer was read full, so the socket receive buffer may still hold data; check whether to resize
    if (bytes == attemptedBytesRead()) {
        // core method: decide whether to grow or shrink
        record(bytes);
    }
    super.lastBytesRead(bytes);
}

(3) The adaptive resizing policy

The record method is the core: it implements AdaptiveRecvByteBufAllocator's grow/shrink policy for the buffer.

Before analyzing record, let's look at the candidate buffer sizes. AdaptiveRecvByteBufAllocator uses 512 bytes as a dividing line: below 512 bytes, sizes step by 16 bytes; from 512 bytes upward, sizes double. That gives [16, 32, 48, ..., 496, 512, 1024, 2048, ..., Integer.MAX_VALUE], which is SIZE_TABLE; every allocated buffer size must be a value from this array.
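
A minimal sketch of that table construction in plain Java (mirroring the rule described above, not Netty's actual code, which caps the table differently):

```java
import java.util.ArrayList;
import java.util.List;

public class SizeTableDemo {
    // build the candidate sizes: 16..496 in steps of 16, then doubling from 512 up
    static int[] buildSizeTable() {
        List<Integer> sizes = new ArrayList<>();
        for (int i = 16; i < 512; i += 16) {
            sizes.add(i);
        }
        // doubling region; stop before int overflow
        for (int i = 512; i > 0; i <<= 1) {
            sizes.add(i);
        }
        int[] table = new int[sizes.size()];
        for (int i = 0; i < table.length; i++) table[i] = sizes.get(i);
        return table;
    }

    public static void main(String[] args) {
        int[] t = buildSizeTable();
        System.out.println(t[0]);                 // 16
        System.out.println(t[30] + " " + t[31]);  // 496 512
        System.out.println(t[32]);                // 1024
    }
}
```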

private void record(int actualReadBytes) {
    // shrink
    if (actualReadBytes <= SIZE_TABLE[max(0, index - INDEX_DECREMENT - 1)]) {
        if (decreaseNow) {
            index = max(index - INDEX_DECREMENT, minIndex);
            nextReceiveBufferSize = SIZE_TABLE[index];
            decreaseNow = false;
        } else {
            decreaseNow = true;
        }
    // grow
    } else if (actualReadBytes >= nextReceiveBufferSize) {
        index = min(index + INDEX_INCREMENT, maxIndex);
        nextReceiveBufferSize = SIZE_TABLE[index];
        decreaseNow = false;
    }
}

Note: whenever record decides to grow or shrink, it readjusts the value of nextReceiveBufferSize.

The overall adaptive strategy is: grow decisively, shrink cautiously. That is, shrinking requires the small-read condition to hold on two consecutive reads, while growing needs only one full read. Note also that INDEX_INCREMENT = 4 while INDEX_DECREMENT = 1: in the doubling region of the table, one grow step jumps 4 slots at once (for example from 512 B to 512 * 2^4 = 8 KB), whereas one shrink step moves back only 1 slot (for example from 1024 B to 512 B).
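
The "grow decisively, shrink cautiously" behavior can be demonstrated with a standalone re-implementation of the record logic (a simplified sketch using a doubling-only size table, not Netty's real SIZE_TABLE):

```java
public class AdaptiveResizeDemo {
    // simplified, doubling-only table for illustration
    static final int[] SIZE_TABLE = {64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536};
    static final int INDEX_INCREMENT = 4;
    static final int INDEX_DECREMENT = 1;

    int index = 4;                                  // start at 1024
    int nextReceiveBufferSize = SIZE_TABLE[index];
    boolean decreaseNow = false;

    void record(int actualReadBytes) {
        if (actualReadBytes <= SIZE_TABLE[Math.max(0, index - INDEX_DECREMENT - 1)]) {
            if (decreaseNow) {                      // second consecutive small read: shrink one slot
                index = Math.max(index - INDEX_DECREMENT, 0);
                nextReceiveBufferSize = SIZE_TABLE[index];
                decreaseNow = false;
            } else {                                // first small read: only arm the flag
                decreaseNow = true;
            }
        } else if (actualReadBytes >= nextReceiveBufferSize) {
            index = Math.min(index + INDEX_INCREMENT, SIZE_TABLE.length - 1);
            nextReceiveBufferSize = SIZE_TABLE[index];
            decreaseNow = false;
        }
    }

    public static void main(String[] args) {
        AdaptiveResizeDemo h = new AdaptiveResizeDemo();
        h.record(100);                                   // small read #1: no change yet
        System.out.println(h.nextReceiveBufferSize);     // 1024
        h.record(100);                                   // small read #2: shrink
        System.out.println(h.nextReceiveBufferSize);     // 512
        h.record(512);                                   // buffer read full: grow 4 slots at once
        System.out.println(h.nextReceiveBufferSize);     // 8192
    }
}
```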

(4) Continue reading

private final UncheckedBooleanSupplier defaultMaybeMoreSupplier = () ->
    attemptedBytesRead == lastBytesRead;

@Override
public boolean continueReading(UncheckedBooleanSupplier maybeMoreDataSupplier) {
    return config.isAutoRead() &&
        (!respectMaybeMoreData || maybeMoreDataSupplier.get()) &&
        totalMessages < maxMessagePerRead &&
        totalBytesRead > 0;
}

Note: the default argument to continueReading is defaultMaybeMoreSupplier. Continuing to read requires all of the following:

  1. autoRead = true: the default (see DefaultChannelConfig).
  2. maybeMoreDataSupplier: whether the last read filled the write buffer. If it did, there may be more data, so reading can continue.
  3. maxMessagePerRead: the maximum number of reads, 16 by default. totalMessages is incremented on every read; once it reaches 16, reading stops. This prevents one connection with a large amount of data from hogging resources for too long.
  4. totalBytesRead: the total number of bytes read must be greater than zero.
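
The four conditions above combine into a single predicate; a minimal standalone sketch (not Netty's actual class, parameters flattened for illustration):

```java
public class ContinueReadingDemo {
    // simplified stand-in for the continueReading() check described above
    static boolean continueReading(boolean autoRead,
                                   int attemptedBytesRead, int lastBytesRead,
                                   int totalMessages, int maxMessagesPerRead,
                                   int totalBytesRead) {
        boolean maybeMoreData = attemptedBytesRead == lastBytesRead; // buffer read full
        return autoRead
                && maybeMoreData
                && totalMessages < maxMessagesPerRead
                && totalBytesRead > 0;
    }

    public static void main(String[] args) {
        // buffer filled and under the 16-read cap: keep reading
        System.out.println(continueReading(true, 1024, 1024, 3, 16, 3072));   // true
        // buffer not filled: no more data expected, stop
        System.out.println(continueReading(true, 1024, 500, 3, 16, 2548));    // false
        // 16 reads already done: stop even though the buffer was full
        System.out.println(continueReading(true, 1024, 1024, 16, 16, 16384)); // false
    }
}
```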

Record a little every day, with intention. The content may not be important, but the habit is!

Origin www.cnblogs.com/binarylei/p/12640521.html