Netty design pattern and source code analysis (3)

accept event

The NioEventLoop.run method eventually calls processSelectedKeys. Once a client connects, the accept event puts entries into selectedKeys, and processSelectedKeys then operates on them.

 private void processSelectedKeys() {
     if (selectedKeys != null) {
         // this branch is executed
         processSelectedKeysOptimized();
     } else {
         processSelectedKeysPlain(selector.selectedKeys());
     }
 }
private void processSelectedKeysOptimized() {
    for (int i = 0; i < selectedKeys.size; ++i) {
        final SelectionKey k = selectedKeys.keys[i];
        // null out entry in the array to allow to have it GC'ed once the Channel close
        // See https://github.com/netty/netty/issues/2363
        selectedKeys.keys[i] = null;

        final Object a = k.attachment();

        if (a instanceof AbstractNioChannel) {
            processSelectedKey(k, (AbstractNioChannel) a);
        } else {
            // NioTask branch omitted
        }
        // needsToSelectAgain handling omitted
    }
}

When a client accept event arrives, this branch is taken: the attachment is the NioServerSocketChannel, and its NioMessageUnsafe.read method is called. In NioMessageUnsafe.read we mainly focus on two calls: doReadMessages and pipeline.fireChannelRead(readBuf.get(i)).

NioServerSocketChannel.doReadMessages

Because we are handling the accept event here, the server receives the client's SocketChannel and wraps that SocketChannel in Netty's NioSocketChannel.

public NioSocketChannel(Channel parent, SocketChannel socket) {
    super(parent, socket);
    config = new NioSocketChannelConfig(this, socket.socket());
}

Continue up the constructor chain

What is being listened for at this point is the read event, consistent with plain NIO. The superclass constructors record OP_READ as the interest op, associate the current JDK SocketChannel with Netty's NioSocketChannel, and initialize the pipeline and unsafe.

protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) {
    super(parent, ch, SelectionKey.OP_READ);
}
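What these constructors accomplish matches what we would write by hand in plain JDK NIO: accept the connection, make the channel non-blocking, and register OP_READ with the selector. A minimal sketch of that sequence (class and method names here are ours, not Netty's):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class PlainNioAccept {

    // Accept one loopback client and register the accepted channel for OP_READ,
    // mirroring the interest op the AbstractNioByteChannel constructor records.
    public static int acceptAndRegisterRead() {
        try (ServerSocketChannel server = ServerSocketChannel.open();
             Selector selector = Selector.open();
             SocketChannel client = SocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            client.connect(server.getLocalAddress());           // blocking connect
            try (SocketChannel accepted = server.accept()) {    // the "accept event"
                accepted.configureBlocking(false);              // Netty channels are non-blocking
                SelectionKey key = accepted.register(selector, SelectionKey.OP_READ);
                return key.interestOps();                       // == SelectionKey.OP_READ
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(acceptAndRegisterRead() == SelectionKey.OP_READ);
    }
}
```

Netty does the same work, but spread across the NioSocketChannel constructor chain, with the actual selector registration deferred until the channel is registered on its event loop.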

Store the newly encapsulated NioSocketChannel into the readBuf collection.

pipeline.fireChannelRead(readBuf.get(i))

Execute the channelRead method of the pipeline. Note that this is the pipeline of the NioServerSocketChannel, which, as we said before, roughly looks as shown below. The msg parameter is the NioSocketChannel we fetched from the readBuf collection. As mentioned earlier, channelRead is executed along the pipeline: head's channelRead simply calls the superclass method, which by default continues with fireChannelRead (i.e. invokes channelRead on the next inbound handler). So we mainly focus on the channelRead method of ServerBootstrapAcceptor.

head --> ServerBootstrapAcceptor --> tail
Review the code: in the logic executed when the server channel was registered (previous section), ServerBootstrapAcceptor was added to the pipeline. Note that its currentChildHandler parameter is the ChannelInitializer object we defined ourselves; the handlers inside that ChannelInitializer have not been added to the pipeline yet.


ServerBootstrapAcceptor.channelRead

From the previous code we know that childHandler is the ChannelInitializer object we created. This is what the pipeline of the client's NioSocketChannel looks like at this point:

head -> ChannelInitializer (this is a ChannelInitializer object defined by ourselves) -> tail

Note that the initChannel of ChannelInitializer has not been called yet, so our custom handler has not been added to the pipeline. Let's continue to look at the code.

childGroup.register

Having just finished with the pipeline, we continue with the code. What is childGroup? It is the worker thread group.
A NioEventLoop is taken from the worker group and its register method is executed.

@Override
public ChannelFuture register(Channel channel) {
    return next().register(channel);
}
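next() picks the event loop that the new channel will live on. In Netty this is done by an executor chooser that hands out the group's loops round-robin; the idea can be sketched in a few lines (the class below is an illustration, not Netty code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinChooser {
    private final String[] loops;                // stand-ins for the NioEventLoops
    private final AtomicInteger idx = new AtomicInteger();

    public RoundRobinChooser(String[] loops) {
        this.loops = loops;
    }

    // Each call hands back the next loop in turn, so channels are
    // spread evenly across the group — the same idea as Netty's next().
    public String next() {
        return loops[Math.abs(idx.getAndIncrement() % loops.length)];
    }

    public static void main(String[] args) {
        RoundRobinChooser group = new RoundRobinChooser(new String[] {"loop-0", "loop-1", "loop-2"});
        for (int i = 0; i < 6; i++) {
            System.out.println(group.next());
        }
    }
}
```

Once chosen, a channel stays bound to that one loop for its whole life, which is what makes the later "serialized, lock-free" design possible.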

Same as for NioServerSocketChannel:

@Override
public ChannelFuture register(Channel channel) {
    return register(new DefaultChannelPromise(channel, this));
}

Finally, this is where the NioEventLoop is associated with the NioSocketChannel, after which the registration is performed; through it the read event ultimately gets registered.
NioSocketChannel registration event

Review the code of these steps again.

doRegister();
NioSocketChannel registers the underlying JDK SocketChannel with the selector. Note that this selector is the one associated with the worker NioEventLoop. (The channel is registered with interest ops 0 here; OP_READ is actually set later, in doBeginRead.)

pipeline.invokeHandlerAddedIfNeeded();
Through the previous analysis, we know that the pipeline of the NioSocketChannel currently looks like this:

head -> ChannelInitializer -> tail

In this step, the initChannel method of the ChannelInitializer is called, after which the ChannelInitializer removes itself from the pipeline. The NioSocketChannel pipeline then looks like this:

head -> serverHandler -> tail
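The swap that ChannelInitializer performs — run initChannel once, then remove itself — can be sketched with a pipeline reduced to an ordered list of handler names (everything below is an illustration, not Netty code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InitializerSketch {

    // Start state mirrors: head -> ChannelInitializer -> tail
    public static List<String> runInitChannel(List<String> pipeline) {
        List<String> p = new ArrayList<>(pipeline);
        int i = p.indexOf("ChannelInitializer");
        if (i >= 0) {
            // initChannel: add the user's handler where the initializer sits...
            p.add(i, "serverHandler");
            // ...then the initializer removes itself from the pipeline.
            p.remove("ChannelInitializer");
        }
        return p;
    }

    public static void main(String[] args) {
        List<String> before = Arrays.asList("head", "ChannelInitializer", "tail");
        System.out.println(runInitChannel(before)); // [head, serverHandler, tail]
    }
}
```

This one-shot behavior is why a single ChannelInitializer instance can safely be shared as the childHandler: it leaves no trace in the per-connection pipeline once it has run.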
pipeline.fireChannelRegistered()

The method fired when client registration succeeds.

Starting from head, the channelRegistered method of the next inbound handler is executed. If our handler overrides channelRegistered, it runs at this point; debug the code for the details.

pipeline.fireChannelActive();

The method fired when the client connection becomes active.

Starting from head, the channelActive method of the next inbound handler is executed. If our handler overrides channelActive, it runs at this point; debug the code for the details.

At this point, handling of the accept event is complete and the new channel is registered.

read event

When the connection between client and server has been established, that is, after accept, the client can initiate a data request, and now the read event is executed. Note that the thread executing the read event is a worker thread: only worker threads handle read and write events, while the boss thread is only responsible for accepting connections and handing each accepted channel to a worker, where the read event is registered. The default number of worker threads is twice the number of CPU cores. The accept event is very cheap, so one boss thread can handle thousands of connections, while workers are picked one by one, round-robin, by the next() method.
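The "twice the number of CPU cores" default mentioned above is just a sizing rule; stated as a plain calculation (the constant 2 matches the default Netty uses when NioEventLoopGroup is constructed without a thread count):

```java
public class DefaultWorkerCount {

    // Netty's default worker sizing: availableProcessors * 2 threads
    // when no explicit thread count is passed to the group.
    public static int defaultWorkerThreads(int availableProcessors) {
        return availableProcessors * 2;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("boss threads: 1 (accept only)");
        System.out.println("worker threads: " + defaultWorkerThreads(cores));
    }
}
```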
The read event proceeds as before: data is first read into direct memory. Direct memory has the advantage of saving the extra copy between the JVM heap and native memory that heap buffers require during I/O.

The pipeline here belongs to the NioSocketChannel and looks like this:

head -> serverHandler -> tail

byteBuf = allocHandle.allocate(allocator): allocate a buffer in direct memory
allocHandle.lastBytesRead(doReadBytes(byteBuf)): write the channel data into the direct-memory buffer

pipeline.fireChannelRead(byteBuf)
Starts reading the data: beginning from head, the next inbound handler is executed with the buf passed as the parameter.

pipeline.fireChannelReadComplete();
The method fired when reading is complete; again the next inbound handler is executed, beginning from head.

Note: if a handler wants to pass the event further down the pipeline, it must call the corresponding ctx.fireXXXX() method (or let the superclass implementation do so).
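This propagation rule can be shown with a tiny inbound-pipeline sketch: a handler's event only reaches the next handler if it explicitly fires it onward (all names below are illustrative stand-ins, not Netty classes):

```java
import java.util.ArrayList;
import java.util.List;

public class PropagationSketch {

    interface Inbound {
        void channelRead(Ctx ctx, Object msg);
    }

    // Minimal stand-in for ChannelHandlerContext: knows only the next handler.
    static class Ctx {
        final Inbound next;
        final Ctx nextCtx;
        Ctx(Inbound next, Ctx nextCtx) { this.next = next; this.nextCtx = nextCtx; }
        void fireChannelRead(Object msg) {
            if (next != null) next.channelRead(nextCtx, msg);
        }
    }

    public static List<String> run(boolean propagate) {
        List<String> seen = new ArrayList<>();
        Inbound second = (ctx, msg) -> seen.add("second:" + msg);
        Inbound first = (ctx, msg) -> {
            seen.add("first:" + msg);
            if (propagate) ctx.fireChannelRead(msg); // pass the event down the pipeline
        };
        Ctx secondCtx = new Ctx(null, null);
        Ctx firstCtx = new Ctx(second, secondCtx);
        first.channelRead(firstCtx, "hello");
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(run(true));  // both handlers see the message
        System.out.println(run(false)); // only the first one does
    }
}
```

If the first handler does not fire the event onward, the second handler (and tail) never see it — exactly the behavior to watch for when a custom handler seems to "swallow" reads.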


At this point, the whole Netty connection and read flow is complete.

Summary

What we have learned from Netty:

  • Master-slave Reactor threading model
  • NIO multiplexing non-blocking
  • Lock-free serialization design ideas
  • Support high-performance serialization protocol
  • Zero copy (use of direct memory)
  • ByteBuf memory pool design
  • Flexible TCP parameter configuration capability
  • Concurrency optimization

Support high-performance serialization protocol

Netty supports Java object serialization: you can transmit Java objects directly, and Netty will perform the serialization and deserialization automatically.

Lock-free serialization design ideas

In most scenarios, parallel multi-threaded processing improves a system's concurrency. However, if concurrent access to shared resources is handled badly, it causes severe lock contention, which ultimately degrades performance. To avoid this loss as much as possible, a serialized design can be used: a message is processed within a single thread from start to finish, with no thread switching in between, which avoids multi-thread contention and synchronization locks.

To maximize performance, Netty adopts a serialized, lock-free design, performing serial operations inside the I/O thread to avoid the performance loss caused by multi-thread contention. On the surface, this design seems to underuse the CPU and limit concurrency. However, by tuning the thread parameters of the NIO thread pool, multiple serialized threads can run in parallel at the same time; this locally lock-free design of serial threads performs better than the one-queue/many-worker-threads model.

After Netty's NioEventLoop reads a message, it directly calls ChannelPipeline's fireChannelRead(Object msg). As long as the user does not actively switch threads, the NioEventLoop keeps calling through to the user's handler without any thread switch. This serialized processing avoids the lock contention caused by multi-threaded access and is optimal from a performance point of view.
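The effect can be demonstrated with a plain single-thread executor standing in for an event loop: because every task for a "channel" runs on the same thread, even a plain, unsynchronized counter stays correct (a sketch of the principle, not Netty code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerializedEventLoop {

    // No volatile, no lock: safe only because a single thread ever touches it,
    // which is exactly the guarantee a NioEventLoop gives its channels.
    private int counter;

    public int runTasks(int tasks) {
        ExecutorService loop = Executors.newSingleThreadExecutor();
        for (int i = 0; i < tasks; i++) {
            loop.execute(() -> counter++); // all mutations serialized on one thread
        }
        loop.shutdown();
        try {
            // awaitTermination establishes the happens-before needed to read counter here
            loop.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(new SerializedEventLoop().runTasks(10_000));
    }
}
```

Run the same increments from multiple threads without synchronization and updates get lost; on a single serialized thread no lock is ever needed.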

Use of direct memory

Advantages

  • Does not occupy heap space, reducing GC pressure
  • In the Java virtual machine implementation, native I/O operates on direct memory directly (direct memory => system call => disk/network card), while non-direct memory needs an extra copy (heap memory => direct memory => system call => disk/network card)

Disadvantages

  • Initial allocation is slow
  • Since the JVM does not manage this memory directly, memory overflow is easier to hit: if no full GC happens for a long time, direct memory can eventually exhaust physical memory. We can cap it with -XX:MaxDirectMemorySize; when the threshold is reached, System.gc() is called to trigger a full GC and indirectly reclaim unused direct memory.
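In plain JDK terms the distinction looks like this: a direct buffer lives outside the heap (its total size is what -XX:MaxDirectMemorySize caps), while a heap buffer is an ordinary byte[]-backed object. A minimal illustration:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {

    public static ByteBuffer allocate(int capacity, boolean direct) {
        // Direct buffers live outside the Java heap; heap buffers are
        // byte[]-backed objects subject to normal GC.
        return direct ? ByteBuffer.allocateDirect(capacity) : ByteBuffer.allocate(capacity);
    }

    public static void main(String[] args) {
        ByteBuffer direct = allocate(1024, true);
        ByteBuffer heap = allocate(1024, false);
        System.out.println(direct.isDirect()); // true
        System.out.println(heap.hasArray());   // true: backed by an on-heap byte[]
    }
}
```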

ByteBuf memory pool design

With the development of the JVM and JIT compilation, allocating and collecting objects has become very lightweight work. For a buffer (essentially a block of memory), however, the situation is somewhat different: allocating and reclaiming off-heap direct memory in particular is a time-consuming operation. To reuse buffers as much as possible, Netty provides a buffer-reuse mechanism based on a ByteBuf memory pool: take a ByteBuf from the pool when you need one, and put it back when you are done.

Example: when reading data from the channel into a buf, Netty uses the memory pool together with direct memory.
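The pooling idea — pay the expensive allocation once, then reuse it — can be sketched in a few lines (this illustrates only the principle and is far simpler than Netty's PooledByteBufAllocator):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

public class BufferPoolSketch {
    private final Deque<ByteBuffer> pool = new ArrayDeque<>();
    private final int capacity;

    public BufferPoolSketch(int capacity) {
        this.capacity = capacity;
    }

    // Reuse a pooled buffer if one is available; only allocate when the pool is empty.
    public ByteBuffer acquire() {
        ByteBuffer buf = pool.pollFirst();
        return buf != null ? buf : ByteBuffer.allocateDirect(capacity);
    }

    // Reset the buffer and put it back so the next acquire() can reuse it.
    public void release(ByteBuffer buf) {
        buf.clear();
        pool.addFirst(buf);
    }

    public static void main(String[] args) {
        BufferPoolSketch pool = new BufferPoolSketch(1024);
        ByteBuffer first = pool.acquire();   // allocated
        pool.release(first);
        ByteBuffer second = pool.acquire();  // reused: same instance as 'first'
        System.out.println(first == second); // true
    }
}
```

The real allocator adds size classes, per-thread caches, and reference counting on top of this basic acquire/release cycle.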

Concurrency optimization

  • Extensive and correct use of volatile
  • Widespread use of CAS and atomic classes
  • Use of thread-safe containers
  • Improving concurrency through read-write locks

flow chart


Origin blog.csdn.net/qq_37904966/article/details/111304910