Detailed explanation of Netty components (part 2)

Following the detailed explanation of Netty components in the previous blog post, we continue at the source-code level and explore Netty's various components and the design ideas behind them:

  1. Netty's built-in transports
    When we write Netty code, we usually use the Java NIO transport: NioSocketChannel on the client and NioServerSocketChannel on the server.
    For example, here is a simple Netty client:
 private void start() throws InterruptedException {
        // the client uses the Java NIO communication model
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap client = new Bootstrap();
            client.group(group)
                    .channel(NioSocketChannel.class) // the client uses the Java NIO transport
                    .remoteAddress(new InetSocketAddress(host, port))
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            socketChannel.pipeline().addLast(new EchoClientHandler());
                        }
                    });
            ChannelFuture sync = client.connect().sync();
            sync.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully().sync();
        }
    }

In addition, Netty ships with several other built-in transports:
(1) Epoll. This transport is implemented through JNI calls to the Linux epoll() facility, so it can only run (and be debugged) on Linux. To use it, replace the following two classes:

private void start() throws InterruptedException {
        EventLoopGroup group = new EpollEventLoopGroup(); // the client uses the Epoll transport
        try {
            // bootstrap class required for a client
            Bootstrap client = new Bootstrap();
            client.group(group)
                    .channel(EpollSocketChannel.class) // the client uses the Epoll transport
                    .remoteAddress(new InetSocketAddress(host, port))
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            socketChannel.pipeline().addLast(new EchoClientHandler());
                        }
                    });
            ChannelFuture sync = client.connect().sync();
            sync.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully().sync();
        }
    }

After switching to the Epoll transport, however, we can no longer run or debug on Windows; the program fails at startup with an error.
Both the Epoll transport and the NIO transport are implementations of the Reactor model. The difference is that the Epoll transport takes advantage of Linux-specific features such as zero copy and SO_REUSEPORT, while the NIO transport relies on the JVM-level abstraction that Java NIO provides.
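The choice between the two can also be made at runtime. Below is a minimal sketch (not from the original code; the class name TransportChooser is made up for illustration) that picks the Epoll transport when Netty reports it as available and falls back to NIO otherwise:

import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollSocketChannel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public final class TransportChooser {

    // returns an Epoll event loop group on Linux (native library present), NIO elsewhere
    public static EventLoopGroup newEventLoopGroup() {
        return Epoll.isAvailable() ? new EpollEventLoopGroup() : new NioEventLoopGroup();
    }

    // returns the matching client channel class for the chosen transport
    public static Class<? extends SocketChannel> socketChannelClass() {
        return Epoll.isAvailable() ? EpollSocketChannel.class : NioSocketChannel.class;
    }
}

With this in place, the client from the first example could call client.group(TransportChooser.newEventLoopGroup()).channel(TransportChooser.socketChannelClass()) and run unchanged on both Windows and Linux.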
(2) OIO. The io.netty.channel.socket.oio transport is built on the java.net package and uses blocking streams, i.e. the classic BIO model. This transport is essentially unused nowadays, and Netty has marked its OIO classes as deprecated.
(3) Local. The io.netty.channel.local transport communicates through an in-VM pipe. It is rarely used, because two endpoints that already live in the same JVM can simply share memory directly and have no real need for socket-style calls.
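For completeness, here is a minimal sketch of the Local transport (not from the original post; it reuses the EchoServerHandler and EchoClientHandler classes from the other examples as an assumption). The server binds to a LocalAddress instead of a TCP port, and the client connects to the same address through an in-VM pipe:

import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.DefaultEventLoopGroup;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.local.LocalAddress;
import io.netty.channel.local.LocalChannel;
import io.netty.channel.local.LocalServerChannel;

public class LocalTransportExample {

    public static void main(String[] args) throws InterruptedException {
        LocalAddress address = new LocalAddress("echo-in-vm"); // an in-VM "address", no real port
        EventLoopGroup group = new DefaultEventLoopGroup();    // no NIO/Epoll selector needed
        try {
            ServerBootstrap server = new ServerBootstrap();
            server.group(group)
                    .channel(LocalServerChannel.class)
                    .childHandler(new ChannelInitializer<LocalChannel>() {
                        @Override
                        protected void initChannel(LocalChannel ch) {
                            ch.pipeline().addLast(new EchoServerHandler());
                        }
                    });
            server.bind(address).sync();

            Bootstrap client = new Bootstrap();
            client.group(group)
                    .channel(LocalChannel.class)
                    .handler(new ChannelInitializer<LocalChannel>() {
                        @Override
                        protected void initChannel(LocalChannel ch) {
                            ch.pipeline().addLast(new EchoClientHandler());
                        }
                    });
            client.connect(address).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully().sync();
        }
    }
}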
(4) Embedded. The io.netty.channel.embedded transport allows ChannelHandlers to be exercised without a real network-based transport; it is mostly used for testing handlers.
Here is a test case:

  • We define an encoder handler, EmbeddTestHandler:

public class EmbeddTestHandler extends MessageToMessageEncoder<ByteBuf> {

    @Override
    protected void encode(ChannelHandlerContext channelHandlerContext, ByteBuf byteBuf, List<Object> list) throws Exception {
        // encode by taking the first byte of the backing array
        byte[] array = byteBuf.array();
        String s = Arrays.toString(array);
        String s1 = new String(array, StandardCharsets.UTF_8);
        System.out.println(s);
        System.out.println("=================");
        System.out.println(s1);
        list.add(array[0]);
    }
}
  • Then we define a test class based on EmbeddedChannel:
public class EmbeddTestHandlerTest {

    @Test
    public void testEmbedded(){
        ByteBuf byteBuf = Unpooled.buffer();
        String msg = "北京欢迎您";
        byteBuf.writeBytes(msg.getBytes(StandardCharsets.UTF_8));
        // (2) create an EmbeddedChannel and install the EmbeddTestHandler under test
        EmbeddedChannel channel = new EmbeddedChannel(new EmbeddTestHandler());
        // (3) write the ByteBuf and assert that readOutbound() will produce data
        assertTrue(channel.writeOutbound(byteBuf));
        // (4) mark the channel as finished
        assertTrue(channel.finish());
        // (5) read the produced message and assert that it contains the encoded value
        Byte code = channel.readOutbound();
        Byte checkCode = msg.getBytes(StandardCharsets.UTF_8)[0];
        assertEquals(code, checkCode);
        assertNull(channel.readOutbound());
    }
}

The handler can thus be tested without any of the setup required for real network transmission.
2. The Bootstrap bootstrap classes
Netty provides a bootstrap class for both sides: Bootstrap for the client and ServerBootstrap for the server.
On the server side, ServerBootstrap can use two thread groups, one for accepting connections on the listening port and one for handling the accepted SocketChannels.
The example below defines the boss and worker thread groups. The underlying principle is the master-slave variant of the Reactor model, which I will explore in depth in the blog posts on zero copy and the NIO mechanism.

public void start() throws InterruptedException {
        final MessageCountHandler messageCountHandler = new MessageCountHandler();
        /* use two thread groups */
        EventLoopGroup boss  = new NioEventLoopGroup();
        EventLoopGroup work  = new NioEventLoopGroup();
        try {
            /* bootstrap class required for a server */
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, work) // master-slave Reactor thread model
             .channel(NioServerSocketChannel.class) /* use the NIO transport */
             //.option(ChannelOption.SO_BACKLOG)
             .localAddress(new InetSocketAddress(port)) /* listening port */
             //.childOption(ChannelOption.SO_RCVBUF)
             //.handler()
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new LineBasedFrameDecoder(1024));
                     ch.pipeline().addLast(messageCountHandler); // add a shared handler to the pipeline
                     ch.pipeline().addLast(new EchoServerMCHandler());
                 }
             });
            ChannelFuture f = b.bind().sync(); /* bind asynchronously; sync() blocks until completion */
            LOG.info("server started");
            f.channel().closeFuture().sync(); /* block the current thread until the ServerChannel is closed */
        } finally {
            boss.shutdownGracefully().sync();
            work.shutdownGracefully().sync();
        }
    }
  1. ChannelInitializer
    In the Bootstrap example above, look at this piece of logic:
  .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new LineBasedFrameDecoder(1024));
                    ch.pipeline().addLast(messageCountHandler); // add a shared handler to the pipeline
                    ch.pipeline().addLast(new EchoServerMCHandler());
                }
            });

The ChannelInitializer's initChannel() is run once for every accepted connection, and the handlers it adds belong to that connection's ChannelPipeline. So if a client closes its connection and goes offline, the next connection is a brand-new one: the authorization handler is installed in the new ChannelPipeline again and the authorization check is performed again. A small sketch of such a per-connection handler follows.
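A minimal sketch of such a per-connection authorization handler (AuthHandler and its "token:" check are hypothetical, not taken from the original code): it inspects the first inbound message and then removes itself from the pipeline, so the check runs exactly once per connection, and a reconnecting client goes through it again because the ChannelInitializer installs a fresh instance each time:

import java.nio.charset.StandardCharsets;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class AuthHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf in = (ByteBuf) msg;
        String first = in.toString(StandardCharsets.UTF_8);
        in.release();
        if (first.startsWith("token:")) {   // hypothetical token check
            ctx.pipeline().remove(this);    // authorized: no further checks on this connection
        } else {
            ctx.close();                    // reject unauthorized connections
        }
    }
}

It would be installed inside initChannel(), e.g. ch.pipeline().addLast(new AuthHandler()) before the business handlers.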
  2. ChannelOption
    The ChannelOption constants mainly correspond to the underlying socket options. First look at a usage example:

 private void doStart() throws InterruptedException {
        System.out.println("netty server started");
        // thread group
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            // create the server-side bootstrap class
            ServerBootstrap server = new ServerBootstrap();
            // initialise the server configuration
            server.group(group) // thread group that handles client connections
                    .channel(NioServerSocketChannel.class) // use NioServerSocketChannel as the channel
                    // TCP options for the server channel
                    .option(ChannelOption.SO_LINGER, 100)
                    .option(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT)
                    .option(ChannelOption.SO_BACKLOG, 100)
                    .option(ChannelOption.SO_REUSEADDR, true)
                    .option(ChannelOption.SO_KEEPALIVE, true)
                    .localAddress(port) // server port
                    // TCP options for each accepted child channel
                    .childOption(ChannelOption.SO_SNDBUF, 1024)
                    .childOption(ChannelOption.SO_RCVBUF, 1024)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        // handler for client communication, added to the pipeline during initialisation
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            socketChannel.pipeline().addLast(new EchoServerHandler());
                        }
                    });
            // bind the port; sync() blocks until binding completes
            ChannelFuture sync = server.bind().sync();
            // block the current thread until the ServerChannel is closed
            sync.channel().closeFuture().sync();
        } finally {
            // release resources
            group.shutdownGracefully().sync();
        }
    }

Several important parameters appear here:
(1) ChannelOption.SO_REUSEADDR
ChannelOption.SO_REUSEADDR corresponds to the socket option SO_REUSEADDR and allows the local address and port to be reused.
For example, several network interfaces (IPs) can be bound to the same port. It also helps when a process exits abnormally: the kernel needs some time to release the port the program was using, and during that period the port cannot be bound again unless SO_REUSEADDR is set.
Note, however, that this option does not allow an application to be started twice bound to exactly the same IP + port.
(2) ChannelOption.SO_KEEPALIVE
ChannelOption.SO_KEEPALIVE corresponds to the socket option SO_KEEPALIVE and enables TCP keep-alive probing on the connection. It is intended for connections that may exchange no data for long periods: once the option is set, if no data is transferred for two hours, TCP automatically sends a keep-alive probe packet to test whether the link is still alive.
(3) ChannelOption.SO_SNDBUF and ChannelOption.SO_RCVBUF
ChannelOption.SO_SNDBUF corresponds to the socket option SO_SNDBUF and ChannelOption.SO_RCVBUF to SO_RCVBUF. The receive buffer holds incoming data until the application has read it successfully, and the send buffer holds outgoing data until it has been sent successfully.
(4) ChannelOption.SO_LINGER
ChannelOption.SO_LINGER corresponds to the socket option SO_LINGER. By default the Linux kernel lets close() return immediately and then tries to send any remaining data in the background on a best-effort basis, so delivery of that data is not guaranteed and the outcome is uncertain. With SO_LINGER, close() blocks for up to the configured time until the remaining data has been fully sent.
(5) ChannelOption.TCP_NODELAY
ChannelOption.TCP_NODELAY corresponds to the socket option TCP_NODELAY and relates to the Nagle algorithm. The Nagle algorithm assembles small packets into larger frames before sending instead of transmitting every small write immediately; when there is not yet enough data it waits for more to arrive so that a larger packet can be sent. This improves the effective use of the network but introduces latency. Setting TCP_NODELAY disables the Nagle algorithm and suits the immediate transmission of small messages. Its counterpart is TCP_CORK, which waits until there is as much data as possible to send and then sends it in one go, which is suitable for file transfer.
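As a short sketch (assuming the host, port and EchoClientHandler from the earlier client example), TCP_NODELAY can be enabled on a client Bootstrap like this:

 Bootstrap client = new Bootstrap();
 client.group(new NioEventLoopGroup())
         .channel(NioSocketChannel.class)
         .option(ChannelOption.TCP_NODELAY, true)   // disable the Nagle algorithm: send small writes immediately
         .option(ChannelOption.SO_KEEPALIVE, true)  // probe idle connections
         .remoteAddress(new InetSocketAddress(host, port))
         .handler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel ch) throws Exception {
                 ch.pipeline().addLast(new EchoClientHandler());
             }
         });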

  1. TCP sticky packets and half packets
    During network transmission the client sends packets a, b and c, but the server may receive, for example, the first half of a+b followed by the second half of b+c, never seeing the complete packets a, b and c. This is the half-packet problem: the received packets are incomplete. Conversely, if a and b are small, the server may receive a and b merged into a single packet followed by c; this is the sticky-packet problem.
    The cause is that TCP optimises data transmission: small outgoing packets are merged before sending (for example by the Nagle algorithm) and large ones are split, so messages can be combined or broken apart in transit.
    How can we prevent this? The solution is to add an identifier or delimiter to each message so that the server can recognise a complete packet.
    (1) Newline delimiter for text messages (suitable for messages that form a single line)
  • Client processing: append a carriage return and line feed to every message:
 @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf msg = null;
        String request = "apple,pear,orange"
                + System.getProperty("line.separator"); // append CR/LF to each message
        for (int i = 0; i < 10; i++) {
            msg = Unpooled.buffer(request.length());
            msg.writeBytes(request.getBytes());
            ctx.writeAndFlush(msg);
        }
    }

  • Server-side processing: split the stream on the carriage return/line feed.
    LineBasedFrameDecoder is a handler that Netty has already implemented for us to handle line-based framing:

 private static class ChannelInitializerImp extends ChannelInitializer<Channel> {

        @Override
        protected void initChannel(Channel ch) throws Exception {
            // add the CR/LF handler that checks message completeness
            ch.pipeline().addLast(new LineBasedFrameDecoder(1024));
            ch.pipeline().addLast(new LineBaseServerHandler());
        }
    }

(2) Custom delimiter for text messages (suitable for messages that form a paragraph)

  • Server-side processing: agree on a custom delimiter with the client.
    DelimiterBasedFrameDecoder is a handler that Netty has already implemented for us to handle custom delimiters:
 // the custom delimiter agreed with the client
 public static final String My_SYMBOL = "#";
 private static class ChannelInitializerImp extends ChannelInitializer<Channel> {

        @Override
        protected void initChannel(Channel ch) throws Exception {
            ByteBuf delimiter = Unpooled.copiedBuffer(My_SYMBOL.getBytes());
            // add the custom-delimiter handler on the server side
            ch.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, delimiter));
            ch.pipeline().addLast(new DelimiterServerHandler());
        }
    }
  • Client processing: use the delimiter agreed with the server:
public static final String My_SYMBOL = "#";
 private static class ChannelInitializerImp extends ChannelInitializer<Channel> {

        @Override
        protected void initChannel(Channel ch) throws Exception {
            ByteBuf delimiter = Unpooled.copiedBuffer(My_SYMBOL.getBytes());
            ch.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, delimiter));
            ch.pipeline().addLast(new DelimiterClientHandler());
        }
    }

(3) Fixed-length framing (suitable for binary messages): agree on a fixed length for each message.

  • Client processing: each message sent to the server is encoded with the agreed fixed length:
public final static String REQUEST = "apple.orange,pear";
 @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf msg = null;
        for (int i = 0; i < 10; i++) {
            // allocate a buffer of the fixed length
            msg = Unpooled.buffer(REQUEST.length());
            msg.writeBytes(REQUEST.getBytes());
            ctx.writeAndFlush(msg);
        }
    }
  • Server-side processing: add a fixed-length frame decoder so that complete messages can be recognised:
private static class ChannelInitializerImp extends ChannelInitializer<Channel> {

        @Override
        protected void initChannel(Channel ch) throws Exception {
            // add a fixed-length frame decoder to recognise complete messages
            ch.pipeline().addLast(new FixedLengthFrameDecoder(FixedLengthEchoClient.REQUEST.length()));
            ch.pipeline().addLast(new FixedLengthServerHandler());
        }
    }


(4) The difference between channelRead and channelReadComplete
The channelRead method processes each chunk of data read from the channel, while channelReadComplete is invoked once the current read operation has finished so that follow-up work can be done.
In Netty, channelRead and channelReadComplete are two important methods of the ChannelInboundHandler interface for processing inbound data (data arriving from the remote peer).
channelRead is called every time data is read from the channel. When data arrives from the remote peer, Netty wraps it in a ByteBuf and passes it to the channelRead method of the corresponding ChannelInboundHandler. In this method you can decode, process or otherwise transform the received data.

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // process the received data (msg) here, usually a ByteBuf
    // e.g. decode it, transform it, etc.
}

channelReadComplete is called when the current read operation on the channel is complete. After the data has been handled in channelRead, Netty calls channelReadComplete to notify the handler that reading has finished and that follow-up work can be done, such as sending back a response or releasing resources.

@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
    // follow-up work once this read has completed
    // e.g. send back a response or release resources
}
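A minimal sketch (not from the original post) of how the two callbacks are typically combined: writes are queued in channelRead and flushed once in channelReadComplete, so one flush covers a whole burst of reads:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class EchoFlushHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.write(msg); // echo the data back, but only queue the write for now
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();    // flush everything queued during this read burst in one go
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}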

Origin blog.csdn.net/weixin_43830765/article/details/131796239