Netty Framework Learning and Spring Boot Integration

I first encountered Netty a long time ago and have used it for network communication in several projects, including integrations with Internet of Vehicles devices and with security hardware devices, so I have long wanted to study the Netty framework and its implementation principles systematically. My earlier study of zero-copy techniques was also preparation for learning Netty. This article is therefore compiled from online technical materials combined with cases I have used before, and is recorded here both as groundwork for future in-depth study and as a reference for those who come after. Omissions are inevitable; corrections from readers are greatly appreciated!

1. Basic concepts of Netty

Netty is a Java open-source framework originally provided by JBoss and now an independent project on GitHub. It is an asynchronous, event-driven network application framework and toolset for the rapid development of high-performance, high-reliability network servers and clients.
In other words, Netty is an NIO-based client/server programming framework. With Netty you can quickly and easily develop a network application, such as a client or server that implements a particular protocol. Netty simplifies and streamlines the development of network applications, such as socket services over TCP and UDP.
"Quick and easy" does not have to mean maintainability or performance problems. Netty absorbs the implementation experience of many protocols (including binary and text-based protocols such as FTP, SMTP, and HTTP) and has been designed quite carefully. As a result, Netty has found a way to ensure ease of development without sacrificing performance, stability, or scalability.

Features

(1) A high-performance, asynchronous event-driven NIO framework with support for TCP, UDP, and file transfer.
(2) A more efficient socket layer: the CPU spike caused by the JDK epoll empty-polling bug is handled internally, avoiding the pitfalls of using NIO directly and simplifying NIO programming.
(3) A variety of decoder/encoder implementations that automatically handle TCP packet sticking and splitting (framing).
(4) Separate accept/worker thread pools for better connection efficiency, plus simple support for reconnection and heartbeat detection.
(5) Configurable IO thread counts and TCP parameters; TCP receive and send buffers use direct memory instead of heap memory, and ByteBuf instances are recycled through a memory pool.
(6) Reference counting that promptly releases objects no longer referenced, reducing GC frequency.
(7) Single-threaded serialized processing per channel and an efficient Reactor thread model.
(8) Extensive use of volatile, CAS and atomic classes, thread-safe classes, and read-write locks.
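To make features (5) and (6) concrete, here is a minimal sketch of Netty's pooled, reference-counted ByteBuf; the buffer size and payload are arbitrary example values:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.CharsetUtil;

public class ByteBufRefCountDemo {
    public static void main(String[] args) {
        // Allocate from the pooled allocator; the buffer may live in direct (off-heap) memory
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(256);
        buf.writeBytes("hello netty".getBytes(CharsetUtil.UTF_8));
        System.out.println("refCnt = " + buf.refCnt()); // 1 after allocation
        buf.release(); // refCnt -> 0: the buffer is returned to the pool instead of waiting for GC
        System.out.println("refCnt = " + buf.refCnt()); // 0
    }
}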

Performance

(1) Higher throughput and lower latency.
(2) Reduced resource consumption.
(3) Minimized unnecessary memory copies.

Security

Complete SSL/TLS and StartTLS support.
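As a small illustration of that support, the sketch below shows one common way to enable TLS in a Netty pipeline, using a self-signed certificate suitable for testing only; the helper method name is mine, while the classes come from Netty's netty-handler module:

import io.netty.channel.socket.SocketChannel;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.util.SelfSignedCertificate;

public class TlsPipelineSketch {
    public static void addTls(SocketChannel ch) throws Exception {
        // Self-signed certificate for testing; use a real certificate in production
        SelfSignedCertificate cert = new SelfSignedCertificate();
        SslContext sslCtx = SslContextBuilder
                .forServer(cert.certificate(), cert.privateKey())
                .build();
        // The SSL handler should sit at the head of the pipeline so it sees raw bytes first
        ch.pipeline().addFirst(sslCtx.newHandler(ch.alloc()));
    }
}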

2. Netty Framework

2.1 Netty framework structure

(Architecture diagram from the official Netty website.)

As an asynchronous event-driven network framework, Netty's high performance comes mainly from its I/O model and its thread-handling model: the former determines how data is sent and received, the latter how data is processed.

2.2 Netty NIO

  1. NIO concept
    NIO is synchronous non-blocking IO. In the server implementation pattern, one thread handles multiple requests: connection requests sent by clients are registered on a multiplexer, and the multiplexer polls the connections and processes their IO requests.

  2. NIO core components
    NIO has three core parts: Channel, Buffer, and Selector.
    NIO operates on channels and buffers: data is always read from a channel into a buffer or written from a buffer to a channel. The Selector monitors events on multiple channels (for example, a connection opening or data arriving), so a single thread can watch many data pipes. The relationship between Selector, Channel, and Buffer (see the sketch after this list):
    1) Each channel corresponds to a buffer.
    2) A selector corresponds to one thread, and that thread serves multiple channels.
    3) Which channel the program switches to is determined by events.
    4) The selector switches between channels according to the events that occur.
    5) A buffer is a block of memory backed by an array.
    6) Data is read and written through the buffer, which can be switched between read and write mode with the flip method; BIO streams, by contrast, are one-way, either an input stream or an output stream.
    7) A channel is bidirectional, reflecting the underlying operating system: on Linux, for example, the underlying OS channel is also bidirectional.
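The following is a minimal sketch, in plain Java NIO, of how these three components cooperate in a single-threaded echo server; the port number and buffer size are arbitrary example values:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9999));
        server.configureBlocking(false);                     // the Selector requires non-blocking channels
        server.register(selector, SelectionKey.OP_ACCEPT);   // interested in new connections

        while (true) {
            selector.select();                               // block until some channel has an event
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                    // the event decides which channel we handle
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024); // each channel works against a buffer
                    int n = client.read(buf);                   // channel -> buffer
                    if (n > 0) {
                        buf.flip();                             // switch the buffer from write mode to read mode
                        client.write(buf);                      // buffer -> channel (echo back)
                    } else if (n < 0) {
                        client.close();
                    }
                }
            }
        }
    }
}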

  3. The Selector has three underlying implementations.
    Linux supports IO multiplexing via select, poll, and epoll; epoll was ultimately chosen.
    The benefits of epoll:
    1) No hard limit on the number of fds a process can open (still bounded, of course, by the OS's maximum handle count).
    The biggest defect of select: the number of FDs a single process can open is limited, 1024 by default, which is far too small for a server that must support tens of thousands of TCP connections.
    cat /proc/sys/fs/file-max shows the maximum handle count; a machine with 1 GB of memory supports roughly 100,000 handles.
    2) IO efficiency does not degrade linearly as the number of FDs grows.
    Traditional select/poll: when the socket set is very large, network latency and idle links mean only a few sockets are active at any moment, yet select/poll scans the whole set linearly, so efficiency drops linearly.
    epoll only operates on active sockets: it is implemented with callbacks on each fd, so only active sockets invoke their callbacks while idle sockets cost nothing.
    When most sockets are active, select/poll can be more efficient than epoll; when only a few sockets are active, epoll is more efficient.
    3) mmap is used to accelerate message passing between kernel and user space.
    epoll shares the same block of memory between kernel and user space via mmap.
    4) The epoll API is comparatively simple.
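Netty can use epoll directly through its optional native transport on Linux. As a sketch (assuming the netty-transport-native-epoll dependency with the linux-x86_64 classifier is on the classpath), swapping the NIO transport for the native epoll transport looks like this, with a fallback to NIO on other platforms:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class TransportChoice {
    public static ServerBootstrap newBootstrap() {
        // Fall back to NIO when the native epoll transport is unavailable (e.g. non-Linux)
        boolean epoll = Epoll.isAvailable();
        EventLoopGroup boss = epoll ? new EpollEventLoopGroup(1) : new NioEventLoopGroup(1);
        EventLoopGroup worker = epoll ? new EpollEventLoopGroup() : new NioEventLoopGroup();
        return new ServerBootstrap()
                .group(boss, worker)
                .channel(epoll ? EpollServerSocketChannel.class : NioServerSocketChannel.class);
    }
}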


2.3 Reactor threading model

The Netty threading model is a typical Reactor model structure.

  1. Reactor threading models
    There are three commonly used Reactor threading models: the single-threaded Reactor model, the multi-threaded Reactor model, and the master-slave multi-threaded Reactor model.
    1) Single-threaded Reactor model
    In the single-threaded Reactor model, all IO operations are completed on the same NIO thread. As an NIO server it accepts clients' TCP connections; as an NIO client it initiates TCP connections to servers; and it reads requests from the peer and sends message requests or responses to the peer.
    Since the Reactor pattern uses asynchronous non-blocking IO, no IO operation blocks, so in theory one thread can handle all IO-related operations on its own.
    2) Multi-threaded Reactor model
    The single-threaded model works for some small-capacity scenarios, but it is not suitable for high-load, high-concurrency applications; for those, it evolves into the multi-threaded Reactor model.
    The biggest difference from the single-threaded model is that a pool of NIO threads handles the IO operations.
    In this model there is a dedicated NIO thread, the Acceptor thread, that listens on the server and accepts clients' TCP connection requests. One NIO thread can handle N links at the same time, but each link is bound to exactly one NIO thread, which prevents concurrent-access problems.
    Network IO operations (reads, writes, and so on) are handled by an NIO thread pool, which can be implemented with a standard JDK thread pool containing a task queue and N available threads; these NIO threads take care of reading, decoding, encoding, and sending messages.
    3) Master-slave multi-threaded Reactor model
    Under extremely high concurrency, a single Acceptor thread may run out of headroom; the master-slave multi-threaded Reactor model was created to solve this performance problem.
    Its defining characteristic is that the server no longer uses a single NIO thread to accept client connections, but an independent NIO thread pool.
    After the Acceptor accepts a client TCP connection and finishes processing it, it registers the newly created SocketChannel with one IO thread in the IO thread pool (the sub-reactor thread pool), which then handles that SocketChannel's reads, writes, encoding, and decoding.
    The Acceptor thread pool is used only for client login, handshake, and security authentication. Once a link is established, it is registered with an IO thread of the back-end subReactor thread pool, and that IO thread is responsible for all subsequent IO.

  2. Netty thread model
    The Netty thread model is based on the master-slave Reactor multi-threaded model, with some optimizations on top:
    1) The BossGroup thread pool maintains the main Selector and focuses only on accept events.
    2) On an accept event, it obtains the corresponding SocketChannel, wraps it into a NioSocketChannel, and registers it with a Worker thread's event loop.
    3) When a worker thread detects an event it is interested in, it hands the event to the handler for processing.

  3. Netty Reactor thread execution flow
    (Diagram from the referenced blog; see References.)

1. Netty abstracts two thread pools: the BossGroup is responsible for listening for and establishing connections; the WorkerGroup is responsible for network IO reads and writes.
2. Both BossGroup and WorkerGroup are NioEventLoopGroup instances, each equivalent to an event-loop group containing multiple event loops; each event loop is a NioEventLoop.
3. A NioEventLoop represents a selector, on which the sockets bound to it are monitored for network communication.
4. Each Boss NioEventLoop cycle executes 3 steps:
a. Poll for accept events.
b. Establish the connection, create a NioSocketChannel, and register it with the workerGroup.
c. Process the tasks in the task queue, i.e. runAllTasks.
5. Each Worker NioEventLoop cycle executes 3 steps:
a. Poll for read and write events.
b. Process IO events on the corresponding NioSocketChannel.
c. Process the tasks in the task queue, i.e. runAllTasks (see the sketch below).
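As a minimal sketch of the runAllTasks step, user code can submit ordinary and scheduled tasks to a channel's own event loop; the handler below is an illustrative placeholder, not part of the example project:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.TimeUnit;

public class TaskQueueHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // An ordinary task: it runs later on this channel's own NioEventLoop thread,
        // so no extra synchronization is needed (assumes a String encoder in the pipeline)
        ctx.channel().eventLoop().execute(() -> ctx.writeAndFlush("processed asynchronously"));
        // A scheduled task: useful for heartbeats, timeouts, or delayed responses
        ctx.channel().eventLoop().schedule(
                () -> ctx.writeAndFlush("delayed message"),
                5, TimeUnit.SECONDS);
    }
}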

3. Integrating Netty with Spring Boot

3.1 Introducing the jar dependency

<dependency>
   <groupId>io.netty</groupId>
   <artifactId>netty-all</artifactId>
   <version>4.1.28.Final</version>
</dependency>

3.2 Server

// Netty server class
@Component
public class NettyServer {

    @Value("${netty-port}")
    private int port;

    public void start() throws InterruptedException {
        /**
         * Create two thread groups, bossGroup and workerGroup.
         * bossGroup only handles connection requests; the actual business
         * interaction with clients is handed to workerGroup.
         * Both run infinite event loops.
         */
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // Create the server bootstrap object and configure its parameters
            ServerBootstrap bootstrap = new ServerBootstrap();
            // Set the two thread groups
            bootstrap.group(bossGroup, workerGroup)
                    // Use NioServerSocketChannel as the server channel implementation
                    .channel(NioServerSocketChannel.class)
                    // Set the backlog queue size for pending connections
                    .option(ChannelOption.SO_BACKLOG, 128)
                    // Enable TCP keep-alive on accepted connections
                    .childOption(ChannelOption.SO_KEEPALIVE, true)
                    // Disable Nagle's algorithm via TCP_NODELAY so messages are sent
                    // immediately instead of waiting for data to accumulate
                    .childOption(ChannelOption.TCP_NODELAY, true)
                    // Add a logging handler for the bossGroup
                    .handler(new LoggingHandler(LogLevel.INFO))
                    // Set the pipeline handlers for each channel served by the workerGroup EventLoops
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        // Add handlers to the pipeline
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            ChannelPipeline pipeline = socketChannel.pipeline();
                            // Inbound: the frame decoder must come before the String decoder so it
                            // sees complete frames; it reads a 2-byte length field and strips it
                            pipeline.addLast(new LengthFieldBasedFrameDecoder(24 * 1024, 0, 2, 0, 2));
                            // Decodes the framed bytes into String objects (inbound handler)
                            pipeline.addLast(new StringDecoder());
                            // Outbound: StringEncoder turns String objects into bytes, then
                            // LengthFieldPrepender prepends the 2-byte length field
                            pipeline.addLast(new LengthFieldPrepender(2));
                            pipeline.addLast(new StringEncoder());
                            pipeline.addLast(new NettyServerHandler());
                        }
                    });

            // Start the server: bind the port and wait synchronously for the ChannelFuture
            ChannelFuture cf = bootstrap.bind(port).sync();
            if (cf.isSuccess()) {
                System.out.println("socket server start---------------");
            }
            // Listen for the close event and block until the server channel is closed
            cf.channel().closeFuture().sync();
        } finally {
            // Shut down both event loop groups gracefully
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}

// Handler class
public class NettyServerHandler extends SimpleChannelInboundHandler<Object> {
    private static final Logger log = LoggerFactory.getLogger(NettyServerHandler.class);

    @Override
    protected void channelRead0(ChannelHandlerContext context, Object obj) throws Exception {
        log.info(">>>>>>>>>>> server received a message from the client: {}", obj);
        SocketChannel socketChannel = (SocketChannel) context.channel();
        // Send a response back to the client
        Map<String, String> map = new HashMap<>();
        map.put("msg", "I am the server and I received your message");
        socketChannel.writeAndFlush(JSON.toJSONString(map));
        // No manual release needed: SimpleChannelInboundHandler releases the
        // message automatically after channelRead0 returns
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}


// Spring Boot integration that starts the Netty server without affecting the Tomcat endpoints
@Component
public class NettyBoot implements CommandLineRunner {

    @Autowired
    private NettyServer nettyServer;

    @Override
    public void run(String... args) throws Exception {
        try {
            nettyServer.start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
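One caveat: NettyServer.start() blocks on closeFuture().sync(), so run() will hold the Spring Boot main thread after startup (Tomcat still serves requests on its own threads, since runners execute after the web server has started). A minimal alternative sketch is to launch the server on its own thread; the class is renamed NettyServerBoot here to avoid clashing with the NettyBoot above, and the thread name is an arbitrary choice:

// Alternative: start the blocking Netty server on a dedicated thread
@Component
public class NettyServerBoot implements CommandLineRunner {

    @Autowired
    private NettyServer nettyServer;

    @Override
    public void run(String... args) {
        new Thread(() -> {
            try {
                nettyServer.start();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt flag and exit
            }
        }, "netty-server").start();
    }
}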

3.3 Client

// Netty client class
@Component
public class NettyClient {

    private int port = 9999;
    private String host = "localhost";
    public Channel channel;

    public void start() {
        EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap();
        try {
            bootstrap.group(eventLoopGroup)
                    .channel(NioSocketChannel.class)
                    .option(ChannelOption.SO_KEEPALIVE, true)
                    .remoteAddress(host, port)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            ChannelPipeline pipeline = socketChannel.pipeline();
                            // Inbound: frame decoder first (2-byte length field, stripped),
                            // then the String decoder (inbound handlers)
                            pipeline.addLast(new LengthFieldBasedFrameDecoder(24 * 1024, 0, 2, 0, 2));
                            pipeline.addLast(new StringDecoder());
                            // Outbound: String encoder plus the 2-byte length prepender (outbound handlers)
                            pipeline.addLast(new LengthFieldPrepender(2));
                            pipeline.addLast(new StringEncoder());
                            pipeline.addLast(new NettyClientHandler());
                        }
                    });
            ChannelFuture future = bootstrap.connect(host, port).sync();
            if (future.isSuccess()) {
                channel = future.channel();
                System.out.println("connect server  成功---------");
            }
            // Listen for the close event and block until the channel is closed
            future.channel().closeFuture().sync();
        }catch (Exception e){
            e.printStackTrace();
        } finally {
            eventLoopGroup.shutdownGracefully();
        }
    }

    public void sendMsg(String msg) {
        this.channel.writeAndFlush(msg);
    }
}

// Handler class
public class NettyClientHandler extends SimpleChannelInboundHandler<Object> {
    private static final Logger log = LoggerFactory.getLogger(NettyClientHandler.class);
    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        log.info(">>>>>>>>连接");
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        log.info(">>>>>>>>退出");
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        log.info(">>>>>>>>>>>>>userEventTriggered:{}", evt);
    }

    /**
     * The client receives data sent by the server
     * @param channelHandlerContext
     * @param obj
     */
    @Override
    protected void channelRead0(ChannelHandlerContext channelHandlerContext, Object obj)  {
        log.info(">>>>>>>>>>>>> client received a message: {}", obj);
        // No manual release needed: SimpleChannelInboundHandler releases the
        // message automatically after channelRead0 returns
    }

    /**
     * The socket channel is active
     * @param ctx
     * @throws Exception
     */
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        log.info(">>>>>>>>>> socket established");
        super.channelActive(ctx);
    }

    /**
     * The socket channel is no longer active
     * @param ctx
     * @throws Exception
     */
    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        log.info(">>>>>>>>>> socket closed");
        super.channelInactive(ctx);
    }
    
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}

// Spring Boot integration that starts the Netty client without affecting the Tomcat endpoints
@Component
public class NettyBoot implements CommandLineRunner {

    @Autowired
    private NettyClient nettyClient;

    @Override
    public void run(String... args) throws Exception {
        nettyClient.start();
    }
}
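A hypothetical usage sketch: once the client channel is up, sendMsg can be exposed through a Spring MVC endpoint. The controller below is illustrative and not part of the original project; the mapping path and parameter name are made up. As with the server, NettyClient.start() blocks on closeFuture().sync(), so in practice it is usually launched on a separate thread before this endpoint is called.

// Hypothetical REST endpoint that forwards messages through the Netty client
@RestController
public class MsgController {

    @Autowired
    private NettyClient nettyClient;

    @GetMapping("/send")
    public String send(@RequestParam String msg) {
        // Writes the string to the channel; the pipeline frames and encodes it
        nettyClient.sendMsg(msg);
        return "sent: " + msg;
    }
}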

4. References

https://netty.io/
https://www.cnblogs.com/telwanggs/p/12119697.html
https://blog.csdn.net/lmdsoft/article/details/105618052
https://www.infoq.cn/article/netty-threading-model
