Message Queue (VI): The Netty Multithreaded Model in RocketMQ's RPC Communication

I. Why use Netty as the high-performance communication library?

When reading the RPC communication part of RocketMQ, many readers may wonder: why does RocketMQ choose Netty instead of programming directly against the JDK NIO APIs? A brief introduction to Netty is helpful here.

Netty is a high-performance, open-source network communication framework that wraps the JDK NIO library. It provides an asynchronous, event-driven network application framework and tools for rapidly developing high-performance, high-reliability network servers and clients.

The following are the main reasons why the RPC communication module of a typical system chooses Netty as its underlying communication library (the author believes RocketMQ chose Netty for its RPC layer for the same reasons):

(1) Netty's programming API is simple to use and has a low barrier to entry; programmers do not need to pay attention to or understand the many models and concepts of NIO programming.

(2) For programmers, the communication framework can be flexibly customized and extended through Netty's ChannelHandler according to business requirements (a minimal sketch follows this list).

(3) The Netty framework itself supports packet framing (handling split and sticky packets) and exception-detection mechanisms, so programmers are freed from the tedious details of Java NIO and only need to focus on business logic.

(4) Netty fixed (more precisely, worked around in a different and complete way) the JDK NIO epoll bug, in which the Selector spins on empty polls and drives the CPU to 100%.

(5) The Netty framework applies a number of internal optimizations to its threads and selectors, and its carefully designed multithreaded Reactor model achieves very efficient concurrent processing.

(6) Netty has been fully battle-tested in many open-source projects (for example Avro, Hadoop's RPC framework, uses Netty as its communication layer), so its robustness and reliability are well proven.
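To make points (1) to (3) concrete, here is a minimal, hedged sketch of a Netty server whose business logic sits in a custom ChannelHandler. It is an illustration only; the class name, port and echo logic are made up for this article and are not RocketMQ code.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class MinimalNettyServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);  // accepts connections
        EventLoopGroup worker = new NioEventLoopGroup(); // handles I/O events
        try {
            ServerBootstrap b = new ServerBootstrap()
                .group(boss, worker)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline()
                          .addLast(new StringDecoder())  // decoding done by Netty codec handlers
                          .addLast(new StringEncoder())
                          .addLast(new SimpleChannelInboundHandler<String>() {
                              @Override
                              protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                  // business logic lives in a custom ChannelHandler
                                  ctx.writeAndFlush("echo: " + msg + "\n");
                              }
                          });
                    }
                });
            b.bind(8888).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}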

II. The Netty multithreaded model in RocketMQ's RPC communication

The RPC communication part of RocketMQ adopts a "1 + N + M1 + M2" Reactor multithreaded pattern, with some extensions and optimizations on top of it for the network communication part. In this section we look at the concrete design and implementation of this part.

2.1 The design concept of Netty's Reactor multithreaded model

It is worth briefly introducing Netty's Reactor multithreaded model here. The core design idea of the Reactor multithreaded model is "divide and conquer" plus "event-driven".

(1) Divide and conquer

Generally, the complete processing of a network request can be broken down into a series of steps: accept the connection (Accept), read the data (Read), decode/encode (Decode/Encode), process the business logic (Process), and send the response (Send). The Reactor model maps each step to a task; the smallest unit of execution for a server thread is no longer a complete network request but one of these tasks, and the tasks are executed in a non-blocking manner.

(2) Event-driven

Each task corresponds to a specific network event. When a task is ready, the Reactor receives the corresponding network event notification and dispatches the task to the Handler bound to that event for execution. A minimal single-threaded sketch of this idea follows.
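To make the two ideas concrete, the following hedged, minimal single-threaded reactor is written against plain java.nio; it is an illustration of the pattern only, not RocketMQ or Netty code. Accept and Read are separate events, each dispatched to its own small piece of handling logic.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MiniReactor {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9999));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT); // the "Accept" event

        while (true) {
            selector.select(); // block until at least one event is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) { // divide and conquer: accepting is one small task
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) { // reading is another task, triggered by its own event
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    if (ch.read(buf) < 0) {
                        ch.close();
                        continue;
                    }
                    buf.flip();
                    ch.write(buf); // "Process" and "Send" are inlined here for brevity
                }
            }
        }
    }
}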

2.2 Design and implementation of the 1 + N + M1 + M2 Reactor multithreaded model in RocketMQ's RPC communication

(1) The Reactor multithreaded design and flow in RocketMQ's RPC communication

RocketMQ's RPC communication uses Netty as the underlying communication library and likewise follows the Reactor multithreaded model, while adding some extensions and optimizations on top of it. Below is the Netty multithreaded model diagram of RocketMQ's RPC communication layer, which gives a general picture of the multithreaded design of RocketMQ's RPC communication.

From the block diagram above we can get a general picture of the Reactor multithreaded model of RocketMQ's NettyRemotingServer. One Reactor main thread (eventLoopGroupBoss, the "1" above) is responsible for listening for TCP connection requests. Once a connection is established, it is handed to the Reactor thread pool (eventLoopGroupSelector, the "N" above; the default in the source code is 3), which registers the established socket connection on a selector (the RocketMQ source code automatically chooses between NIO and epoll depending on the OS type; this can also be configured by a parameter) and then listens for the real network data. When network data arrives, it is handed to the Worker thread pool (defaultEventExecutorGroup, the "M1" above; the default in the source code is 8). To process RPC network requests more efficiently, this Worker thread pool is dedicated to Netty-related network processing (encoding/decoding, idle-connection management, connection management and network request handling). The business operations are then executed on the business thread pools: based on the business request code in the RemotingCommand, the corresponding processor is looked up in the locally cached processorTable variable, wrapped into a task, and submitted to the corresponding business processor thread pool for execution (for example sendMessageExecutor for sending messages, i.e. the "M2" above).

The following table summarizes the thread pools in the "1 + N + M1 + M2" Reactor multithreaded model described above:

Threads | Thread pool (thread name prefix) | Role
1 | eventLoopGroupBoss (NettyBoss_) | Reactor main thread: listens for and accepts TCP connection requests
N | eventLoopGroupSelector (NettyServerEPOLLSelector_ / NettyServerNIOSelector_) | Reactor thread pool: registers established connections on a selector and listens for network data
M1 | defaultEventExecutorGroup (NettyServerCodecThread_) | Worker thread pool: encoding/decoding, idle-connection management, connection management and request dispatch
M2 | business processor thread pools (e.g. sendMessageExecutor) | Business thread pools: execute the actual processing for each request code
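To illustrate the hand-off from the M1 worker threads to the M2 business pools, here is a hedged, heavily simplified sketch of a processor-table dispatch. The names echo RocketMQ's remoting module (processorTable, NettyRequestProcessor, RemotingCommand), but the classes below are condensed stand-ins written for this article, not the actual implementation.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

// Condensed stand-ins for RocketMQ's remoting types, for illustration only.
interface NettyRequestProcessor {
    RemotingCommand processRequest(RemotingCommand request) throws Exception;
}

class RemotingCommand {
    int code;    // business request code, e.g. SEND_MESSAGE
    byte[] body;
}

class ProcessorDispatcher {
    // request code -> (processor, business thread pool), analogous to RocketMQ's processorTable
    private final Map<Integer, Map.Entry<NettyRequestProcessor, ExecutorService>> processorTable = new HashMap<>();

    public void registerProcessor(int code, NettyRequestProcessor processor, ExecutorService executor) {
        processorTable.put(code, Map.entry(processor, executor));
    }

    // Called on a Netty worker (M1) thread after decoding; the real work runs on the M2 pool.
    public void dispatch(RemotingCommand request, Consumer<RemotingCommand> responseWriter) {
        Map.Entry<NettyRequestProcessor, ExecutorService> pair = processorTable.get(request.code);
        if (pair == null) {
            return; // RocketMQ would write back a "request code not supported" response here
        }
        pair.getValue().submit(() -> {
            try {
                RemotingCommand response = pair.getKey().processRequest(request);
                responseWriter.accept(response); // write the response back on the channel
            } catch (Exception e) {
                // the real code logs the error and returns a system-error response
            }
        });
    }
}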

(2) How the Reactor multithreaded model is reflected in RocketMQ's RPC communication code

Having covered the overall design and flow of the Reactor multithreaded model, you should now have a fairly complete picture of the Netty part of RocketMQ's RPC communication, so let us next look at some details from the source code (readers will need to understand the concepts and techniques of Java NIO and Netty when reading this part). When the NettyRemotingServer instance is initialized, the relevant variables are initialized, including the serverBootstrap and nettyServerConfig parameters and the channelEventListener listener, and at the same time the two Netty EventLoopGroup thread pools eventLoopGroupBoss and eventLoopGroupSelector are created. (Note that on Linux with native epoll enabled, EpollEventLoopGroup is used, which calls the C-implemented epoll through JNI; otherwise the Java NIO based NioEventLoopGroup is used.) The specific code is as follows:

public NettyRemotingServer(final NettyServerConfig nettyServerConfig,
        final ChannelEventListener channelEventListener) {
        super(nettyServerConfig.getServerOnewaySemaphoreValue(), nettyServerConfig.getServerAsyncSemaphoreValue());
        this.serverBootstrap = new ServerBootstrap();
        this.nettyServerConfig = nettyServerConfig;
        this.channelEventListener = channelEventListener;
      // ... part of the code omitted
      // nThreads is set to 1 at initialization, i.e. the RemotingServer dispatcher thread that manages connections and dispatches requests is a single thread, used to accept TCP connections from clients
        this.eventLoopGroupBoss = new NioEventLoopGroup(1, new ThreadFactory() {
            private AtomicInteger threadIndex = new AtomicInteger(0);

            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r, String.format("NettyBoss_%d", this.threadIndex.incrementAndGet()));
            }
        });

        /**
         * Choose NIO or epoll for the selector thread pool according to the configuration:
         * on Linux with native epoll enabled, EpollEventLoopGroup is used (i.e. the C-implemented epoll called through JNI);
         * otherwise, the Java NIO based NioEventLoopGroup is used.
         */
        if (useEpoll()) {
            this.eventLoopGroupSelector = new EpollEventLoopGroup(nettyServerConfig.getServerSelectorThreads(), new ThreadFactory() {
                private AtomicInteger threadIndex = new AtomicInteger(0);
                private int threadTotal = nettyServerConfig.getServerSelectorThreads();

                @Override
                public Thread newThread(Runnable r) {
                    return new Thread(r, String.format("NettyServerEPOLLSelector_%d_%d", threadTotal, this.threadIndex.incrementAndGet()));
                }
            });
        } else {
            this.eventLoopGroupSelector = new NioEventLoopGroup(nettyServerConfig.getServerSelectorThreads(), new ThreadFactory() {
                private AtomicInteger threadIndex = new AtomicInteger(0);
                private int threadTotal = nettyServerConfig.getServerSelectorThreads();

                @Override
                public Thread newThread(Runnable r) {
                    return new Thread(r, String.format("NettyServerNIOSelector_%d_%d", threadTotal, this.threadIndex.incrementAndGet()));
                }
            });
        }
        // ... part of the code omitted

After the NettyRemotingServer instance has been initialized, it is started. During the start-up phase, the server binds together one acceptor thread (eventLoopGroupBoss), N I/O threads (eventLoopGroupSelector) and M1 worker threads (defaultEventExecutorGroup); the role of each thread pool was introduced in the previous section. One point worth noting here is that once a worker thread has received the network data, it hands it to Netty's ChannelPipeline (which uses the chain-of-responsibility design pattern), where it passes through the handlers one by one from Head to Tail; these Handler instances are specified when NettyRemotingServer is created. NettyEncoder and NettyDecoder are responsible for encoding and decoding between the data transmitted over the network and RemotingCommand. After NettyServerHandler obtains the decoded RemotingCommand, it determines from RemotingCommand.type whether it is a request or a response, then, according to the business request code, wraps it into a task and submits it to the corresponding business processor thread pool.

 @Override
    public void start() {
        // The default event executor group, used to run the logic of the multiple Netty handlers added below

        this.defaultEventExecutorGroup = new DefaultEventExecutorGroup(
                nettyServerConfig.getServerWorkerThreads(),
                new ThreadFactory() {

                    private AtomicInteger threadIndex = new AtomicInteger(0);

                    @Override
                    public Thread newThread(Runnable r) {
                        return new Thread(r, "NettyServerCodecThread_" + this.threadIndex.incrementAndGet());
                    }
                });
        /**
         * The Reactor thread model of RocketMQ's NettyServer:
         * one Reactor main thread listens for TCP connection requests;
         * once the connection is established it is handed to the Reactor thread pool, which registers the
         * connected socket on a selector (two options here, NIO or epoll, configurable) and then listens
         * for the real network data; once the network data is obtained, it is handed to the Worker thread pool.
         */
        // RocketMQ -> the Java NIO 1 + N + M model: 1 acceptor thread, N I/O threads, M1 worker threads.
        ServerBootstrap childHandler =
                this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupSelector)
                        .channel(useEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
                        .option(ChannelOption.SO_BACKLOG, 1024)
                        // The server handles client connection requests sequentially, so only one connection can be accepted at a time; pending connection requests wait in a queue whose size is given by the backlog parameter
                        .option(ChannelOption.SO_REUSEADDR, true) // allow the local address and port to be reused
                        .option(ChannelOption.SO_KEEPALIVE, false) // if enabled, TCP sends a keep-alive probe after two hours without data exchange
                        .childOption(ChannelOption.TCP_NODELAY, true) // disable the Nagle algorithm, suited to the immediate transmission of small packets
                        .childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSndBufSize()) // this option and the next set the send and receive buffer sizes
                        .childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketRcvBufSize())
                        .localAddress(new InetSocketAddress(this.nettyServerConfig.getListenPort()))
                        .childHandler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            public void initChannel(SocketChannel ch) throws Exception {

                                ch.pipeline()
                                        .addLast(defaultEventExecutorGroup, HANDSHAKE_HANDLER_NAME,
                                                new HandshakeHandler(TlsSystemConfig.tlsMode))
                                        .addLast(defaultEventExecutorGroup,
                                                new NettyEncoder(), // RocketMQ encoder; overrides encode() of its parent class
                                                new NettyDecoder(), // RocketMQ decoder; overrides decode() of its parent class
                                                new IdleStateHandler(0, 0, nettyServerConfig.getServerChannelMaxIdleTimeSeconds()), // Netty's built-in idle-connection (heartbeat) handler
                                                new NettyConnectManageHandler(), // connection manager; it captures events such as new connections, disconnections and exceptions, and dispatches them uniformly to the NettyEventExecutor for handling
                                                new NettyServerHandler() // after a message has passed through decoding and the preceding steps, it is dispatched to channelRead0 and then distributed according to the message type
                                        );
                            }
                        });

        if (nettyServerConfig.isServerPooledByteBufAllocatorEnable()) {
            childHandler.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
        }

        try {
            ChannelFuture sync = this.serverBootstrap.bind().sync();
            InetSocketAddress addr = (InetSocketAddress) sync.channel().localAddress();
            this.port = addr.getPort();
        } catch (InterruptedException e1) {
            throw new RuntimeException("this.serverBootstrap.bind().sync() InterruptedException", e1);
        }

        if (this.channelEventListener != null) {
            this.nettyEventExecutor.start();
        }

        // Periodically scan responseTable to collect responses and handle timeouts
        this.timer.scheduleAtFixedRate(new TimerTask() {

            @Override
            public void run() {
                try {
                    NettyRemotingServer.this.scanResponseTable();
                } catch (Throwable e) {
                    log.error("scanResponseTable exception", e);
                }
            }
        }, 1000 * 3, 1000);
    }

From the description above, the thread-pool part of the Reactor model used in RocketMQ's RPC communication can be summarized in the block diagram below.

Overall it can be seen that RocketMQ's RPC communication uses Netty's multithreaded model: the server's connection-listening thread is separated from the I/O threads, and the RPC business-logic layer is further separated from the concrete business-processing threads. Simple operations with controllable latency are completed directly on the RPC communication threads, while complex operations whose latency cannot be controlled are submitted to the back-end business thread pools, which improves communication efficiency and the overall performance of MQ. (PS: NioEventLoop is the abstraction of a thread that continuously loops over and executes processing tasks; each NioEventLoop has a selector that listens on the sockets bound to it.)
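As an example of this separation, the broker registers each business processor together with its own thread pool. The snippet below is a hedged sketch loosely based on how the broker wires the send-message processor; the pool size and wiring are illustrative, not the broker's exact configuration, although RemotingServer.registerProcessor, RequestCode and SendMessageProcessor are real RocketMQ 4.x types.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.rocketmq.broker.processor.SendMessageProcessor;
import org.apache.rocketmq.common.protocol.RequestCode;
import org.apache.rocketmq.remoting.RemotingServer;

public class ProcessorRegistration {
    // Hedged sketch: illustrates the M2 separation, not the broker's actual wiring.
    public static void register(RemotingServer remotingServer, SendMessageProcessor sendMessageProcessor) {
        // A dedicated business thread pool for SEND_MESSAGE requests: the "M2" pool.
        ExecutorService sendMessageExecutor = Executors.newFixedThreadPool(16);

        // SEND_MESSAGE requests are decoded on the M1 worker threads, then executed on
        // sendMessageExecutor instead of blocking the Netty threads.
        remotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendMessageProcessor, sendMessageExecutor);
    }
}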

Note: what feels painful now will, when you look back on it some time later, turn out not to matter at all.

Reproduced from: https://juejin.im/post/5cf11748f265da1bb31c1ff1
