How Netty handles connections

Editor's Note: Netty is a famous open-source Java networking library, known for its high performance and scalability. Many popular frameworks are built on top of it, such as the well-known Dubbo, RocketMQ, and Hadoop, and high-performance RPC frameworks such as sofa-bolt are generally built on Netty as well. In short, Java developers need to learn how to use Netty and understand how it is implemented.
For an introductory explanation of Netty, refer to: "Getting started with Netty: this one article is enough".

Handling a connection in Netty means handling IO events. IO events include read events, ACCEPT events, write events, and OP_CONNECT events.

Event processing combines IO operations with the ChannelPipeline: when an IO event arrives, the read or write operation is performed first, and then the ChannelPipeline takes over the subsequent processing. The ChannelPipeline contains a chain of ChannelHandlers (head + custom ChannelHandlers + tail).
The ChannelPipeline/ChannelHandler mechanism provides decoupling and extensibility. Processing an IO event consists of multiple steps, and these steps correspond exactly to the ChannelHandlers in the ChannelPipeline. If a new data-processing requirement comes up, you simply add another ChannelHandler to the ChannelPipeline. This is highly extensible, and a pattern worth borrowing when writing your own code.

Generally speaking, two design patterns are used to meet such extensibility requirements:

  • Template Method pattern : a template defines the main flow and leaves hook methods open for subclasses, so extension happens at fixed points.
  • Chain of Responsibility pattern : a serial chain of handlers; chain nodes and their corresponding callback methods can be added dynamically.

Netty's ChannelPipeline/ChannelHandler design can be understood as an implementation of the Chain of Responsibility pattern: by dynamically adding ChannelHandlers, it achieves high extensibility and reuse.
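To make the chain idea concrete, here is a minimal self-contained sketch in plain Java. It is not Netty code and all names are made up; it only illustrates how a message flows through handlers added to a pipeline in order, in the spirit of ChannelPipeline.addLast and fireChannelRead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical, simplified model of the Chain of Responsibility behind
// ChannelPipeline: a list of handlers, each transforming the message and
// passing the result to the next one.
public class MiniPipeline {
    private final List<Function<String, String>> handlers = new ArrayList<>();

    // addLast analogue: append a handler to the end of the chain.
    public MiniPipeline addLast(Function<String, String> handler) {
        handlers.add(handler);
        return this;
    }

    // fireChannelRead analogue: run the message through every handler in order.
    public String fireRead(String msg) {
        for (Function<String, String> h : handlers) {
            msg = h.apply(msg);
        }
        return msg;
    }

    public static void main(String[] args) {
        MiniPipeline pipeline = new MiniPipeline()
                .addLast(s -> s.trim())          // "decoder" stage
                .addLast(s -> s.toUpperCase());  // "business logic" stage
        System.out.println(pipeline.fireRead("  hello  "));
    }
}
```

Adding a new processing step is just another `addLast` call, which is exactly the extensibility property the text describes.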

Before going further, we need to understand the NioEventLoop model, which is the key to Netty's connection-handling mechanism. The diagram below shows how connection events are handled:

The corresponding source code for this processing logic is:

// Handle the various IO events
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();

    try {
        int readyOps = k.readyOps();
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // OP_CONNECT event: fired on the client side once the connection
            // to the server has been established
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);
            unsafe.finishConnect();
        }

        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            ch.unsafe().forceFlush();
        }

        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            // Note: read events and ACCEPT events use different unsafe instances:
            // read event -> NioByteUnsafe, ACCEPT event -> NioMessageUnsafe
            unsafe.read();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}

As the code above shows, events fall into three kinds: OP_CONNECT events, write events, and read events (which include ACCEPT events). The following sections cover each of the three in turn:

ACCEPT event

// NioMessageUnsafe
public void read() {
    assert eventLoop().inEventLoop();
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.reset(config);
 
    boolean closed = false;
    Throwable exception = null;
    try {
        do {
            // Call the Java socket's accept method to take the incoming connection
            int localRead = doReadMessages(readBuf);
            // Update the read-message counter
            allocHandle.incMessagesRead(localRead);
        } while (allocHandle.continueReading());
    } catch (Throwable t) {
        exception = t;
    }
 
    // readBuf holds NioChannel instances
    int size = readBuf.size();
    for (int i = 0; i < size; i ++) {
        readPending = false;
        // Trigger fireChannelRead
        pipeline.fireChannelRead(readBuf.get(i));
    }
    readBuf.clear();
    allocHandle.readComplete();
    pipeline.fireChannelReadComplete();
}

Once a channel's connection is established, the channel must be registered with the Selector of a NioEventLoop in the worker group. This registration is performed during fireChannelRead, and the logic lives in ServerBootstrapAcceptor.channelRead:

// ServerBootstrapAcceptor
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;
 
    // Set up the child channel's pipeline handler, options and attributes
    child.pipeline().addLast(childHandler);
    setChannelOptions(child, childOptions, logger);
 
    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }
 
    try {
        // Register the channel with a Selector in the childGroup
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
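Stripped of Netty's abstractions, the ACCEPT path above boils down to plain java.nio operations. The following self-contained sketch (my own code, not Netty's) accepts one connection on a "boss" selector and registers the accepted channel for OP_READ on a separate "worker" selector, mirroring what childGroup.register(child) accomplishes:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Plain java.nio model of the boss/worker split: the boss selector handles
// OP_ACCEPT, and each accepted channel is handed to a worker selector that
// watches it for OP_READ.
public class AcceptDemo {

    // Accept one connection and register it with the worker selector;
    // returns the interestOps the child channel ends up registered with.
    public static int acceptOnce() throws IOException {
        try (Selector boss = Selector.open();
             Selector worker = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {

            server.bind(new InetSocketAddress("127.0.0.1", 0));
            server.configureBlocking(false);
            server.register(boss, SelectionKey.OP_ACCEPT);

            // Client side: connect to the server (plays the remote peer).
            try (SocketChannel client = SocketChannel.open(server.getLocalAddress())) {
                boss.select();                          // wait for the ACCEPT event
                boss.selectedKeys().clear();

                SocketChannel child = server.accept();  // doReadMessages analogue
                child.configureBlocking(false);

                // childGroup.register(child) analogue: put the accepted channel
                // on the worker selector, now interested in read events.
                SelectionKey childKey = child.register(worker, SelectionKey.OP_READ);
                int ops = childKey.interestOps();
                child.close();
                return ops;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(acceptOnce() == SelectionKey.OP_READ);
    }
}
```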

Read event

// NioByteUnsafe
public final void read() {
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
    allocHandle.reset(config);
 
    ByteBuf byteBuf = null;
    boolean close = false;
    try {
        do {
            byteBuf = allocHandle.allocate(allocator);
            // Read data from the channel into byteBuf
            allocHandle.lastBytesRead(doReadBytes(byteBuf));
            if (allocHandle.lastBytesRead() <= 0) {
                // Nothing was read: release the buffer; a negative value
                // means the peer has closed the connection
                byteBuf.release();
                byteBuf = null;
                close = allocHandle.lastBytesRead() < 0;
                break;
            }
 
            allocHandle.incMessagesRead(1);
            readPending = false;
 
            // Trigger fireChannelRead
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;
        } while (allocHandle.continueReading());

        allocHandle.readComplete();
        // Trigger fireChannelReadComplete; if a handler calls
        // ChannelHandlerContext.flush there, the response is sent back to the client
        pipeline.fireChannelReadComplete();
 
        if (close) {
            closeOnRead(pipeline);
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close, allocHandle);
    } finally {
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}
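The read loop above can be illustrated with plain java.nio (a simplified sketch, not Netty code; the buffer size and all names are arbitrary): keep allocating a buffer, reading, and handing the bytes onward while the last read filled the buffer completely, which plays the role of continueReading():

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

public class ReadLoopDemo {
    // Drain a channel the way NioByteUnsafe.read() does: allocate a buffer,
    // read into it, hand the bytes off, and loop while the buffer was filled.
    public static String drain(Pipe.SourceChannel ch) throws IOException {
        StringBuilder received = new StringBuilder();   // stand-in for fireChannelRead
        ByteBuffer buf;
        int lastRead;
        do {
            buf = ByteBuffer.allocate(8);       // allocHandle.allocate analogue
            lastRead = ch.read(buf);            // doReadBytes analogue
            if (lastRead <= 0) {
                break;                          // nothing read: stop (close if < 0)
            }
            buf.flip();
            received.append(StandardCharsets.UTF_8.decode(buf));
        } while (lastRead == 8);                // continueReading analogue: buffer was full
        return received.toString();
    }

    public static void main(String[] args) throws IOException {
        Pipe pipe = Pipe.open();
        pipe.sink().write(ByteBuffer.wrap("hello netty".getBytes(StandardCharsets.UTF_8)));
        pipe.source().configureBlocking(false);
        System.out.println(drain(pipe.source()));
    }
}
```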

Write event

Normally the write event is not registered at all. If the socket send buffer has no free space, a write cannot make progress; only at that point is the OP_WRITE event registered. When free space becomes available again (when the number of writable bytes is greater than or equal to the low water mark), the write event fires and the corresponding callback is triggered.

if ((readyOps & SelectionKey.OP_WRITE) != 0) {
    // Write event. From the flush operation's point of view: the data has not
    // been written to the socket buffer yet, but it has already been written
    // to the channel's outboundBuffer; flush moves the data from the
    // outboundBuffer into the socket buffer
    ch.unsafe().forceFlush();
}
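The rule described above, namely register OP_WRITE only while the send buffer is full and deregister it once the flush completes, can be captured in a tiny helper. This is a hypothetical function of my own, not a Netty API, but it states the invariant precisely; leaving OP_WRITE registered permanently would make the selector spin, since a socket is writable almost all of the time:

```java
import java.nio.channels.SelectionKey;

// Hypothetical helper capturing the OP_WRITE registration rule: watch for
// writability only while there is pending data that could not be flushed.
public class WriteInterestOps {

    public static int nextInterestOps(int currentOps, boolean flushedCompletely) {
        if (flushedCompletely) {
            // All pending data written: stop watching writability.
            return currentOps & ~SelectionKey.OP_WRITE;
        }
        // Send buffer full: wait for the selector to signal free space.
        return currentOps | SelectionKey.OP_WRITE;
    }

    public static void main(String[] args) {
        int ops = SelectionKey.OP_READ;
        ops = nextInterestOps(ops, false);   // partial flush -> add OP_WRITE
        System.out.println((ops & SelectionKey.OP_WRITE) != 0);
        ops = nextInterestOps(ops, true);    // flush done -> remove OP_WRITE
        System.out.println(ops == SelectionKey.OP_READ);
    }
}
```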

CONNECT event

This event is triggered on the side that actively initiates the connection, that is, the client.

if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
    // OP_CONNECT event: fired on the client side once the connection
    // to the server has been established
    int ops = k.interestOps();
    ops &= ~SelectionKey.OP_CONNECT;
    k.interestOps(ops);
 
    // Trigger finishConnect, which in turn fires channelActive; if a custom
    // handler overrides channelActive, it will be invoked here
    unsafe.finishConnect();
}
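In plain java.nio the same sequence looks like this (a self-contained sketch of my own, not Netty code): initiate a non-blocking connect, wait for OP_CONNECT, clear it from the interest set, call finishConnect(), and then switch the channel over to OP_READ:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ConnectDemo {
    // Drive one non-blocking connect to completion against a local server;
    // returns the interestOps the channel ends up with.
    public static int connectOnce() throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open();
             Selector selector = Selector.open();
             SocketChannel client = SocketChannel.open()) {

            server.bind(new InetSocketAddress("127.0.0.1", 0));
            client.configureBlocking(false);

            // connect() returns false when the connect is still in progress.
            boolean pending = !client.connect(server.getLocalAddress());
            SelectionKey key = client.register(selector,
                    pending ? SelectionKey.OP_CONNECT : 0);

            if (pending) {
                selector.select();          // OP_CONNECT fires when the handshake completes
                selector.selectedKeys().clear();
                // Mirror the snippet above: clear OP_CONNECT, then finishConnect().
                key.interestOps(key.interestOps() & ~SelectionKey.OP_CONNECT);
                client.finishConnect();
            }

            // Once the connection is active, watch for read events.
            key.interestOps(SelectionKey.OP_READ);
            return key.interestOps();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(connectOnce() == SelectionKey.OP_READ);
    }
}
```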


Origin www.cnblogs.com/luoxn28/p/11839273.html