ChannelPipeline: the main artery of IO event processing

In the previous articles we analyzed the main logic of NioEventLoop and related classes. We saw that Netty confines work to a single thread per event loop to avoid contention between threads, which minimizes concurrency problems, reduces the use of locks, and effectively cuts down thread-switching overhead and CPU time. We also briefly looked at EventLoopGroup, Netty's thread-group abstraction, which currently distributes channels across its threads in a round-robin fashion. Through those articles we gained some idea of how a channel is initialized and registered on an EventLoop, and of the code that starts and runs the thread in SingleThreadEventLoop. We also analyzed the event loop in NioEventLoop that processes TCP-based IO events, and through a detailed reading of the code we learned the different handling logic for connect, write, read, and accept events. However, we did not analyze the handling of read and write events in much detail, because it requires understanding another very important Netty module, ChannelPipeline, together with a series of related classes such as Channel, ChannelHandler, and ChannelHandlerContext. Netty handles events using the classic chain-of-responsibility design pattern. This pattern makes Netty's IO event processing framework easy to extend, provides a good abstraction for business logic, and greatly lowers the difficulty of using Netty, making IO event processing fit naturally with how we think about the problem.
That long preamble is really just a summary and review of the earlier articles, leading into the topic of this one: Netty's IO event processing chain.
Netty's code structure is quite tidy and the boundaries between its modules are clear. The EventLoop, as the "birthplace" of IO events, interacts with the Channel class, while ChannelPipeline, ChannelHandlerContext, ChannelHandler, and a few related classes interact with the Channel; they do not interact directly with the EventLoop.

The structure of ChannelPipeline

First, each Channel creates a ChannelPipeline when it is initialized, as we saw earlier when analyzing the initialization of NioSocketChannel. Currently DefaultChannelPipeline is the only implementation of ChannelPipeline, so that is the class we analyze. Internally, DefaultChannelPipeline maintains a doubly linked list whose nodes are of type AbstractChannelHandlerContext. Right after initialization, DefaultChannelPipeline creates two initial nodes, HeadContext and TailContext. These two nodes are not mere sentinel markers; they each have a real role:

  • HeadContext implements several methods such as bind, connect, disconnect, close, write, and flush, mostly by delegating directly to the corresponding unsafe methods. Like other nodes, it passes events to the next node through the fire* methods of AbstractChannelHandlerContext.
  • TailContext mainly handles written data and implements almost no logic of its own; nearly all of its behavior is inherited from AbstractChannelHandlerContext, whose handling of most events is simply to pass the event on to the next node. Note that the "next" node is not necessarily the one before or after in the list; it depends on the type of event or operation. Methods of the ChannelOutboundInvoker interface propagate events from the tail node toward the head node, while methods of the ChannelInboundInvoker interface propagate events from the head node toward the tail node. We can picture the head node as the one closest to the socket and the tail node as the one farthest from it: when data comes in and a read event is generated, it starts at the head node and travels backward; when data is written, the operation travels from the tail node to the head node.
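The doubly linked structure and the two propagation directions described above can be sketched in plain Java. This is a minimal illustration only, not Netty's actual code; every class and method name in it is made up for the sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of DefaultChannelPipeline's layout: head and tail sentinel
// nodes plus user handlers in a doubly linked list.
public class MiniPipeline {
    static class Node {
        final String name;
        Node prev, next;
        Node(String name) { this.name = name; }
    }

    final Node head = new Node("head");
    final Node tail = new Node("tail");
    final List<String> trace = new ArrayList<>();

    MiniPipeline() { head.next = tail; tail.prev = head; }

    // insert a user handler just before the tail, like addLast
    void addLast(String name) {
        Node n = new Node(name);
        Node p = tail.prev;
        p.next = n; n.prev = p; n.next = tail; tail.prev = n;
    }

    // inbound events (e.g. a read) travel head -> tail
    void fireInbound() {
        for (Node n = head; n != null; n = n.next) trace.add("in:" + n.name);
    }

    // outbound operations (e.g. a write) travel tail -> head
    void fireOutbound() {
        for (Node n = tail; n != null; n = n.prev) trace.add("out:" + n.name);
    }

    public static void main(String[] args) {
        MiniPipeline p = new MiniPipeline();
        p.addLast("decoder");
        p.addLast("business");
        p.fireInbound();   // in:head, in:decoder, in:business, in:tail
        p.fireOutbound();  // out:tail, out:business, out:decoder, out:head
        System.out.println(p.trace);
    }
}
```

The key point the sketch shows is that the same list is walked in opposite directions depending on whether the event is inbound or outbound, which is exactly the head-near-the-socket intuition described above.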

Below, we use the two most important events, the read event and the write event, to analyze how this chained structure in Netty actually operates.

The read event

First, we need to find where a read event originates and which call starts its propagation. Naturally, we should look in the EventLoop, where read events are produced.
The following is where NioEventLoop handles a read event by calling NioUnsafe.read:

       // handle read and accept events
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            unsafe.read();
        }

We continue with the NioByteUnsafe.read method, which we mentioned earlier when analyzing NioEventLoop's event processing logic. This method first allocates a buffer through the buffer allocator, then reads data from the channel (i.e. the socket) into the buffer; each time a buffer is filled by a read, a read event is triggered. Let's look at where the read event is actually fired:

            do {
                // allocate a buffer
                byteBuf = allocHandle.allocate(allocator);
                // read the channel's data into the buffer
                allocHandle.lastBytesRead(doReadBytes(byteBuf));
                // if nothing was read, there is no data left to read in the channel
                if (allocHandle.lastBytesRead() <= 0) {
                    // nothing was read. release the buffer.
                    byteBuf.release();
                    byteBuf = null;
                    // a negative read count means the channel has been closed
                    close = allocHandle.lastBytesRead() < 0;
                    if (close) {
                        // There is nothing left to read as we received an EOF.
                        readPending = false;
                    }
                    break;
                }

                // update the handle's internal bookkeeping
                allocHandle.incMessagesRead(1);
                readPending = false;
                // fire a read event into the channel's handler pipeline,
                // so the data just read can be processed by each ChannelHandler
                pipeline.fireChannelRead(byteBuf);
                byteBuf = null;
                // continue reading only if the last read returned data and
                // completely filled the allocated buffer, which suggests the
                // channel may still have data pending
            } while (allocHandle.continueReading());

For completeness I have pasted the whole read loop here, but we only care about the single line pipeline.fireChannelRead(byteBuf). Now that we have found the entry point where a ChannelPipeline read event is triggered, we can follow this method step by step and trace how the event is passed along.

DefaultChannelPipeline.fireChannelRead

Looking at the ChannelPipeline interface, there are several methods whose names begin with fire. The naming simply expresses that these methods trigger an event, which is then delivered along the pipeline's internal list of handlers.
Here we see a call to a static method with the head node as an argument, which means event delivery starts from the head node.

public final ChannelPipeline fireChannelRead(Object msg) {
    AbstractChannelHandlerContext.invokeChannelRead(head, msg);
    return this;
}

AbstractChannelHandlerContext.invokeChannelRead(final AbstractChannelHandlerContext next, Object msg)

As we can see, this method executes the processing logic by calling the node's invokeChannelRead:

static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
    // maintain the reference count, mainly to detect resource leaks
    final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        // call invokeChannelRead to execute the processing logic
        next.invokeChannelRead(m);
    } else {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                next.invokeChannelRead(m);
            }
        });
    }
}

AbstractChannelHandlerContext.invokeChannelRead(Object msg)

Here we can see that AbstractChannelHandlerContext implements the read logic by delegating to its own internal handler object. This reflects the role of ChannelHandlerContext in the overall structure: it acts as a middleman between the ChannelPipeline and the handler. A natural question follows: since ChannelHandlerContext does no substantial work itself, why add this extra middle layer, and what is the benefit of the design? I believe it exists to shield the user from Netty's framework details as much as possible. Imagine there were no intermediate context: the user would have to understand the internals of ChannelPipeline, and when delivering an event would have to work out which node to pass it to, and whether to walk the list forward or backward along the chain. So in my view the biggest role of ChannelHandlerContext is to encapsulate the linked-list logic and the different propagation directions for different kinds of operations. It also passes along some references, for example handing the channel reference to user code.
Back to the topic. From the previous method we know that the read event starts from the HeadContext node, so let's look at HeadContext's channelRead method (HeadContext also implements the handler methods, and its handler() returns itself).

private void invokeChannelRead(Object msg) {
    // if this handler is ready, execute its processing logic;
    // otherwise pass the event on to the next handler node
    if (invokeHandler()) {
        try {
            // call the internal handler's channelRead method
            ((ChannelInboundHandler) handler()).channelRead(this, msg);
        } catch (Throwable t) {
            notifyHandlerException(t);
        }
    } else {
        fireChannelRead(msg);
    }
}

HeadContext.channelRead

An important point to note here is the call to ChannelHandlerContext.fireChannelRead. This is the event propagation method: the fire* methods pass the current operation (or event) from the current node to the next node for processing. This is what realizes the propagation of an event along the list.

   public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.fireChannelRead(msg);
    }

Summary

Let's pause here and summarize the propagation mechanism of the read event inside ChannelPipeline. It is actually very simple:

  • First, the external caller (unsafe) eventually calls ChannelPipeline.fireChannelRead, passing in the data read from the channel as a parameter
  • This calls the static method AbstractChannelHandlerContext.invokeChannelRead with the head node as argument
  • Then, starting from the head node HeadContext, the node's invokeChannelRead method (i.e. the invokeChannelRead method of ChannelHandlerContext) is called
  • invokeChannelRead calls the channelRead method of the current node's handler object to execute the processing logic
  • The handler's channelRead method can call AbstractChannelHandlerContext.fireChannelRead to pass the event on to the next node
  • In this way the event keeps propagating along the chain; of course, if business logic requires it, propagation can be terminated at any node, simply by not calling ChannelHandlerContext.fireChannelRead in that node
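The steps above can be sketched as a toy relay. This is an illustration only, with heavily simplified names; the real AbstractChannelHandlerContext also deals with executors, reference counting, and exception notification:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the fireChannelRead/invokeChannelRead relay: each context
// invokes its handler's channelRead, and the handler decides whether to
// forward the event via fireChannelRead or stop the propagation.
public class ReadRelay {
    interface Handler { void channelRead(Ctx ctx, Object msg); }

    static class Ctx {
        final Handler handler;
        Ctx next;                                // next inbound node, toward the tail
        Ctx(Handler h) { this.handler = h; }
        void fireChannelRead(Object msg) {       // propagate to the next node
            if (next != null) next.invokeChannelRead(msg);
        }
        void invokeChannelRead(Object msg) {     // run this node's handler
            handler.channelRead(this, msg);
        }
    }

    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        // second handler records the message but does NOT forward: the chain stops here
        Ctx last = new Ctx((ctx, msg) -> seen.add("last:" + msg));
        // first handler records the message, then forwards it to the next node
        Ctx first = new Ctx((ctx, msg) -> { seen.add("first:" + msg); ctx.fireChannelRead(msg); });
        first.next = last;
        first.invokeChannelRead("data");
        System.out.println(seen); // [first:data, last:data]
    }
}
```

Note how termination falls out naturally: a handler that never calls fireChannelRead simply ends the propagation, which is exactly the last bullet above.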

The write event

Next, we analyze how the data write operation propagates. The entry point of the write operation is not as easy to find as that of the read event. Data written by user code in Netty is ultimately placed in an internal buffer; when NioEventLoop detects at the bottom layer that the socket is writable, it transmits the data currently in the buffer to the socket, and the user never touches the SocketChannel at this layer.
From the earlier analysis we know that users generally deal with classes like Channel, ChannelHandler, and ChannelHandlerContext. Writing data is triggered through the Channel's write and writeAndFlush methods. The difference between the two is that writeAndFlush also triggers a flush after writing, which actually writes the data into the socket's buffer.

AbstractChannel.write

The operation is still handed off to the ChannelPipeline internally:

public ChannelFuture write(Object msg, ChannelPromise promise) {
    return pipeline.write(msg, promise);
}

DefaultChannelPipeline.write

Here we can clearly see that the write operation starts from the tail node. TailContext does not override the write method, so the call ends up in the corresponding method of AbstractChannelHandlerContext.
Walking down the call chain, we find a family of write methods whose job is to pass the write operation to the next processing node of type ChannelOutboundHandler. Note that this search moves forward from the tail node, traversing the list in the opposite order to reading data.
The actual call:

public final ChannelFuture write(Object msg, ChannelPromise promise) {
    return tail.write(msg, promise);
}

AbstractChannelHandlerContext.write

From this method we can clearly see that write passes the operation to the next processor node of type ChannelOutboundHandler.

private void write(Object msg, boolean flush, ChannelPromise promise) {
    ObjectUtil.checkNotNull(msg, "msg");
    try {
        if (isNotValidPromise(promise, true)) {
            ReferenceCountUtil.release(msg);
            // cancelled
            return;
        }
    } catch (RuntimeException e) {
        ReferenceCountUtil.release(msg);
        throw e;
    }

    // walk forward along the list to find the next handler node
    // of type ChannelOutboundHandler
    final AbstractChannelHandlerContext next = findContextOutbound(flush ?
            (MASK_WRITE | MASK_FLUSH) : MASK_WRITE);
    final Object m = pipeline.touch(msg, next);
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        if (flush) {
            // call AbstractChannelHandlerContext.invokeWriteAndFlush to perform the actual write
            next.invokeWriteAndFlush(m, promise);
        } else {
            next.invokeWrite(m, promise);
        }
    } else {
        // if we are not on the event loop thread, wrap the write logic
        // in a task and add it to the EventLoop's task queue
        final AbstractWriteTask task;
        if (flush) {
            task = WriteAndFlushTask.newInstance(next, m, promise);
        }  else {
            task = WriteTask.newInstance(next, m, promise);
        }
        if (!safeExecute(executor, task, promise, m)) {
            // We failed to submit the AbstractWriteTask. We need to cancel it so we decrement the pending bytes
            // and put it back in the Recycler for re-use later.
            //
            // See https://github.com/netty/netty/issues/8343.
            task.cancel();
        }
    }
}
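The mask-based backward search performed by findContextOutbound can be sketched as follows. This is a hypothetical, simplified version: only the constant names MASK_WRITE and MASK_FLUSH come from the code above; everything else is illustrative:

```java
// Toy model of findContextOutbound: walk backward (toward the head) from
// the current node, skipping nodes whose handlers are not interested in
// the requested operation, until an interested node is found.
public class FindOutbound {
    static final int MASK_WRITE = 1 << 0;
    static final int MASK_FLUSH = 1 << 1;

    static class Node {
        final String name;
        final int executionMask;   // which operations this node's handler implements
        Node prev;
        Node(String name, int mask) { this.name = name; this.executionMask = mask; }
    }

    // skip every node whose mask has no bit in common with `mask`
    static Node findContextOutbound(Node from, int mask) {
        Node ctx = from;
        do {
            ctx = ctx.prev;
        } while ((ctx.executionMask & mask) == 0);
        return ctx;
    }

    public static void main(String[] args) {
        // head handles write and flush, like HeadContext delegating to unsafe
        Node head = new Node("head", MASK_WRITE | MASK_FLUSH);
        Node inboundOnly = new Node("decoder", 0);       // inbound-only, always skipped
        Node encoder = new Node("encoder", MASK_WRITE);  // outbound handler for writes
        Node tail = new Node("tail", 0);
        tail.prev = encoder; encoder.prev = inboundOnly; inboundOnly.prev = head;

        // a write starting at the tail lands on the encoder first
        System.out.println(findContextOutbound(tail, MASK_WRITE).name);
        // continuing from the encoder skips the inbound-only node and reaches the head
        System.out.println(findContextOutbound(encoder, MASK_WRITE | MASK_FLUSH).name);
    }
}
```

This is why an inbound-only handler sitting between two outbound handlers never sees a write: the traversal simply skips over it.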

AbstractChannelHandlerContext.invokeWrite

Next we follow the call into the invokeWrite0 method:

private void invokeWrite(Object msg, ChannelPromise promise) {
    if (invokeHandler()) {
        invokeWrite0(msg, promise);
    } else {
        write(msg, promise);
    }
}

AbstractChannelHandlerContext.invokeWrite0

Here we can clearly see that it ultimately calls the handler's write method to execute the real processing logic, which is implemented by your own handler.

private void invokeWrite0(Object msg, ChannelPromise promise) {
    try {
        // call the current node's handler's write method to perform the real write logic
        ((ChannelOutboundHandler) handler()).write(this, msg, promise);
    } catch (Throwable t) {
        notifyOutboundHandlerException(t, promise);
    }
}

At this point, we know how the write operation starts from the tail node, and how the current processing node can pass the operation to the next node by calling AbstractChannelHandlerContext.write. So after the data has passed through all these layers, how is it finally written to the socket? To answer this question, we need to look at the code of HeadContext. We know the write operation propagates forward from the tail node, and the last node it reaches is the head node, HeadContext.

HeadContext.write

It finally calls the unsafe.write method.
In the implementation in AbstractChannel.AbstractUnsafe, the write method stores the data, after it has been processed by the preceding chain of handlers, in an internal buffer.

    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        unsafe.write(msg, promise);
    }

Propagation of the flush operation

We mentioned earlier that besides write there is writeAndFlush, which performs a flush right after writing the data. The flush operation is likewise propagated forward from the tail node and finally delivered to the head node HeadContext, whose flush method is as follows:

    public void flush(ChannelHandlerContext ctx) {
        unsafe.flush();
    }

In the implementation in AbstractChannel.AbstractUnsafe, the flush operation writes the data previously stored in the internal buffer into the socket, completing the flush.
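The buffer-then-flush behavior can be modeled with a toy sketch. This is an assumption-laden illustration, not Netty's AbstractUnsafe; a plain list stands in for the real socket:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of write vs. flush: write only appends to an outbound buffer,
// and nothing reaches the "socket" until a flush drains that buffer.
public class WriteThenFlush {
    final Deque<String> outboundBuffer = new ArrayDeque<>(); // pending, unflushed writes
    final List<String> socket = new ArrayList<>();           // stands in for the real socket

    void write(String msg) { outboundBuffer.add(msg); }      // buffered, not yet sent

    void flush() {                                           // drain the buffer to the socket
        while (!outboundBuffer.isEmpty()) socket.add(outboundBuffer.poll());
    }

    void writeAndFlush(String msg) { write(msg); flush(); }  // write followed by an immediate flush

    public static void main(String[] args) {
        WriteThenFlush ch = new WriteThenFlush();
        ch.write("a");
        ch.write("b");
        System.out.println(ch.socket);  // [] -- nothing reaches the socket before a flush
        ch.flush();
        System.out.println(ch.socket);  // [a, b]
        ch.writeAndFlush("c");
        System.out.println(ch.socket);  // [a, b, c]
    }
}
```

The sketch makes the earlier statement concrete: write alone never touches the socket, which is why a forgotten flush is such a common source of "my data was never sent" confusion.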

Conclusion

In this article, we took the two most important IO events, the read event and the write event, as the starting point for a detailed analysis of how Netty handles them. The read event matches the impression we formed earlier with JDK NIO: data is read from the socket and then processed. The write event, however, differs greatly from the JDK NIO notion, because Netty has reworked and optimized data writing considerably: user code calls the channel's write method, which causes all the relevant handlers on the pipeline to process the data to be written in turn; finally, the head node HeadContext writes it into the channel's internal buffer, and a flush operation writes the buffered data into the socket.
The most important point, and the one most worth learning from, is the chain-of-responsibility pattern. This is clearly a successful application of that pattern: it greatly improves the framework's extensibility and makes the user-facing interface easier to understand and simpler to use, shielding the user from most of the framework's implementation details.


Origin www.cnblogs.com/zhuge134/p/11105485.html