Preface
At the end of Netty源码分析(2)-服务端启动流程, which analyzed how the `ServerSocketChannel` binds to its listening port, we mentioned that once the bind operation completes, Netty submits an asynchronous task that calls `pipeline.fireChannelActive()` to announce that the Channel is now active, which in turn invokes the `channelActive()` callbacks of the handlers. During this process the interest set of the server Channel is updated to `SelectionKey.OP_ACCEPT`. This article therefore splits the establishment of a new server-side connection into two steps, as shown in the flow diagram:
- Setting the listen event `SelectionKey.OP_ACCEPT`
- Registering connections accepted by the MainReactor on the SubReactor
1. Setting the listen event `SelectionKey.OP_ACCEPT`
- `AbstractUnsafe#bind()`

After the `ServerSocketChannel` has been bound to the server's listening port, the Channel is in the Active state, so an asynchronous task announcing the channel activation is submitted to the event loop thread:

```java
public final void bind(final SocketAddress localAddress, final ChannelPromise promise) {
    assertEventLoop();
    // ......
    boolean wasActive = isActive();
    try {
        doBind(localAddress);
    } catch (Throwable t) {
        safeSetFailure(promise, t);
        closeIfClosed();
        return;
    }

    if (!wasActive && isActive()) {
        invokeLater(new Runnable() {
            @Override
            public void run() {
                pipeline.fireChannelActive();
            }
        });
    }

    safeSetSuccess(promise);
}
```
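What `invokeLater()` achieves can be illustrated outside Netty. The sketch below is an assumption-laden stand-in, not Netty code: a plain single-threaded executor plays the role of the `NioEventLoop`, and the point shown is only that the activation notification runs asynchronously on the event loop thread rather than inline.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A minimal sketch (not Netty code): the activation notification is not run
// inline but submitted as a task to the "event loop" thread. A plain
// single-threaded executor stands in for the NioEventLoop here.
public class InvokeLaterDemo {

    // Returns true if the submitted task really ran on the "event loop" thread.
    static boolean ranOnEventLoopThread() throws Exception {
        ExecutorService eventLoop =
                Executors.newSingleThreadExecutor(r -> new Thread(r, "event-loop"));
        try {
            Future<String> threadName =
                    eventLoop.submit(() -> Thread.currentThread().getName());
            return "event-loop".equals(threadName.get());
        } finally {
            eventLoop.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(ranOnEventLoopThread()); // true
    }
}
```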
- The queuing and scheduling of this asynchronous task follow the flow diagram and are not analyzed in detail here. We know that data handling in Netty is delegated to the business handler components; the activation task ultimately calls the `DefaultChannelPipeline#fireChannelActive()` method:

```java
public final ChannelPipeline fireChannelActive() {
    AbstractChannelHandlerContext.invokeChannelActive(head);
    return this;
}
```
- The call above lands in the `HeadContext#channelActive()` method, entering the bidirectional handler chain. This method does two main things:
  - `ctx.fireChannelActive()` propagates the channel-activation event to the next handler, invoking that handler's `channelActive()` method
  - `readIfIsAutoRead()` decides, based on the auto-read configuration, whether to start reading data automatically; auto-read is on by default, so the Channel's `read()` method is called

```java
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    ctx.fireChannelActive();
    readIfIsAutoRead();
}

private void readIfIsAutoRead() {
    if (channel.config().isAutoRead()) {
        channel.read();
    }
}
```
- Reading data on the Channel still delegates to `DefaultChannelPipeline#read()`, which starts the read operation from the tail node of the pipeline's doubly linked handler list:

```java
public final ChannelPipeline read() {
    tail.read();
    return this;
}
```
- The main logic of `TailContext#read()` is to use `findContextOutbound()` to search backwards from the tail of the list for a context that handles outbound events, which is `HeadContext`, and then call the `read()` method of the Handler it wraps:

```java
public ChannelHandlerContext read() {
    final AbstractChannelHandlerContext next = findContextOutbound();
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeRead();
    } else {
        Runnable task = next.invokeReadTask;
        if (task == null) {
            next.invokeReadTask = task = new Runnable() {
                @Override
                public void run() {
                    next.invokeRead();
                }
            };
        }
        executor.execute(task);
    }

    return this;
}
```
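The tail-to-head lookup can be shown with a toy model. The sketch below is not Netty code; the `Ctx` class and its fields are invented for illustration, modeling only the backwards walk over the doubly linked context list until an outbound-capable context is found.

```java
// A toy sketch (not Netty code) of findContextOutbound(): starting from the
// tail, walk the doubly linked context list backwards until a context whose
// handler processes outbound events is found -- for read() that is HeadContext.
public class OutboundLookupDemo {

    static final class Ctx {
        final String name;
        final boolean outbound;
        Ctx prev;

        Ctx(String name, boolean outbound) {
            this.name = name;
            this.outbound = outbound;
        }
    }

    static Ctx findContextOutbound(Ctx ctx) {
        do {
            ctx = ctx.prev;
        } while (!ctx.outbound);
        return ctx;
    }

    // Builds head <- biz <- tail and resolves the outbound context from the tail.
    static String demo() {
        Ctx head = new Ctx("head", true);   // HeadContext handles outbound ops
        Ctx biz = new Ctx("biz", false);    // an inbound-only business handler
        biz.prev = head;
        Ctx tail = new Ctx("tail", false);  // TailContext is inbound-only
        tail.prev = biz;
        return findContextOutbound(tail).name;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "head"
    }
}
```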
- `HeadContext#read()` contains very little logic: it simply calls `Unsafe#beginRead()`, implemented by `AbstractUnsafe#beginRead()`, which finally reaches `AbstractNioChannel#doBeginRead()`. The logic inside `AbstractNioChannel#doBeginRead()` is also concise: it uses the `readInterestOp` field stored in `AbstractNioChannel` to modify the interest-op bits of the selection key. During server startup this field was set to `SelectionKey.OP_ACCEPT`, which means that from this step on, the server starts listening for new connection events:

```java
protected void doBeginRead() throws Exception {
    // Channel.read() or ChannelHandlerContext.read() was called
    final SelectionKey selectionKey = this.selectionKey;
    if (!selectionKey.isValid()) {
        return;
    }

    readPending = true;

    final int interestOps = selectionKey.interestOps();
    if ((interestOps & readInterestOp) == 0) {
        selectionKey.interestOps(interestOps | readInterestOp);
    }
}
```
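The bit arithmetic above can be isolated into a small sketch. This is not Netty code; the helper `beginRead()` below only reproduces the interest-set update, using constant values that match `java.nio.channels.SelectionKey`.

```java
// A minimal sketch of the bit arithmetic in doBeginRead(): the read interest
// op is OR-ed into the key's current interest set only when it is not set yet.
public class InterestOpsDemo {

    static final int OP_READ = 1;        // value of SelectionKey.OP_READ
    static final int OP_ACCEPT = 1 << 4; // value of SelectionKey.OP_ACCEPT

    static int beginRead(int interestOps, int readInterestOp) {
        if ((interestOps & readInterestOp) == 0) {
            return interestOps | readInterestOp;
        }
        return interestOps; // already listening, nothing to change
    }

    public static void main(String[] args) {
        // The server channel starts with interest ops 0;
        // beginRead() switches on OP_ACCEPT exactly once.
        int ops = beginRead(0, OP_ACCEPT);
        System.out.println(ops == OP_ACCEPT);                       // true
        System.out.println(beginRead(ops, OP_ACCEPT) == OP_ACCEPT); // true, idempotent
    }
}
```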
2. Registering the new connection on the SubReactor
- The core of event processing lies in `NioEventLoop`. `NioEventLoop#run()` spins in an infinite for loop, where `NioEventLoop#processSelectedKeys()` handles all pending events on the registered Channels; each individual event ends up in `NioEventLoop#processSelectedKey()`, and a `SelectionKey.OP_ACCEPT` event results in a call to `unsafe.read()`:

```java
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
    // ......
    try {
        int readyOps = k.readyOps();
        // We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
        // the NIO JDK channel implementation may throw a NotYetConnectedException.
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
            // See https://github.com/netty/netty/issues/924
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);

            unsafe.finishConnect();
        }

        // Process OP_WRITE first as we may be able to write some queued buffers and so free memory.
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
            ch.unsafe().forceFlush();
        }

        // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
        // to a spin loop
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            unsafe.read();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}
```
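The branching on `readyOps` can be modeled with plain ints. The sketch below is an illustration only, not Netty code: the strings in the returned list merely label which branch of `processSelectedKey()` would run for a given ready set.

```java
import java.nio.channels.SelectionKey;
import java.util.ArrayList;
import java.util.List;

// A sketch of the readyOps dispatch in processSelectedKey(), mirroring its
// three sequential checks without a live channel; each string only labels
// the branch that would run.
public class ReadyOpsDemo {

    static List<String> actions(int readyOps) {
        List<String> actions = new ArrayList<>();
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            actions.add("finishConnect");
        }
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            actions.add("forceFlush");
        }
        // readyOps == 0 is also treated as a read, working around a possible JDK spin-loop bug
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            actions.add("read");
        }
        return actions;
    }

    public static void main(String[] args) {
        // An OP_ACCEPT-ready server channel lands in the unsafe.read() branch
        System.out.println(actions(SelectionKey.OP_ACCEPT)); // [read]
    }
}
```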
- `unsafe.read()` is an interface call whose server-side implementation is `NioMessageUnsafe#read()`. The important things this method does are:
  - `doReadMessages()`, implemented by `NioServerSocketChannel#doReadMessages()`, accepts the connection through the built-in JDK API and wraps it in a `NioSocketChannel` whose read interest op is `SelectionKey.OP_READ`; this is where the new connection is established
  - `pipeline.fireChannelRead(readBuf.get(i))` passes the newly created `NioSocketChannel` through the business handler components, eventually reaching the server's built-in handler `ServerBootstrapAcceptor#channelRead()`, which registers the new connection from the MainReactor onto the SubReactor

```java
public void read() {
    assert eventLoop().inEventLoop();
    // ......
    try {
        try {
            do {
                int localRead = doReadMessages(readBuf);
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }

                allocHandle.incMessagesRead(localRead);
            } while (allocHandle.continueReading());
        } catch (Throwable t) {
            exception = t;
        }

        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            pipeline.fireChannelRead(readBuf.get(i));
        }
        readBuf.clear();
        allocHandle.readComplete();
        pipeline.fireChannelReadComplete();

        if (exception != null) {
            closed = closeOnReadError(exception);

            pipeline.fireExceptionCaught(exception);
        }

        if (closed) {
            inputShutdown = true;
            if (isOpen()) {
                close(voidPromise());
            }
        }
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}
```
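At the JDK level, what `doReadMessages()` boils down to is a call to `ServerSocketChannel#accept()`. The following plain-JDK sketch demonstrates that step in isolation; the Netty-specific wrapping into a `NioSocketChannel` is omitted, and the port and addresses are illustrative.

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// A plain-JDK sketch of the core of doReadMessages(): accept() a pending
// connection on the listening channel. Netty additionally wraps the accepted
// SocketChannel in a NioSocketChannel with OP_READ as its read interest op.
public class AcceptDemo {

    static boolean acceptOnce() throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            // A client connects, so the server now has one pending connection
            try (SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
                 SocketChannel accepted = server.accept()) { // the call NioServerSocketChannel makes
                return accepted != null && accepted.isConnected();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(acceptOnce()); // true
    }
}
```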
- The full call flow for establishing the new connection and passing it through the business handlers can be traced with the flow diagram above and is not analyzed here; we focus only on the core method, `ServerBootstrapAcceptor#channelRead()`. This method applies the SubReactor configuration options, handler, and `NioEventLoopGroup` instance stored in `ServerBootstrap` to the child Channel, then registers the newly created Channel on the SubReactor through the `childGroup` reference:

```java
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;

    child.pipeline().addLast(childHandler);

    setChannelOptions(child, childOptions, logger);

    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }

    try {
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
```
- The registration performed by `childGroup.register(child)` is nearly identical to the registration flow described in Netty源码分析(2)-服务端启动流程: it includes creating and starting the SubReactor event loop thread as well as creating and configuring the Channel's Pipeline. Once registration completes, `pipeline.fireChannelActive()` is called again to publish the channel-activation event, which triggers the same interest-op update described in the first part of this article; this time, however, the interest op being set is `SelectionKey.OP_READ`.
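The overall MainReactor-to-SubReactor hand-off can be sketched with plain JDK NIO. None of the code below is Netty code; the two Selectors and all names are illustrative stand-ins for the two reactor event loops, and the sketch shows only what `childGroup.register(child)` amounts to at the JDK level.

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// A plain-JDK sketch of the hand-off: the listening channel sits on one
// Selector with OP_ACCEPT, and each accepted channel is registered on a
// second Selector with OP_READ.
public class SubReactorRegisterDemo {

    // Returns the interest ops the child channel ends up with on the SubReactor selector.
    static int registerChild() throws Exception {
        try (Selector mainSelector = Selector.open();   // stands in for the MainReactor
             Selector childSelector = Selector.open();  // stands in for the SubReactor
             ServerSocketChannel server = ServerSocketChannel.open()) {

            server.configureBlocking(false);
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            server.register(mainSelector, SelectionKey.OP_ACCEPT);
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            try (SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
                mainSelector.select(); // wakes up with OP_ACCEPT ready
                try (SocketChannel accepted = server.accept()) {
                    accepted.configureBlocking(false);
                    SelectionKey childKey = accepted.register(childSelector, SelectionKey.OP_READ);
                    return childKey.interestOps();
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(registerChild() == SelectionKey.OP_READ); // true
    }
}
```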