Netty 4 Source Code Reading and Analysis -- How the Server Handles Requests

Continuing from the previous post: when the server receives a client connection request (an accept event), it discovers that the event we registered interest in has arrived (this discovery happens in NioEventLoop's run method, which polls continuously) and hands it to the processSelectedKeys method, which eventually reaches this code:
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
    unsafe.read();
}
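The dispatch condition above is a plain bit-mask test on `readyOps`. As an illustrative, JDK-only sketch (the constants are the standard `java.nio.channels.SelectionKey` values; the method name here is mine, not Netty's), this is how the check classifies the ready set:

```java
import java.nio.channels.SelectionKey;

public class ReadyOpsDemo {
    // Mirrors the condition in NioEventLoop.processSelectedKey: read/accept events
    // (and readyOps == 0, a JDK-bug workaround) are routed to unsafe.read()
    static boolean dispatchesToRead(int readyOps) {
        return (readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0
                || readyOps == 0;
    }

    public static void main(String[] args) {
        System.out.println(dispatchesToRead(SelectionKey.OP_ACCEPT)); // accept on the server channel -> true
        System.out.println(dispatchesToRead(SelectionKey.OP_READ));   // read on a child channel -> true
        System.out.println(dispatchesToRead(SelectionKey.OP_WRITE));  // write is handled elsewhere -> false
    }
}
```

This is why both accept events (on the server channel) and read events (on child channels) enter through the same `unsafe.read()` door; which `read()` actually runs depends on which `unsafe` the channel carries.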
Let's look at the unsafe.read method (NioMessageUnsafe.read):
public void read() {
    assert eventLoop().inEventLoop();
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.reset(config);
    boolean closed = false;
    Throwable exception = null;
    try {
        try {
            do {
                int localRead = doReadMessages(readBuf); // readBuf = new ArrayList<Object>();
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }
                allocHandle.incMessagesRead(localRead);
            } while (allocHandle.continueReading());
        } catch (Throwable t) {
            exception = t;
        }
        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            pipeline.fireChannelRead(readBuf.get(i)); // this pipeline belongs to the NioServerSocketChannel
        }
        readBuf.clear();
        allocHandle.readComplete();
        pipeline.fireChannelReadComplete();
        .....
    } finally {
        ......
    }
}
First, let's look at doReadMessages:
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = SocketUtils.accept(javaChannel());
    try {
        if (ch != null) {
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        ......
    }
    return 0;
}

After the server accepts the connection, a new SocketChannel is established; it is then wrapped in a new NioSocketChannel instance, which holds both the parent NioServerSocketChannel and the freshly accepted ch, and is added to readBuf. Next, pipeline.fireChannelRead(readBuf.get(i)) is called, which eventually reaches ServerBootstrapAcceptor.channelRead:

public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;
    child.pipeline().addLast(childHandler); // childHandler is the handler marked "ChannelHandler-ChannelInitializer-1" in the previous post;
    // after addLast the pipeline's linked list is HeadContext<--->ChannelHandler-ChannelInitializer-1<--->TailContext.
    // Note: this is the pipeline of the newly established connection, not the same pipeline as the NioServerSocketChannel's.
    setChannelOptions(child, childOptions, logger);
    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }
    try {
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
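Before moving on: `childGroup.register(child)` above hands the new channel to one of the worker event loops, which the group picks round-robin via an `EventExecutorChooser`. As a simplified, JDK-only sketch of the power-of-two variant of that chooser (the class and field names here are illustrative stand-ins, not Netty's exact internals):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinChooser {
    private final AtomicInteger idx = new AtomicInteger();
    private final String[] workers; // stand-ins for the NioEventLoop instances in the group

    RoundRobinChooser(String[] workers) { this.workers = workers; }

    // When the pool size is a power of two, Netty masks instead of taking a modulo
    String next() {
        return workers[idx.getAndIncrement() & (workers.length - 1)];
    }

    public static void main(String[] args) {
        String[] pool = new String[16]; // e.g. 16 default workers
        for (int i = 0; i < pool.length; i++) pool[i] = "worker-" + i;
        RoundRobinChooser chooser = new RoundRobinChooser(pool);
        // 20 incoming connections wrap around the 16 workers
        for (int i = 0; i < 20; i++) System.out.println(chooser.next());
    }
}
```

This also explains the observation in the summary below: with a fixed pool, once connections outnumber workers, one worker necessarily serves several channels.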

Here we mainly look at the register method. The difference is that this register task is executed on a worker thread (16 by default on my machine). The task runs the same code as during server startup; what differs is that it registers the event of interest (read) on the NioSocketChannel. Now back to unsafe.read, at the next line:

pipeline.fireChannelReadComplete()
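This call propagates channelReadComplete along the pipeline; at the HeadContext it triggers a read() which, via AbstractNioChannel.doBeginRead, re-adds the channel's read interest op to its selection key if it was dropped. The core of that is a bit-OR on the interest set; a JDK-only sketch of the logic (the helper method name is mine):

```java
import java.nio.channels.SelectionKey;

public class BeginReadDemo {
    // Mirrors the heart of AbstractNioChannel.doBeginRead: ensure readInterestOp
    // (OP_ACCEPT for the server channel, OP_READ for child channels)
    // is present in the key's interest set.
    static int beginRead(int interestOps, int readInterestOp) {
        if ((interestOps & readInterestOp) == 0) {
            interestOps |= readInterestOp;
        }
        return interestOps;
    }

    public static void main(String[] args) {
        // Server channel: OP_ACCEPT is restored after the read loop finishes
        System.out.println(beginRead(0, SelectionKey.OP_ACCEPT) == SelectionKey.OP_ACCEPT);
        // Already interested: the interest set is left unchanged
        System.out.println(beginRead(SelectionKey.OP_READ, SelectionKey.OP_READ) == SelectionKey.OP_READ);
    }
}
```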

This effectively makes sure, once more, that the server channel's event of interest is accept. At this point the connection is established. Next let's look at how a read event is handled. Handling a read event enters through the same code path as an accept event; the difference is that the unsafe here is different, and what gets invoked is NioByteUnsafe.read:

public final void read() {
    final ChannelConfig config = config();
    if (shouldBreakReadReady(config)) {
        clearReadPending();
        return;
    }
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
    allocHandle.reset(config);
    ByteBuf byteBuf = null;
    boolean close = false;
    try {
        do {
            byteBuf = allocHandle.allocate(allocator);
            allocHandle.lastBytesRead(doReadBytes(byteBuf)); // read the data into byteBuf
            if (allocHandle.lastBytesRead() <= 0) {
                // nothing was read. release the buffer.
                byteBuf.release();
                byteBuf = null;
                close = allocHandle.lastBytesRead() < 0;
                if (close) {
                    // There is nothing left to read as we received an EOF.
                    readPending = false;
                }
                break;
            }

            allocHandle.incMessagesRead(1);
            readPending = false;
            pipeline.fireChannelRead(byteBuf); // this reaches EchoServerHandler.channelRead, which prints the message
            byteBuf = null;
        } while (allocHandle.continueReading());

        allocHandle.readComplete();
        pipeline.fireChannelReadComplete();

        if (close) {
            closeOnRead(pipeline);
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close, allocHandle);
    } finally {
        ......
    }
}

From this we can see that once data arrives over socket I/O, it is handed to our custom handler for processing (business logic). To summarize:

1. After the server accepts a connection, it creates a new channel wrapped in a NioSocketChannel and hands the channel's registration task to a worker thread; the event of interest registered is read. With the default of 16 workers here, if the number of incoming connections is very large, one worker may handle the registration tasks of multiple newly created channels.

2. When the client sends some data, a worker thread discovers the read event during its continuous polling, first reads the content into a ByteBuf, and then invokes the business handler we defined to process what was read.
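The overall read path in step 2 -- bytes copied into a buffer, then handed to a user handler -- can be sketched with nothing but the JDK. The `Handler` interface and class below are illustrative stand-ins for a pipeline's business handler (such as the EchoServerHandler mentioned above), not Netty classes; a `Pipe` plays the role of the non-blocking socket:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

public class ReadLoopSketch {
    interface Handler { void channelRead(ByteBuffer msg); }

    // Simplified analogue of NioByteUnsafe.read(): drain the channel,
    // firing the handler once per non-empty buffer read.
    static int readLoop(Pipe.SourceChannel ch, Handler handler) throws Exception {
        int messages = 0;
        while (true) {
            ByteBuffer buf = ByteBuffer.allocate(64); // allocHandle.allocate(allocator)
            int n = ch.read(buf);                     // doReadBytes(byteBuf)
            if (n <= 0) break;                        // nothing read (or EOF): stop the loop
            buf.flip();
            handler.channelRead(buf);                 // pipeline.fireChannelRead(byteBuf)
            messages++;
        }
        return messages;                              // analogous to allocHandle's message count
    }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().write(ByteBuffer.wrap("hello netty".getBytes(StandardCharsets.UTF_8)));
        pipe.sink().close();
        pipe.source().configureBlocking(false);       // mimic the non-blocking socket
        int n = readLoop(pipe.source(), msg ->
                System.out.println(StandardCharsets.UTF_8.decode(msg)));
        System.out.println("messages: " + n);
    }
}
```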

Reposted from blog.csdn.net/chengzhang1989/article/details/80365015