Netty Source Analysis Series: Client Connection Acceptance and Read I/O

Foreword

     In the previous chapter, "Netty Source Analysis Series: The Server Startup Process", we completed server startup. With the server up, where do client connections and read I/O events begin? And how does Netty's boss thread, on receiving a client TCP connection request, register the new link with the worker thread pool? With these questions in mind, let's start parsing client connection acceptance and read/write I/O.

1. Starting from NioEventLoop.run()
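     For orientation: NioEventLoop.run() is a select-then-dispatch loop. Roughly (a simplified sketch only; the select strategy, wakeup handling, and I/O-ratio bookkeeping of the real source are elided):

@Override
protected void run() {
    for (;;) {
        // block until I/O events are ready or tasks are queued
        select(wakenUp.getAndSet(false));
        // dispatch ready selection keys -- the part we follow below
        processSelectedKeys();
        // then drain the event loop's task queue
        runAllTasks();
    }
}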

     Inside that loop, processSelectedKeys() decides how the ready keys are iterated:
private void processSelectedKeys() {
    if (selectedKeys != null) {
        processSelectedKeysOptimized(selectedKeys.flip());
    } else {
        processSelectedKeysPlain(selector.selectedKeys());
    }
}

     selectedKeys here is the optimized key set: if it is non-null, the optimized path is taken and we proceed to processSelectedKeysOptimized(); otherwise Netty falls back to processSelectedKeysPlain() over the selector's ordinary selectedKeys set.
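     Why "optimized"? When the NioEventLoop opens its Selector, Netty tries to swap the selector's internal selected-key HashSet for an array-backed set, so iteration becomes a plain array walk. The idea, sketched from NioEventLoop.openSelector() (reflection error handling omitted; the fields belong to sun.nio.ch.SelectorImpl):

// needs java.lang.reflect.Field
SelectedSelectionKeySet selectedKeySet = new SelectedSelectionKeySet();
Class<?> selectorImplClass = Class.forName("sun.nio.ch.SelectorImpl", false,
        ClassLoader.getSystemClassLoader());
// Both the private and the public view of the selected keys are replaced,
// so Selector.selectedKeys() and the selector's internal add() hit the same array.
Field selectedKeysField = selectorImplClass.getDeclaredField("selectedKeys");
Field publicSelectedKeysField = selectorImplClass.getDeclaredField("publicSelectedKeys");
selectedKeysField.setAccessible(true);
publicSelectedKeysField.setAccessible(true);
selectedKeysField.set(selector, selectedKeySet);
publicSelectedKeysField.set(selector, selectedKeySet);

     With that in place, the optimized iteration looks like this: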

private void processSelectedKeysOptimized(SelectionKey[] selectedKeys) {
    for (int i = 0;; i ++) {
        final SelectionKey k = selectedKeys[i];
        if (k == null) {
            break;
        }
        selectedKeys[i] = null;

        final Object a = k.attachment();

        if (a instanceof AbstractNioChannel) {
               processSelectedKey(k, (AbstractNioChannel) a);
        } else {
               @SuppressWarnings("unchecked")
               NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
               processSelectedKey(k, task);
        }
            ...
}
}

     k.attachment() retrieves the object attached to the selection key. Where was it attached? During registration in the previous chapter, "Netty Source Analysis Series: The Server Startup Process": the third argument of the JDK register() call below is the attachment, and it is the NioServerSocketChannel itself.

@Override
protected void doRegister() throws Exception {
    boolean selected = false;
    for (;;) {
	...
	selectionKey = javaChannel().register(eventLoop().selector, 0, this);
	...        
    }
}

     Back to k.attachment(): having retrieved the attachment, the loop checks whether it is an AbstractNioChannel. From the registration above we know it is; anything else attached would be treated as a NioTask. So we follow the AbstractNioChannel branch into the processSelectedKey() method.

private static void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final NioUnsafe unsafe = ch.unsafe();
    ...
    int readyOps = k.readyOps();
    if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
        unsafe.read();
        if (!ch.isOpen()) {
            return;
        }
    }
    if ((readyOps & SelectionKey.OP_WRITE) != 0) {
        ch.unsafe().forceFlush();
    }
    if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
        int ops = k.interestOps();
        ops &= ~SelectionKey.OP_CONNECT;
        k.interestOps(ops);
        unsafe.finishConnect();
    }
    ...
}

     If the ready operation is a read or an accept, we enter unsafe.read(). Two classes implement this method: NioByteUnsafe, an inner class of AbstractNioByteChannel, and NioMessageUnsafe, an inner class of AbstractNioMessageChannel. Both outer classes are subclasses of AbstractNioChannel, whose AbstractNioUnsafe implements NioUnsafe. So which one runs here? Let's check which unsafe gets created when a NioServerSocketChannel is constructed.

public class NioServerSocketChannel extends AbstractNioMessageChannel
                             implements io.netty.channel.socket.ServerSocketChannel {
        public NioServerSocketChannel() {
                this(newSocket(DEFAULT_SELECTOR_PROVIDER));
        }
}
public NioServerSocketChannel(ServerSocketChannel channel) {
        super(null, channel, SelectionKey.OP_ACCEPT);
        config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}
public abstract class AbstractNioMessageChannel extends AbstractNioChannel {
    protected AbstractNioMessageChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
          super(parent, ch, readInterestOp);
      }
}
public abstract class AbstractNioChannel extends AbstractChannel {
	protected AbstractNioChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
    		super(parent);
	}
}

public abstract class AbstractChannel extends DefaultAttributeMap implements Channel {
        protected AbstractChannel(Channel parent) {
                this.parent = parent;
                unsafe = newUnsafe();
                pipeline = new DefaultChannelPipeline(this);
        }
}

     NioServerSocketChannel is a subclass of AbstractNioMessageChannel, which is a subclass of AbstractNioChannel. newUnsafe() is an abstract method of AbstractChannel, and it is AbstractNioMessageChannel that implements it for this hierarchy. From that we can conclude: the read() invoked here is the one on AbstractNioMessageChannel's inner class NioMessageUnsafe.
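     For completeness, AbstractNioMessageChannel's override (the message-channel counterpart of the NioByteUnsafe version shown further down) is essentially:

@Override
protected AbstractNioUnsafe newUnsafe() {
    return new NioMessageUnsafe();
}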

private final class NioMessageUnsafe extends AbstractNioUnsafe {
    private final List<Object> readBuf = new ArrayList<Object>();
    @Override
    public void read() {
        ...
        for (;;) {
            int localRead = doReadMessages(readBuf);
            ...
        }
        setReadPending(false);
        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            pipeline.fireChannelRead(readBuf.get(i));
        }
        readBuf.clear();
        pipeline.fireChannelReadComplete();
        ...
    }
}

     This splits into two parts: processing the accepted messages, and firing the events down the pipeline.
          1. Processing the messages

@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = javaChannel().accept();
    ...
    buf.add(new NioSocketChannel(this, ch));
    return 1;
    ...
}

     accept() returns the client SocketChannel, which is wrapped in a NioSocketChannel and added to the readBuf list. Let's look at new NioSocketChannel().

public class NioSocketChannel extends AbstractNioByteChannel implements io.netty.channel.socket.SocketChannel {
	public NioSocketChannel(Channel parent, SocketChannel socket) {
    		super(parent, socket);
    		config = new NioSocketChannelConfig(this, socket.socket());
	}
}
public abstract class AbstractNioByteChannel extends AbstractNioChannel {
	protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) {
    		super(parent, ch, SelectionKey.OP_READ);
	}

    @Override
    protected AbstractNioUnsafe newUnsafe() {
    	return new NioByteUnsafe();
    }

    protected class NioByteUnsafe extends AbstractNioUnsafe {
	    @Override
	    public final void read() {
		    ...
	    }
    }
}

     AbstractNioByteChannel likewise extends AbstractNioChannel and supplies its own newUnsafe(). So we can infer: when a client first connects, the boss side runs AbstractNioMessageChannel's NioMessageUnsafe.read(); once the client starts sending data, the worker side runs AbstractNioByteChannel's NioByteUnsafe.read().
         2. Handling the events

for (int i = 0; i < size; i ++) {
    pipeline.fireChannelRead(readBuf.get(i));
}

@Override
public ChannelPipeline fireChannelRead(Object msg) {
    head.fireChannelRead(msg);
    return this;
}
@Override
public ChannelHandlerContext fireChannelRead(final Object msg) {
    final AbstractChannelHandlerContext next = findContextInbound();
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeChannelRead(msg);
    } else {
        executor.execute(new OneTimeTask() {
            @Override
            public void run() {
                next.invokeChannelRead(msg);
            }
        });
    }
    return this;
}

     Stepping through next in a debugger shows that the current handler is ServerBootstrapAcceptor, whose channelRead() handles this event. If you read the previous chapter, "Netty Source Analysis Series: The Server Startup Process", you will recognize it: init() added it via pipeline.addLast(new ServerBootstrapAcceptor(...)). And why is the ChannelInitializer added by p.addLast(new ChannelInitializer()) no longer in the pipeline? Because ChannelInitializer.channelRegistered() removes the initializer once it has run:

public final void channelRegistered(ChannelHandlerContext ctx) throws Exception {
    initChannel((C) ctx.channel());
    ctx.pipeline().remove(this);
    ctx.fireChannelRegistered();
}

     Let's continue into ServerBootstrapAcceptor's channelRead() method.

@Override
@SuppressWarnings("unchecked")
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;
    child.pipeline().addLast(childHandler);
    for (Entry<ChannelOption<?>, Object> e: childOptions) {
       try {
          if (!child.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
              logger.warn("Unknown channel option: " + e);
          }
        } catch (Throwable t) {
              logger.warn("Failed to set a channel option: " + child, t);
        }
    }
    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
         child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }
    try {
        childGroup.register(child).addListener(new ChannelFutureListener() {
           @Override
           public void operationComplete(ChannelFuture future) throws Exception {
               if (!future.isSuccess()) {
                   forceClose(child, future.cause());
                }
            }
        });
     } catch (Throwable t) {
           forceClose(child, t);
     }
}

     Three steps happen here:
         (1) childHandler is added to the client channel's pipeline. Where does it come from? From the very beginning, when we called serverBootstrap.childHandler(new IOChannelInitialize()); see the bootstrap sketch after this list.
         (2) The child options and attributes are applied.
         (3) The client Channel is registered with the worker thread pool.
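     For reference, a minimal sketch of the bootstrap this article assumes (IOChannelInitialize and IOHandler are the article's own example classes, not Netty API, and the IdleStateHandler timings are made up):

ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(bossGroup, workerGroup)
        .channel(NioServerSocketChannel.class)
        // IOChannelInitialize: the user's ChannelInitializer
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                // yields the pipeline discussed later:
                // HeadContext -> IdleStateHandler -> IOHandler -> TailContext
                ch.pipeline().addLast(new IdleStateHandler(60, 0, 0));
                ch.pipeline().addLast(new IOHandler());
            }
        });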

@Override
public ChannelFuture register(Channel channel) {
    return next().register(channel);
}

@Override
public EventLoop next() {
    return (EventLoop) super.next();
}
@Override
public EventExecutor next() {
    return chooser.next();
}
private final class GenericEventExecutorChooser implements EventExecutorChooser {
    @Override
    public EventExecutor next() {
        return children[Math.abs(childIndex.getAndIncrement() % children.length)];
    }
}

     A worker thread is chosen from the pool to execute the register: the chooser simply round-robins over children, with Math.abs() guarding against a negative index once the counter overflows.
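     As an aside, later Netty versions (see DefaultEventExecutorChooserFactory in 4.1) specialize the chooser when the pool size is a power of two, trading abs-plus-modulo for a bit mask; sketched here with this article's field names:

private final class PowerOfTwoEventExecutorChooser implements EventExecutorChooser {
    @Override
    public EventExecutor next() {
        // the mask keeps the index in [0, children.length) even after the
        // counter overflows, with no division and no Math.abs
        return children[childIndex.getAndIncrement() & children.length - 1];
    }
}

     Back to the register call chain: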

@Override
public ChannelFuture register(Channel channel) {
    return register(channel, new DefaultChannelPromise(channel, this));
}
@Override
public ChannelFuture register(final Channel channel, final ChannelPromise promise) {
	 ...
        channel.unsafe().register(this, promise);
        return promise;
}
@Override
public final void register(EventLoop eventLoop, final ChannelPromise promise) {
    ...
    AbstractChannel.this.eventLoop = eventLoop;
    if (eventLoop.inEventLoop()) {
        register0(promise);
    } else {
        try {
            eventLoop.execute(new OneTimeTask() {
                @Override
                public void run() {
                    register0(promise);
                }
            });
        } catch (Throwable t) {
            ...
        }
    }
}
@Override
protected void doRegister() throws Exception {
	...
	selectionKey = javaChannel().register(eventLoop().selector, 0, this);
	...
}

     From here on, the process matches the registration covered in "Netty Source Analysis Series: The Server Startup Process". The difference: at server startup the registration task runs on the boss thread pool's task queue, while a newly accepted client has register0() executed on the worker thread pool's task queue, registering the channel with that worker's Java NIO Selector. That answers two of the opening questions: how a client is accepted, and how the boss thread hands an accepted TCP connection over to the worker thread pool. One question remains: where do read and write I/O events start?
     Back to the code from the beginning of the article:

private void processSelectedKeysOptimized(SelectionKey[] selectedKeys) {
    for (int i = 0;; i ++) {
        final SelectionKey k = selectedKeys[i];
        if (k == null) {
            break;
        }
        selectedKeys[i] = null;

        final Object a = k.attachment();

        if (a instanceof AbstractNioChannel) {
               processSelectedKey(k, (AbstractNioChannel) a);
        } else {
               @SuppressWarnings("unchecked")
               NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
               processSelectedKey(k, task);
        }
             ...
    } 
}

     The boss thread has now accepted the client connection and handed registration off to a worker's task queue, adding a read-interest event in the process. The worker thread keeps looping over selectedKeys for pending events; when one arrives, it executes the processSelectedKey() method:

private static void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
	...
	int readyOps = k.readyOps();
	if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
    		unsafe.read();
    		...
	}
	...
}

     This time unsafe.read() resolves to AbstractNioByteChannel's NioByteUnsafe.read():

@Override
public final void read() {
    final ChannelConfig config = config();
    if (!config.isAutoRead() && !isReadPending()) {
        // ChannelConfig.setAutoRead(false) was called in the meantime
        removeReadOp();
        return;
    }
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final int maxMessagesPerRead = config.getMaxMessagesPerRead();
    RecvByteBufAllocator.Handle allocHandle = this.allocHandle;
    if (allocHandle == null) {
        this.allocHandle = allocHandle = config.getRecvByteBufAllocator().newHandle();
    }
    ByteBuf byteBuf = null;
    int messages = 0;
    boolean close = false;
    try {
        int totalReadAmount = 0;
        boolean readPendingReset = false;
        do {
            byteBuf = allocHandle.allocate(allocator);
            int writable = byteBuf.writableBytes();
            int localReadAmount = doReadBytes(byteBuf);
            if (localReadAmount <= 0) {
                // nothing was read, release the buffer
                byteBuf.release();
                byteBuf = null;
                close = localReadAmount < 0;
                break;
            }
            if (!readPendingReset) {
                readPendingReset = true;
                setReadPending(false);
            }
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;

            if (totalReadAmount >= Integer.MAX_VALUE - localReadAmount) {
                // avoid overflow
                totalReadAmount = Integer.MAX_VALUE;
                break;
            }
            totalReadAmount += localReadAmount;

            if (!config.isAutoRead()) {
                break;
            }

            if (localReadAmount < writable) {
                // the read filled less than the buffer could hold: socket drained
                break;
            }
        } while (++ messages < maxMessagesPerRead);

        pipeline.fireChannelReadComplete();
        allocHandle.record(totalReadAmount);

        if (close) {
            closeOnRead(pipeline);
            close = false;
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close);
    } finally {
        if (!config.isAutoRead() && !isReadPending()) {
            removeReadOp();
        }
    }
}

     This long method breaks down into several parts:
         1. A read loop capped at maxMessagesPerRead iterations (default 16); data not drained this round is picked up after the next select.
         2. Obtain the receive-buffer allocation handle: config.getRecvByteBufAllocator().newHandle().
         3. Allocate buffer space: allocHandle.allocate(allocator).
         4. Read data from the socket into the byteBuf: doReadBytes(byteBuf).
         5. Fire the read event to the next handler in the pipeline.
         6. When reading finishes, fire a read-complete event to the next handler. The remaining details we will parse in later articles.
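     One practical aside: the per-read cap from step 1 is tunable. In Netty 4.0.x something like the following should work (a hedged example; in 4.1 this option was deprecated in favor of configuring the RecvByteBufAllocator):

// allow a busy connection to drain more data per select wakeup
serverBootstrap.childOption(ChannelOption.MAX_MESSAGES_PER_READ, 32);

     Back to how the read event propagates: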

@Override
public ChannelPipeline fireChannelRead(Object msg) {
    head.fireChannelRead(msg);
    return this;
}
@Override
public ChannelHandlerContext fireChannelRead(final Object msg) {
    if (msg == null) {
        throw new NullPointerException("msg");
    }

    final AbstractChannelHandlerContext next = findContextInbound();
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeChannelRead(msg);
    } else {
        executor.execute(new OneTimeTask() {
            @Override
            public void run() {
                next.invokeChannelRead(msg);
            }
        });
    }
    return this;
}

     The inbound handler order for events is HeadContext -> IdleStateHandler -> IOHandler -> TailContext.

private void invokeChannelRead(Object msg) {
    try {
        ((ChannelInboundHandler) handler()).channelRead(this, msg);
    } catch (Throwable t) {
        notifyHandlerException(t);
    }
}

     First, into IdleStateHandler:

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    if (readerIdleTimeNanos > 0 || allIdleTimeNanos > 0) {
        reading = true;
        firstReaderIdleEvent = firstAllIdleEvent = true;
    }
    ctx.fireChannelRead(msg);
}

     It marks reading = true so the idle-state detection knows a read is in progress, then passes the read event on down the pipeline. Next is IOHandler's channelRead():

public class IOHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        super.channelRead(ctx, msg);
        System.out.println(msg.toString());
    }
	...
}
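     As an aside, the reading flag that IdleStateHandler set above feeds its idle detection; a typical consumer of the resulting idle events looks like this (an illustrative sketch, not code from this article's project):

public class IdleEventHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        // IdleStateHandler fires IdleStateEvents rather than acting on them itself
        if (evt instanceof IdleStateEvent
                && ((IdleStateEvent) evt).state() == IdleState.READER_IDLE) {
            ctx.close(); // e.g. drop a connection that has gone quiet
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}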

     The read event has reached the user-defined handler. With that, the question of where read I/O events start and how they travel to the user's handler is fully answered.

Summary:
     1. The boss thread listens for accept events on the NioServerSocketChannel, hands the accepted client channel to a worker's task queue, where the register0() method runs and the read-interest event is registered with the worker thread's Selector.
     2. The worker thread polls selectedKeys; when an event arrives, it reads the data into a buffer and fires it down the pipeline to the user's Handler.


Origin: juejin.im/post/5cecfd09f265da1ba647ccf2