The Netty Reactor Threading Model in Detail

I. Introduction

1. What is the Reactor pattern?

     The Reactor pattern (also called reactor mode) is an event-driven pattern for handling service requests that arrive at a server concurrently from one or more inputs. A Service Handler demultiplexes the incoming requests and dispatches them synchronously to the associated handlers. Key points:

(1) Event-driven

(2) Handles multiple inputs

(3) Demultiplexes events and dispatches them to the corresponding Handler

2. Main components of the Reactor pattern

(1) Reactor

     Responds to events and dispatches each event to the Handler bound to it, which then handles the event. In Netty this corresponds to NioEventLoop.run() and processSelectedKeys().

(2) Handler

     The event handler. It is bound to a type of event and is responsible for executing the task that handles that event. In Netty this corresponds to handlers such as IdleStateHandler.

(3) Acceptor

     Strictly speaking the Acceptor is also a handler, but it is special enough to describe separately: it handles the Reactor's connection-accept events and is responsible for initializing the selector and the receive buffer queue. In Netty this corresponds to ServerBootstrapAcceptor.
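     To make the three roles concrete, here is a minimal single-threaded reactor written with plain Java NIO (an illustration only, not Netty source code): the select() loop plays the Reactor, the OP_ACCEPT branch plays the Acceptor, and the OP_READ branch plays a Handler. The port number is arbitrary.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class MiniReactor {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);   // the Acceptor's event

        for (;;) {
            selector.select();                                // wait for events
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {                            // dispatch each ready key
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                     // Acceptor: register the new client for READ
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                // Handler: consume the client's data
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) < 0) {
                        client.close();                       // peer closed the connection
                    }
                }
            }
        }
    }
}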

II. The Process

     Each thread in a Reactor thread pool has its own Selector, its own thread, and the event-loop dispatch logic. There is only one mainReactor, but there are usually several subReactors. The mainReactor thread is responsible for accepting client connection requests and handing each accepted SocketChannel over to a subReactor; the subReactor then handles all further communication with that client. A typical way to wire the two pools in user code is sketched below.
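     For example (a sketch only: the port is a placeholder, and InBoundHandlerB is the sample handler shown near the end of this article):

EventLoopGroup bossGroup = new NioEventLoopGroup();    // mainReactor threads
EventLoopGroup workGroup = new NioEventLoopGroup(4);   // subReactor threads
try {
    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, workGroup)
     .channel(NioServerSocketChannel.class)
     .childHandler(new ChannelInitializer<SocketChannel>() {
         @Override
         protected void initChannel(SocketChannel ch) {
             // runs on the subReactor eventLoop once the client channel has been registered
             ch.pipeline().addLast(new InBoundHandlerB());
         }
     });
    ChannelFuture f = b.bind(8080).sync();
    f.channel().closeFuture().sync();
} finally {
    bossGroup.shutdownGracefully();
    workGroup.shutdownGracefully();
}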

Source code walkthrough

1. Create the mainReactor and subReactor thread pools

	bossGroup = new NioEventLoopGroup();
	workGroup = new NioEventLoopGroup(4);
protected MultithreadEventExecutorGroup(int nThreads, ThreadFactory threadFactory, Object... args) {
     children = new SingleThreadEventExecutor[nThreads];
     ...
     for (int i = 0; i < nThreads; i ++) {
            ...
            children[i] = newChild(threadFactory, args);
            ...
     }
}
@Override
protected EventExecutor newChild(
        ThreadFactory threadFactory, Object... args) throws Exception {
    return new NioEventLoop(this, threadFactory, (SelectorProvider) args[0]);
}

     Here the mainReactor and subReactor thread pools are created, and each thread's eventLoop is created through newChild().

NioEventLoop(NioEventLoopGroup parent, ThreadFactory threadFactory, SelectorProvider selectorProvider) {
    super(parent, threadFactory, false);
    if (selectorProvider == null) {
        throw new NullPointerException("selectorProvider");
    }
    provider = selectorProvider;
    selector = openSelector();
}

     Each EventLoop thread has its own selector. At this point the eventLoop thread has not been started yet; it is started later, and once running it executes run(), which calls selector.select() internally. A simplified sketch of that loop follows.
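     The body of that loop looks roughly like this (a simplified paraphrase of NioEventLoop.run(), not the actual source; wake-up handling and the ioRatio logic are omitted):

@Override
protected void run() {
    for (;;) {
        select();               // blocks in selector.select() until I/O events or tasks arrive
        processSelectedKeys();  // dispatch ready selection keys to their channels (the Reactor part)
        runAllTasks();          // run tasks submitted via eventLoop.execute(), e.g. register0(promise)
    }
}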

2. The mainReactor selector registers for the OP_ACCEPT event, and the thread is started to loop on selector.select()

ChannelFuture regFuture = group().register(channel); // group() here is the bossGroup

@Override
public ChannelFuture register(Channel channel) {
    return next().register(channel);
}

     next() then executes:

@Override
public EventLoop next() {
    return (EventLoop) super.next();
}
private final class PowerOfTwoEventExecutorChooser implements EventExecutorChooser {
    @Override
    public EventExecutor next() {
        return children[childIndex.getAndIncrement() & children.length - 1];
    }
}

     This takes the next eventLoop from the thread pool, round-robin.
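     The expression children[childIndex.getAndIncrement() & children.length - 1] round-robins over the pool because the pool size here is a power of two, so children.length - 1 is an all-ones bitmask (Netty uses a modulo-based chooser when the size is not a power of two). A quick illustration:

// With 4 children, (i & 3) cycles 0, 1, 2, 3, 0, 1, ... exactly like i % 4, but without a division.
int nThreads = 4;                                 // the & trick requires a power-of-two size
for (int i = 0; i < 8; i++) {
    System.out.print((i & (nThreads - 1)) + " "); // prints: 0 1 2 3 0 1 2 3
}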

@Override
public ChannelFuture register(final Channel channel, final ChannelPromise promise) {
     ...
    channel.unsafe().register(this, promise);
    return promise;
}
@Override
public final void register(EventLoop eventLoop, final ChannelPromise promise) {
    ...
    AbstractChannel.this.eventLoop = eventLoop;

    if (eventLoop.inEventLoop()) {
        register0(promise);
    } else {
        try {
            eventLoop.execute(new OneTimeTask() {
                @Override
                public void run() {
                    register0(promise);
                }
            });
        } catch (Throwable t) {
            ...
        }
    }
}

     Here the server's NioServerSocketChannel is bound to the mainReactor's eventLoop. Because at startup it is the main thread that calls eventLoop.execute(), the mainReactor ends up starting only one of its threads.

@Override
public void execute(Runnable task) {
    boolean inEventLoop = inEventLoop();
    if (inEventLoop) {
        addTask(task);
    } else {
        startThread();
        addTask(task);
        ...
    }
        ...
}

     execute() calls startThread(), which officially starts the mainReactor's thread loop, and the register0(promise) task is added to the taskQueue so that the mainReactor loop will run it.

private void register0(ChannelPromise promise) {
    doRegister();
    neverRegistered = false;
    registered = true;
    safeSetSuccess(promise);
    pipeline.fireChannelRegistered();
    if (firstRegistration && isActive()) {
        pipeline.fireChannelActive();
    }
}
@Override
protected void doRegister() throws Exception {
    boolean selected = false;
    for (;;) {
        ...
        selectionKey = javaChannel().register(eventLoop().selector, 0, this);
	...	
    }
}

     Here the server's NioServerSocketChannel is registered on the selector of the mainReactor's eventLoop with an interest-ops value of 0, binding the server channel to the mainReactor thread. The interest ops are then changed to OP_ACCEPT (16) in doBeginRead(), reached through doBind() -> doBind0() -> channel.bind() -> ... -> next.invokeBind() -> HeadContext.bind() -> unsafe.bind() -> pipeline.fireChannelActive() -> channel.read() -> ... -> doBeginRead().

@Override
protected void doBeginRead() throws Exception {
    ...
    final int interestOps = selectionKey.interestOps();
    if ((interestOps & readInterestOp) == 0) {
        selectionKey.interestOps(interestOps | readInterestOp);
    }
}
     From this point on, the mainReactor's eventLoop loops in run(), repeatedly executing selector.select().
     Note: the value of readInterestOp comes from the constructor used to create the NioServerSocketChannel:
public NioServerSocketChannel(ServerSocketChannel channel) {
    super(null, channel, SelectionKey.OP_ACCEPT);
    config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}
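     Putting the constructor and doBeginRead() together, the interest-op update for the server channel works out like this (OP_ACCEPT and OP_READ are the standard java.nio SelectionKey constants):

int interestOps = 0;                           // the channel was registered with ops = 0 in doRegister()
int readInterestOp = SelectionKey.OP_ACCEPT;   // 16, passed in by the NioServerSocketChannel constructor
if ((interestOps & readInterestOp) == 0) {     // 0 & 16 == 0, so OP_ACCEPT is not yet set
    interestOps |= readInterestOp;             // 0 | 16 == 16 -> the selector now watches for ACCEPT events
}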

3. The subReactor registers for the OP_READ event

     When a client connection is accepted, the ServerBootstrapAcceptor registers the client's Channel on a subReactor thread, binding the channel to that thread's selector and listening for the client channel's OP_READ event.

if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
    unsafe.read();
}

     When it detects a client connection, the server executes the read() of NioMessageUnsafe:

@Override
public void read() {
    ...
    int localRead = doReadMessages(readBuf);
    ...
    for (int i = 0; i < size; i ++) {
        pipeline.fireChannelRead(readBuf.get(i));
    }
    ...
    pipeline.fireChannelReadComplete();
    ...
}

(1) doReadMessages()

@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = javaChannel().accept();
    ...
    buf.add(new NioSocketChannel(this, ch));
    ...
}
public NioSocketChannel(Channel parent, SocketChannel socket) {
    super(parent, socket);
    config = new NioSocketChannelConfig(this, socket.socket());
}
protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) {
    super(parent, ch, SelectionKey.OP_READ);
}

     This sets the client channel's readInterestOp to OP_READ (1).

(2) pipeline.fireChannelRead()

private static class ServerBootstrapAcceptor extends ChannelInboundHandlerAdapter {
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        final Channel child = (Channel) msg;
        child.pipeline().addLast(childHandler);

        for (Entry<ChannelOption<?>, Object> e: childOptions) {
            try {
                if (!child.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
                    logger.warn("Unknown channel option: " + e);
                }
            } catch (Throwable t) {
                logger.warn("Failed to set a channel option: " + child, t);
            }
        }

        for (Entry<AttributeKey<?>, Object> e: childAttrs) {
            child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
        }

        try {
            childGroup.register(child).addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    if (!future.isSuccess()) {
                        forceClose(child, future.cause());
                    }
                }
            });
        } catch (Throwable t) {
            forceClose(child, t);
        }
    }
}

     The ServerBootstrapAcceptor not only binds the client channel to the subReactor, it also initializes some of the client channel's options and attributes.
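     The childOptions and childAttrs applied here come from the user's bootstrap configuration, for example (the option and attribute below are placeholders, set on the ServerBootstrap instance from the earlier sketch):

b.childOption(ChannelOption.TCP_NODELAY, true)                   // copied onto every accepted child channel
 .childAttr(AttributeKey.valueOf("clientName"), "placeholder");  // stored as an attribute of the child channel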

@Override
public ChannelFuture register(Channel channel) {
    return next().register(channel);
}

     Registration then proceeds the same way as above, except that the thread pool used is the subReactor pool instead of the mainReactor pool. A thread is taken from the subReactor pool, the client channel is bound to that thread's selector, and the client channel is initially registered for interest ops 0.

(3) pipeline.fireChannelReadComplete()

@Override
public ChannelPipeline fireChannelReadComplete() {
    head.fireChannelReadComplete();
    if (channel.config().isAutoRead()) {
        read();
    }
    return this;
}

     read() -> tail.read() -> next.invokeRead() -> HeadContext.read() -> ... -> doBeginRead()

@Override
protected void doBeginRead() throws Exception {
    ...
    final int interestOps = selectionKey.interestOps();
    if ((interestOps & readInterestOp) == 0) {
        selectionKey.interestOps(interestOps | readInterestOp);
    }
}

     Here the interest ops are changed to OP_READ (1).

4. The subReactor processes the read event

if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
    unsafe.read();
    ...
}

     This enters NioByteUnsafe's read() method:

@Override
public final void read() {
    ...
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    ...
    byteBuf = allocHandle.allocate(allocator);
    ...
    pipeline.fireChannelRead(byteBuf);
    ...
}
@Override
public ChannelPipeline fireChannelRead(Object msg) {
    head.fireChannelRead(msg);
    return this;
}
private void invokeChannelRead(Object msg) {
    try {
        ((ChannelInboundHandler) handler()).channelRead(this, msg);
    } catch (Throwable t) {
        notifyHandlerException(t);
    }
}
public class InBoundHandlerB extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("InBoundHandlerB: " + msg);
        super.channelRead(ctx, msg);
    }
}

     This is where the message received from the client is handled by the user's pipeline handlers.

Summary

     As many selectors are created as there are threads in the Reactor thread pools. The server channel is bound to an eventLoop of the mainReactor and only cares about the server channel's ACCEPT event; each client channel is bound to an eventLoop of a subReactor and only cares about that client channel's READ event.

     The mainReactor's selector and each subReactor's selector run their own loops: the mainReactor loops on ACCEPT events, while the subReactors loop on READ events. After the mainReactor accepts a client connection, it executes ServerBootstrapAcceptor's channelRead() method, which binds the client connection to a subReactor.

Origin juejin.im/post/5d878a23f265da03a65354c2