Dubbo source code analysis 4: The service provider receives the request and returns the result


Introduction

The previous article showed how a NettyServer is started. Anyone who has written a Netty program knows that business logic is handled by implementing the ChannelHandler interface and adding the handler to the ChannelPipeline; a request is then processed by each ChannelHandler on the ChannelPipeline in turn, a typical chain-of-responsibility pattern.


But when the NettyServer was started in the previous section, we saw that only one ChannelHandler, NettyServerHandler (which implements the io.netty.channel.ChannelHandler interface), was added to the ChannelPipeline. In fact, NettyServerHandler does almost nothing itself; it simply forwards the request to Dubbo's own ChannelHandler (which implements org.apache.dubbo.remoting.ChannelHandler).

Note that a ChannelHandler interface is defined in both Netty and Dubbo. Netty's ChannelHandlers execute in a chain-of-responsibility style, while Dubbo's ChannelHandlers execute in a decorator style. Dubbo defines its own ChannelHandler interface mainly to avoid coupling to a specific communication framework; after all, Netty is not the only network communication framework.

So a real request passes through the following ChannelHandlers, of which only NettyServerHandler implements the ChannelHandler interface from the Netty framework; the rest implement Dubbo's ChannelHandler interface.

The roles of these ChannelHandlers are:

| ChannelHandler | Role |
| --- | --- |
| NettyServerHandler | Handles Netty server events such as connect, disconnect, read, write, and exceptions |
| MultiMessageHandler | Batch-processes multi-messages |
| HeartbeatHandler | Handles heartbeats |
| AllChannelHandler | Dispatches all Netty requests to the business thread pool |
| DecodeHandler | Decodes the message |
| HeaderExchangeHandler | Encapsulates Request/Response handling, as well as telnet requests |
| ExchangeHandlerAdapter | Finds the service method and invokes it |
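To make the decorator execution style concrete, here is a minimal sketch of handlers that wrap one another the way Dubbo's ChannelHandlers do. The interface and class names are invented for illustration; they are not Dubbo's real types:

```java
// Illustrative only: a simplified stand-in for Dubbo's handler interface.
interface Handler {
    String received(String message);
}

// A decorating handler: it holds the next handler and delegates to it,
// the way DecodeHandler delegates to HeaderExchangeHandler in Dubbo.
class DecodeStep implements Handler {
    private final Handler next; // the handler this one decorates

    DecodeStep(Handler next) {
        this.next = next;
    }

    public String received(String message) {
        // do this step's work, then delegate inward
        return next.received("decoded(" + message + ")");
    }
}

// The innermost handler, like ExchangeHandlerAdapter at the end of the chain.
class TerminalStep implements Handler {
    public String received(String message) {
        return "handled:" + message;
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        // each handler wraps the next at construction time
        Handler chain = new DecodeStep(new TerminalStep());
        System.out.println(chain.received("req")); // prints handled:decoded(req)
    }
}
```

The key difference from Netty's pipeline: here the "next" handler is fixed at construction time by wrapping, rather than discovered at runtime from a pipeline.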

Receive request

NettyServer is started while the Dubbo service is being exported, i.e., the NettyServer#doOpen method is executed:

protected void doOpen() throws Throwable {
    bootstrap = new ServerBootstrap();

    bossGroup = new NioEventLoopGroup(1, new DefaultThreadFactory("NettyServerBoss", true));
    workerGroup = new NioEventLoopGroup(getUrl().getPositiveParameter(Constants.IO_THREADS_KEY, Constants.DEFAULT_IO_THREADS),
            new DefaultThreadFactory("NettyServerWorker", true));

    final NettyServerHandler nettyServerHandler = new NettyServerHandler(getUrl(), this);
    channels = nettyServerHandler.getChannels();

    bootstrap.group(bossGroup, workerGroup)
            .channel(NioServerSocketChannel.class)
            .childOption(ChannelOption.TCP_NODELAY, Boolean.TRUE)
            .childOption(ChannelOption.SO_REUSEADDR, Boolean.TRUE)
            .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
            .childHandler(new ChannelInitializer<NioSocketChannel>() {
                @Override
                protected void initChannel(NioSocketChannel ch) throws Exception {
                    // FIXME: should we use getTimeout()?
                    int idleTimeout = UrlUtils.getIdleTimeout(getUrl());
                    NettyCodecAdapter adapter = new NettyCodecAdapter(getCodec(), getUrl(), NettyServer.this);
                    ch.pipeline()//.addLast("logging",new LoggingHandler(LogLevel.INFO))//for debug
                            .addLast("decoder", adapter.getDecoder()) // decoder handler
                            .addLast("encoder", adapter.getEncoder()) // encoder handler
                            // idle-connection (heartbeat) check handler
                            .addLast("server-idle-handler", new IdleStateHandler(0, 0, idleTimeout, MILLISECONDS))
                            .addLast("handler", nettyServerHandler);
                }
            });
    // bind
    ChannelFuture channelFuture = bootstrap.bind(getBindAddress());
    channelFuture.syncUninterruptibly();
    channel = channelFuture.channel();
}

It can be seen that NettyServerHandler is the Handler that handles the business logic. When a message is received, the NettyServerHandler#channelRead method is invoked:

// NettyServerHandler.java
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    NettyChannel channel = NettyChannel.getOrAddChannel(ctx.channel(), url, handler);
    try {
        handler.received(channel, msg);
    } finally {
        NettyChannel.removeChannelIfDisconnected(ctx.channel());
    }
}

The received call then flows through MultiMessageHandler and HeartbeatHandler in turn. These two have little to do with the main flow, so I won't analyze them in detail and will go straight to AllChannelHandler.

In AllChannelHandler, the request is dispatched to the business thread pool for execution (different implementations can be configured through Dubbo SPI; a later article will analyze the thread model and thread pool strategies in detail).

// AllChannelHandler.java
public void received(Channel channel, Object message) throws RemotingException {
    ExecutorService cexecutor = getExecutorService();
    try {
        // dispatch request and response messages to the thread pool
        cexecutor.execute(new ChannelEventRunnable(channel, handler, ChannelState.RECEIVED, message));
    } catch (Throwable t) {
        // exception handling omitted
    }
}
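As a usage note, which handler sits at this step is decided by the Dispatcher SPI, and both it and the thread pool can be chosen in configuration. A sketch in Dubbo's XML config (the attribute values shown here are Dubbo's documented defaults: the "all" dispatcher backed by a fixed pool of 200 threads):

```xml
<!-- dispatcher="all" selects AllDispatcher, which creates AllChannelHandler;
     threadpool/threads control the business thread pool it dispatches to -->
<dubbo:protocol name="dubbo" dispatcher="all" threadpool="fixed" threads="200" />
```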

Next comes DecodeHandler, where the message is decoded. Because both the service provider and the service consumer use AllChannelHandler by default, the message type may be either Request or Response.

// DecodeHandler.java
public void received(Channel channel, Object message) throws RemotingException {
    if (message instanceof Decodeable) {
        // decode objects that implement the Decodeable interface
        decode(message);
    }

    if (message instanceof Request) {
        // decode the data field of the Request
        decode(((Request) message).getData());
    }

    if (message instanceof Response) {
        // decode the result field of the Response
        decode(((Response) message).getResult());
    }

    // after decoding, the next stop is HeaderExchangeHandler
    handler.received(channel, message);
}

The next stop is HeaderExchangeHandler. It contains a lot of code, mainly to encapsulate and process Request/Response; the request-response mechanism is implemented in this Handler.

If the request does not require a response, the ExchangeHandlerAdapter#received method (an anonymous inner class in DubboProtocol) is called; if the request does require a response, ExchangeHandlerAdapter#reply is called.

Finally we reach the terminus, ExchangeHandlerAdapter:

// ExchangeHandlerAdapter, an anonymous inner class in DubboProtocol.java
@Override
public CompletableFuture<Object> reply(ExchangeChannel channel, Object message) throws RemotingException {
    if (!(message instanceof Invocation)) {
        throw new RemotingException(channel, "Unsupported request: "
                + (message == null ? null : (message.getClass().getName() + ": " + message))
                + ", channel: consumer: " + channel.getRemoteAddress() + " --> provider: " + channel.getLocalAddress());
    }

    Invocation inv = (Invocation) message;
    // Get the Invoker instance.
    // At service export time, the serviceKey -> Exporter mapping was saved in exporterMap;
    // here the serviceKey is derived from inv to find the Exporter, and from it the Invoker.
    Invoker<?> invoker = getInvoker(channel, inv);
    // need to consider backward-compatibility if it's a callback

    // callback-related code omitted

    RpcContext rpcContext = RpcContext.getContext();
    // context information is stored in a ThreadLocal
    rpcContext.setRemoteAddress(channel.getRemoteAddress());
    // invoke the concrete service through the Invoker
    // (here it is an AbstractProxyInvoker)
    Result result = invoker.invoke(inv);

    // asynchronous execution
    if (result instanceof AsyncRpcResult) {
        // thenApply is like map on a Stream: it transforms the value
        return ((AsyncRpcResult) result).getResultFuture().thenApply(r -> (Object) r);
    } else {
        // synchronous execution: return an already-completed future
        return CompletableFuture.completedFuture(result);
    }
}

It mainly uses the Invocation object (which encapsulates the requested method name, parameter types, and arguments) to find the corresponding Invoker, and then calls the Invoker#invoke method.
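As a side note on the thenApply call in the async branch above: it behaves like map on a Stream, transforming the future's value once it completes. A tiny standalone example using only java.util.concurrent:

```java
import java.util.concurrent.CompletableFuture;

public class ThenApplyDemo {
    public static void main(String[] args) {
        // an already-completed future, like the synchronous branch in reply()
        CompletableFuture<Integer> future = CompletableFuture.completedFuture(21);
        // thenApply transforms the eventual value; DubboProtocol uses the same
        // idea to widen the result type to Object
        CompletableFuture<Object> widened = future.thenApply(r -> (Object) (r * 2));
        System.out.println(widened.join()); // prints 42
    }
}
```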

When the service was exported, this mapping relationship was stored in the following Map:

public abstract class AbstractProtocol implements Protocol {
    protected final Map<String, Exporter<?>> exporterMap = new ConcurrentHashMap<String, Exporter<?>>();
}
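As an illustration of that lookup, here is a small sketch. The key format is an approximation (the real key Dubbo builds is along the lines of group/interfaceName:version:port, with absent parts omitted), and all names below are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ServiceKeyDemo {
    // Approximation of Dubbo's serviceKey format: "group/interface:version:port".
    // The real key builder omits group/version when they are not set.
    static String serviceKey(String group, String iface, String version, int port) {
        return group + "/" + iface + ":" + version + ":" + port;
    }

    public static void main(String[] args) {
        // stand-in for exporterMap (values are Exporter instances in Dubbo)
        Map<String, String> exporterMap = new ConcurrentHashMap<>();

        // at export time: store the mapping
        String key = serviceKey("demo", "com.example.DemoService", "1.0.0", 20880);
        exporterMap.put(key, "exporter-for-DemoService");

        // at request time: rebuild the key from the Invocation and look it up
        System.out.println(exporterMap.get(key)); // prints exporter-for-DemoService
    }
}
```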

As we learned during service export, the innermost Invoker is the original AbstractProxyInvoker, which is then wrapped layer by layer by various decorators, a typical decorator pattern.

When the following method is called (which invokes the local method to get the result):

// DubboProtocol
Result result = invoker.invoke(inv);

ProtocolFilterWrapper$1 is an anonymous inner class; each layer of these anonymous inner classes calls an implementation of the Filter interface, and the chain finally reaches the AbstractProxyInvoker#doInvoke method. As mentioned during service export, this Invoker is created by JavassistProxyFactory. I will cover filters in detail later.

public class JavassistProxyFactory extends AbstractProxyFactory {

    /**
     * On the provider side, wrap the service object into an Invoker.
     */
    @Override
    public <T> Invoker<T> getInvoker(T proxy, Class<T> type, URL url) {
        // TODO Wrapper cannot handle this scenario correctly: the classname contains '$'
        final Wrapper wrapper = Wrapper.getWrapper(proxy.getClass().getName().indexOf('$') < 0 ? proxy.getClass() : type);
        // override the doInvoke method of AbstractProxyInvoker
        return new AbstractProxyInvoker<T>(proxy, type, url) {
            @Override
            protected Object doInvoke(T proxy, String methodName,
                                      Class<?>[] parameterTypes,
                                      Object[] arguments) throws Throwable {
                // this call executes the local method
                return wrapper.invokeMethod(proxy, methodName, parameterTypes, arguments);
            }
        };
    }

}

The Wrapper can find the matching service implementation method by method name, parameter types, and arguments, and execute it.

This Wrapper actually wraps the implementation class of the service interface, which avoids calling the implementation class through reflection and improves performance.
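To show why this is faster than reflection, here is a hand-written approximation of the kind of dispatch code the Javassist-generated Wrapper produces: it compares method names and makes direct calls, with no java.lang.reflect involved. The service class and method below are invented for illustration, not Dubbo's generated code:

```java
public class WrapperSketch {
    // a stand-in for a user's service implementation
    static class DemoService {
        public String sayHello(String name) {
            return "Hello " + name;
        }
    }

    // generated-code style dispatch: compare the method name, then call directly
    static Object invokeMethod(DemoService target, String methodName, Object[] args) {
        if ("sayHello".equals(methodName)) {
            return target.sayHello((String) args[0]);
        }
        throw new IllegalArgumentException("No such method: " + methodName);
    }

    public static void main(String[] args) {
        // direct call through the dispatcher, no Method.invoke involved
        System.out.println(invokeMethod(new DemoService(), "sayHello", new Object[]{"dubbo"}));
    }
}
```

The real Wrapper generates one such dispatcher per service class at export time, so every subsequent request pays only the cost of a string comparison plus a direct method call.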

When AbstractProxyInvoker#doInvoke executes, the requested method is called and its result is returned.

Finally, to summarize, the handling of a request goes through:

  1. NettyServerHandler#channelRead
  2. NettyServer(AbstractPeer#received)
  3. MultiMessageHandler#received
  4. HeartbeatHandler#received
  5. AllChannelHandler#received (AllChannelHandler is the default; the thread model and thread pool strategy can be chosen via SPI)
  6. DecodeHandler#received
  7. HeaderExchangeHandler#received
  8. ExchangeHandlerAdapter#reply

When HeaderExchangeHandler receives the return value, it calls the channel.send(res) method.

So when the result is sent back, the NettyServerHandler#write method is invoked first.

Return response

The entire call chain is as follows. I won't trace it step by step here; it is easy to follow on your own:

  1. NettyServerHandler#write
  2. NettyServer(AbstractPeer#sent)
  3. MultiMessageHandler#sent
  4. HeartbeatHandler#sent
  5. AllChannelHandler#sent
  6. DecodeHandler#sent
  7. HeaderExchangeHandler#sent
  8. ExchangeHandlerAdapter#sent (an anonymous inner class in DubboProtocol; the implementation is empty)



Origin blog.csdn.net/zzti_erlie/article/details/108189766