Dubbo Learning Record (15) - Service Call [1] - Server-side Netty Handler Wrapping Process and Server-side Thread Model

Dubbo service call

I have written more than a dozen articles so far and have built up a reasonable understanding of how Dubbo works. The service call is the most important part of Dubbo, and it will take at least 5-6 articles to walk through this process clearly.

Server-side Netty handler wrapping

During service export, two things are done: one is to convert the service provider's information into a URL and register it with the registry; the other is to start the server. How Netty processes request data depends on each handler in the chain, so we need to understand the handlers that handle requests.

    private ExchangeServer createServer(URL url) {
        // ... part of the code omitted
        ExchangeServer server;
        try {
            // requestHandler is the request processor, of type ExchangeHandler;
            // it handles the requests received on the url's port
            server = Exchangers.bind(url, requestHandler);
        } catch (RemotingException e) {
            throw new RpcException("Fail to start server(url: " + url + ") " + e.getMessage(), e);
        }
        return server;
    }

First, a requestHandler is passed in; this handler is an anonymous subclass instance of ExchangeHandlerAdapter.

ExchangeHandlerAdapter

When a request arrives, received(Channel channel, Object message) is called first, which then calls reply(ExchangeChannel channel, Object message) to process the request. Here message is the request data and channel represents the long-lived connection to the client. Ultimately the service method is executed via reflection with Method.invoke. The working process:

  1. Type conversion: cast the Object message to an Invocation;
  2. Call getInvoker to obtain the Invoker (executor) of the service provider (this Invoker is wrapped in multiple layers);
  3. Set the remote address (remoteAddress) on RpcContext;
  4. Call the invoke method of the Invoker and obtain the result;
  5. Return a CompletableFuture instance;
private ExchangeHandler requestHandler = new ExchangeHandlerAdapter() {

        @Override
        public CompletableFuture<Object> reply(ExchangeChannel channel, Object message) throws RemotingException {
            Invocation inv = (Invocation) message;
            Invoker<?> invoker = getInvoker(channel, inv);
            // part of the code omitted
            RpcContext.getContext().setRemoteAddress(channel.getRemoteAddress());
            Result result = invoker.invoke(inv);
            return result.completionFuture().thenApply(Function.identity());
        }

        @Override
        public void received(Channel channel, Object message) throws RemotingException {
            if (message instanceof Invocation) {
                // processing logic when the server receives an Invocation
                reply((ExchangeChannel) channel, message);
            } else {
                super.received(channel, message);
            }
        }
};

Exchangers.bind(url, requestHandler)

process:

  1. Call getExchanger(url) to obtain the Exchanger extension implementation via the SPI mechanism; the default implementation is HeaderExchanger;
  2. Call the bind method to start Netty;
    public static ExchangeServer bind(URL url, ExchangeHandler handler) throws RemotingException {
        // codec indicates the protocol encoding
        url = url.addParameterIfAbsent(Constants.CODEC_KEY, "exchange");
        // obtain the HeaderExchanger from the url and bind through it, which yields a HeaderExchangeServer
        return getExchanger(url).bind(url, handler);
    }

HeaderExchanger#bind(URL, handler)

Work:

  1. First, the incoming ExchangeHandlerAdapter instance handler is wrapped in a HeaderExchangeHandler;
  2. The HeaderExchangeHandler instance is then wrapped in a DecodeHandler;
  3. The Transporters#bind method is called to create a NettyServer;
  4. The NettyServer instance is wrapped in a HeaderExchangeServer, which is returned;
    @Override
    public ExchangeServer bind(URL url, ExchangeHandler handler) throws RemotingException {
        // Netty will be started below
        // The handler is wrapped in two layers; when a request is processed, each layer of Handler is responsible for different logic
        // DecodeHandler is used for both connect and bind: decoding here means parsing the InputStream into an RpcInvocation object
        return new HeaderExchangeServer(Transporters.bind(url, new DecodeHandler(new HeaderExchangeHandler(handler))));
    }

Transporters.bind(url, handlers)

work process:

  1. If multiple handlers are passed in, they are wrapped in a ChannelHandlerDispatcher, which loops over each handler when an event arrives;
  2. Call getTransporter() to obtain a Transporter instance via the SPI mechanism; the default is a NettyTransporter;
  3. Call NettyTransporter#bind to create a NettyServer;
    public static Server bind(URL url, ChannelHandler... handlers) throws RemotingException {
        //....
        ChannelHandler handler;
        if (handlers.length == 1) {
            handler = handlers[0];
        } else {
            handler = new ChannelHandlerDispatcher(handlers);
        }
        // call NettyTransporter to bind; Transporter represents the network transport layer
        return getTransporter().bind(url, handler);
    }

NettyTransporter#bind(url, listener)

Creates a NettyServer instance:

public class NettyTransporter implements Transporter {

    public static final String NAME = "netty";

    @Override
    public Server bind(URL url, ChannelHandler listener) throws RemotingException {
        return new NettyServer(url, listener);
    }

    @Override
    public Client connect(URL url, ChannelHandler listener) throws RemotingException {
        return new NettyClient(url, listener);
    }

}

NettyServer(URL url, ChannelHandler handler)

  1. Call ChannelHandlers.wrap to wrap the DecodeHandler instance;
  2. Call the parent class constructor;
public class NettyServer extends AbstractServer implements Server {

    private Map<String, Channel> channels;

    private ServerBootstrap bootstrap;

    private io.netty.channel.Channel channel;

    private EventLoopGroup bossGroup;
    private EventLoopGroup workerGroup;

    public NettyServer(URL url, ChannelHandler handler) throws RemotingException {
        super(url, ChannelHandlers.wrap(handler, ExecutorUtil.setThreadName(url, SERVER_THREAD_POOL_NAME)));
    }

ChannelHandlers.wrap(ChannelHandler handler, URL url)

  1. Call ChannelHandlers#getInstance() to get the ChannelHandlers singleton (eagerly initialized);
  2. Call wrapInternal(handler, url) to wrap the DecodeHandler instance handler:
    public static ChannelHandler wrap(ChannelHandler handler, URL url) {
        return ChannelHandlers.getInstance().wrapInternal(handler, url);
    }

ChannelHandlers#wrapInternal(handler, url)

work process:

  1. Wrap the DecodeHandler instance into an AllChannelHandler via the SPI mechanism (Dispatcher);
  2. Then wrap the AllChannelHandler in a HeartbeatHandler, and the HeartbeatHandler in a MultiMessageHandler;
public class ChannelHandlers {

    // singleton
    private static ChannelHandlers INSTANCE = new ChannelHandlers();

    protected ChannelHandlers() {
    }

    public static ChannelHandler wrap(ChannelHandler handler, URL url) {
        return ChannelHandlers.getInstance().wrapInternal(handler, url);
    }

    protected static ChannelHandlers getInstance() {
        return INSTANCE;
    }

    static void setTestingChannelHandlers(ChannelHandlers instance) {
        INSTANCE = instance;
    }

    protected ChannelHandler wrapInternal(ChannelHandler handler, URL url) {
        // First, ExtensionLoader.getExtensionLoader(Dispatcher.class).getAdaptiveExtension().dispatch(handler, url)
        // yields an AllChannelHandler(handler, url).
        // The AllChannelHandler is then wrapped in a HeartbeatHandler, and the HeartbeatHandler in a MultiMessageHandler.
        // So when Netty receives data, it goes through MultiMessageHandler ---> HeartbeatHandler ---> AllChannelHandler,
        // and AllChannelHandler invokes the wrapped handler.
        return new MultiMessageHandler(new HeartbeatHandler(ExtensionLoader.getExtensionLoader(Dispatcher.class)
                .getAdaptiveExtension().dispatch(handler, url)));
    }
}

The Dispatcher here determines the server-side threading model; we will come back to it below.

AbstractServer(URL url, ChannelHandler handler)

Creating a NettyServer instance calls the constructor of its parent, the abstract class AbstractServer.
Workflow:

  1. Call the constructor of the parent class, which assigns the handler to the handler attribute of AbstractPeer;
  2. Get the local address localAddress;
  3. Obtain the IP the service is bound to;
  4. Obtain the port the service is bound to;
  5. Create an InetSocketAddress instance, used as the bind address for the socket;
  6. Call doOpen() to start Netty;
public AbstractServer(URL url, ChannelHandler handler) throws RemotingException {
        super(url, handler);
        localAddress = getUrl().toInetSocketAddress();

        String bindIp = getUrl().getParameter(Constants.BIND_IP_KEY, getUrl().getHost());
        int bindPort = getUrl().getParameter(Constants.BIND_PORT_KEY, getUrl().getPort());
        if (url.getParameter(ANYHOST_KEY, false) || NetUtils.isInvalidLocalHost(bindIp)) {
            bindIp = ANYHOST_VALUE;
        }
        bindAddress = new InetSocketAddress(bindIp, bindPort);
        this.accepts = url.getParameter(ACCEPTS_KEY, DEFAULT_ACCEPTS);
        this.idleTimeout = url.getParameter(IDLE_TIMEOUT_KEY, DEFAULT_IDLE_TIMEOUT);
        try {
            doOpen();
            if (logger.isInfoEnabled()) {
                logger.info("Start " + getClass().getSimpleName() + " bind " + getBindAddress() + ", export " + getLocalAddress());
            }
        } catch (Throwable t) {
           //....
        }
//...
    }

AbstractEndpoint(url, handler)

Calls the parent class constructor, sets the codec, the service timeout, and the connect timeout.

    public AbstractEndpoint(URL url, ChannelHandler handler) {
        super(url, handler);
        this.codec = getChannelCodec(url);
        this.timeout = url.getPositiveParameter(TIMEOUT_KEY, DEFAULT_TIMEOUT);
        this.connectTimeout = url.getPositiveParameter(Constants.CONNECT_TIMEOUT_KEY, Constants.DEFAULT_CONNECT_TIMEOUT);
    }

AbstractPeer(URL url, ChannelHandler handler)

Saves the url and the handler;

public abstract class AbstractPeer implements Endpoint, ChannelHandler {

    private final ChannelHandler handler;
    private volatile URL url;
    private volatile boolean closing;
    private volatile boolean closed;

    public AbstractPeer(URL url, ChannelHandler handler) {
        this.url = url;
        this.handler = handler;
    }

NettyServer#doOpen()

In the process above, calling new NettyServer assigns the MultiMessageHandler to the handler property of NettyServer; in that sense, NettyServer also acts as a MultiMessageHandler.
Workflow:

  1. Create a ServerBootstrap;
  2. Create the boss thread group (for connection events) and the worker (IO) thread group;
  3. Create a NettyServerHandler, passing in this, i.e. the NettyServer that delegates to the MultiMessageHandler, thereby wrapping it as a NettyServerHandler instance;
  4. Configure the server parameters and bind;
    @Override
    protected void doOpen() throws Throwable {
        bootstrap = new ServerBootstrap();

        bossGroup = new NioEventLoopGroup(1, new DefaultThreadFactory("NettyServerBoss", true));
        workerGroup = new NioEventLoopGroup(getUrl().getPositiveParameter(IO_THREADS_KEY, Constants.DEFAULT_IO_THREADS),
                new DefaultThreadFactory("NettyServerWorker", true));

        final NettyServerHandler nettyServerHandler = new NettyServerHandler(getUrl(), this);
        channels = nettyServerHandler.getChannels();

        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childOption(ChannelOption.TCP_NODELAY, Boolean.TRUE)
                .childOption(ChannelOption.SO_REUSEADDR, Boolean.TRUE)
                .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
                .childHandler(new ChannelInitializer<NioSocketChannel>() {
                    @Override
                    protected void initChannel(NioSocketChannel ch) throws Exception {
                        // FIXME: should we use getTimeout()?
                        int idleTimeout = UrlUtils.getIdleTimeout(getUrl());
                        NettyCodecAdapter adapter = new NettyCodecAdapter(getCodec(), getUrl(), NettyServer.this);
                        ch.pipeline()//.addLast("logging",new LoggingHandler(LogLevel.INFO))//for debug
                                .addLast("decoder", adapter.getDecoder())
                                .addLast("encoder", adapter.getEncoder())
                                .addLast("server-idle-handler", new IdleStateHandler(0, 0, idleTimeout, MILLISECONDS))
                                .addLast("handler", nettyServerHandler);
                    }
                });
        // bind
        ChannelFuture channelFuture = bootstrap.bind(getBindAddress());
        channelFuture.syncUninterruptibly();
        channel = channelFuture.channel();
    }

Summary of the server-side Netty handler wrapping process

  1. The requestHandler (an ExchangeHandlerAdapter instance) is wrapped in a HeaderExchangeHandler;
  2. The HeaderExchangeHandler instance is wrapped in a DecodeHandler;
  3. The DecodeHandler instance is wrapped in an AllChannelHandler;
  4. The AllChannelHandler instance is wrapped in a HeartbeatHandler, and the HeartbeatHandler in a MultiMessageHandler;
  5. The MultiMessageHandler instance is wrapped in a NettyServerHandler;
  6. Finally, the NettyServerHandler is registered as the handler on the Netty pipeline; the overall composition is sketched below.
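
To make the nesting concrete, here is a conceptual sketch of the chain built in steps 1-4, assuming the default "all" dispatcher. This is not Dubbo source code, and the import paths assume Dubbo 2.7.x; in the real code the AllChannelHandler is produced by the adaptive Dispatcher extension, and the outermost NettyServerHandler wraps the NettyServer itself.

    import org.apache.dubbo.common.URL;
    import org.apache.dubbo.remoting.ChannelHandler;
    import org.apache.dubbo.remoting.exchange.ExchangeHandler;
    import org.apache.dubbo.remoting.exchange.support.header.HeaderExchangeHandler;
    import org.apache.dubbo.remoting.exchange.support.header.HeartbeatHandler;
    import org.apache.dubbo.remoting.transport.DecodeHandler;
    import org.apache.dubbo.remoting.transport.MultiMessageHandler;
    import org.apache.dubbo.remoting.transport.dispatcher.all.AllChannelHandler;

    final class HandlerChainSketch {
        // Conceptual composition only: each constructor call mirrors one wrapping step above.
        static ChannelHandler buildServerChain(ExchangeHandler requestHandler, URL url) {
            return new MultiMessageHandler(
                    new HeartbeatHandler(
                            new AllChannelHandler(
                                    new DecodeHandler(
                                            new HeaderExchangeHandler(requestHandler)), url)));
        }
    }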

Server-side threading model

The relevant step is the one that wraps the DecodeHandler into an AllChannelHandler:

    protected ChannelHandler wrapInternal(ChannelHandler handler, URL url) {
        return new MultiMessageHandler(new HeartbeatHandler(ExtensionLoader.getExtensionLoader(Dispatcher.class)
                .getAdaptiveExtension().dispatch(handler, url)));
    }

The Dispatcher extension implementation is obtained via the SPI mechanism:

@SPI(AllDispatcher.NAME)
public interface Dispatcher {

    /**
     * dispatch the message to threadpool.
     *
     * @param handler
     * @param url
     * @return channel handler
     */
    @Adaptive({Constants.DISPATCHER_KEY, "dispather", "channel.handler"})
    // The last two parameters are reserved for compatibility with the old configuration
    ChannelHandler dispatch(ChannelHandler handler, URL url);

}

By default, the AllDispatcher#dispatch method is used;

AllDispatcher#dispatch(handler, url)

Create an instance of AllChannelHandler, wrapping a handler;

public class AllDispatcher implements Dispatcher {

    public static final String NAME = "all";

    @Override
    public ChannelHandler dispatch(ChannelHandler handler, URL url) {
        return new AllChannelHandler(handler, url);
    }

}

AllChannelHandler

When Netty receives data, the received method is called:


public class AllChannelHandler extends WrappedChannelHandler {

    public AllChannelHandler(ChannelHandler handler, URL url) {
        // a thread pool is created in the parent constructor
        super(handler, url);
    }

    // handle connection established
    @Override
    public void connected(Channel channel) throws RemotingException {
        ExecutorService executor = getExecutorService();
        try {
            executor.execute(new ChannelEventRunnable(channel, handler, ChannelState.CONNECTED));
        } catch (Throwable t) {
            throw new ExecutionException("connect event", channel, getClass() + " error when process connected event .", t);
        }
    }

    // handle disconnection
    @Override
    public void disconnected(Channel channel) throws RemotingException {
        ExecutorService executor = getExecutorService();
        try {
            executor.execute(new ChannelEventRunnable(channel, handler, ChannelState.DISCONNECTED));
        } catch (Throwable t) {
            throw new ExecutionException("disconnect event", channel, getClass() + " error when process disconnected event .", t);
        }
    }

    @Override
    public void received(Channel channel, Object message) throws RemotingException {
        ExecutorService executor = getExecutorService();
        try {
            // hand the message over to the thread pool
            executor.execute(new ChannelEventRunnable(channel, handler, ChannelState.RECEIVED, message));
        } catch (Throwable t) {
            //TODO A temporary solution to the problem that the exception information can not be sent to the opposite end after the thread pool is full. Need a refactoring
            //fix The thread pool is full, refuses to call, does not return, and causes the consumer to wait for time out
            if (message instanceof Request && t instanceof RejectedExecutionException) {
                Request request = (Request) message;
                if (request.isTwoWay()) {
                    String msg = "Server side(" + url.getIp() + "," + url.getPort() + ") threadpool is exhausted ,detail msg:" + t.getMessage();
                    Response response = new Response(request.getId(), request.getVersion());
                    response.setStatus(Response.SERVER_THREADPOOL_EXHAUSTED_ERROR);
                    response.setErrorMessage(msg);
                    channel.send(response);
                    return;
                }
            }
            throw new ExecutionException(message, channel, getClass() + " error when process received event .", t);
        }
    }

    // exception handling
    @Override
    public void caught(Channel channel, Throwable exception) throws RemotingException {
        ExecutorService executor = getExecutorService();
        try {
            executor.execute(new ChannelEventRunnable(channel, handler, ChannelState.CAUGHT, exception));
        } catch (Throwable t) {
            throw new ExecutionException("caught event", channel, getClass() + " error when process caught event .", t);
        }
    }
}

WrappedChannelHandler

This class is the parent class of AllChannelHandler.
Constructor workflow:

  1. Assign the DecodeHandler instance to the handler attribute;
  2. Obtain a thread pool via the SPI mechanism; the default ThreadPool extension is FixedThreadPool;
  3. Determine the componentKey: for a consumer it is "consumer", for a service provider it is "java.util.concurrent.ExecutorService";
  4. Obtain a DataStore via the SPI mechanism and call SimpleDataStore#put. Internally it is a two-level Map<String, Map>: the key of the first-level map is the componentKey, the key of the second-level map is the service port, and the value is the thread pool created in step 2;

public class WrappedChannelHandler implements ChannelHandlerDelegate {

    protected static final Logger logger = LoggerFactory.getLogger(WrappedChannelHandler.class);

    protected static final ExecutorService SHARED_EXECUTOR = Executors.newCachedThreadPool(new NamedThreadFactory("DubboSharedHandler", true));

    protected final ExecutorService executor;

    protected final ChannelHandler handler;

    protected final URL url;

    public WrappedChannelHandler(ChannelHandler handler, URL url) {
        this.handler = handler;
        this.url = url;
        executor = (ExecutorService) ExtensionLoader.getExtensionLoader(ThreadPool.class).getAdaptiveExtension().getExecutor(url);

        String componentKey = Constants.EXECUTOR_SERVICE_COMPONENT_KEY;
        if (CONSUMER_SIDE.equalsIgnoreCase(url.getParameter(SIDE_KEY))) {
            componentKey = CONSUMER_SIDE;
        }

        // DataStore is essentially a map; data is stored like: {"java.util.concurrent.ExecutorService": {"20880": executor}}
        // Why record this here? Presumably it is used later when requests are processed.
        DataStore dataStore = ExtensionLoader.getExtensionLoader(DataStore.class).getDefaultExtension();
        dataStore.put(componentKey, Integer.toString(url.getPort()), executor);
    }

}

FixedThreadPool

Create a thread pool;

public class FixedThreadPool implements ThreadPool {

    @Override
    public Executor getExecutor(URL url) {
        String name = url.getParameter(THREAD_NAME_KEY, DEFAULT_THREAD_NAME);
        int threads = url.getParameter(THREADS_KEY, DEFAULT_THREADS);
        int queues = url.getParameter(QUEUES_KEY, DEFAULT_QUEUES);
        return new ThreadPoolExecutor(threads, threads, 0, TimeUnit.MILLISECONDS,
                queues == 0 ? new SynchronousQueue<Runnable>() :
                        (queues < 0 ? new LinkedBlockingQueue<Runnable>()
                                : new LinkedBlockingQueue<Runnable>(queues)),
                new NamedInternalThreadFactory(name, true), new AbortPolicyWithReport(name, url));
    }

}
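
A hedged usage sketch of the above (the URL, interface name, and parameter values are only examples, and the import paths assume Dubbo 2.7.x): the thread count and queue size are read straight from URL parameters, and queues=0 selects a SynchronousQueue, so a task is rejected as soon as all threads are busy.

    import java.util.concurrent.Executor;

    import org.apache.dubbo.common.URL;
    import org.apache.dubbo.common.threadpool.support.fixed.FixedThreadPool;

    public class FixedThreadPoolDemo {
        public static void main(String[] args) {
            // threads=200, queues=0 -> 200 fixed threads, SynchronousQueue (no task buffering)
            URL url = URL.valueOf("dubbo://127.0.0.1:20880/com.demo.DemoService?threads=200&queues=0");
            Executor executor = new FixedThreadPool().getExecutor(url);
            executor.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
        }
    }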

SimpleDataStore

SimpleDataStore has a data attribute internally, which is a two-level Map<String, Map> structure;



public class SimpleDataStore implements DataStore {

    // <component name or id, <data-name, data-value>>
    private ConcurrentMap<String, ConcurrentMap<String, Object>> data =
            new ConcurrentHashMap<String, ConcurrentMap<String, Object>>();

    @Override
    public Map<String, Object> get(String componentName) {
        ConcurrentMap<String, Object> value = data.get(componentName);
        if (value == null) {
            return new HashMap<String, Object>();
        }

        return new HashMap<String, Object>(value);
    }

    @Override
    public Object get(String componentName, String key) {
        if (!data.containsKey(componentName)) {
            return null;
        }
        return data.get(componentName).get(key);
    }

    @Override
    public void put(String componentName, String key, Object value) {
        Map<String, Object> componentData = data.get(componentName);
        if (null == componentData) {
            data.putIfAbsent(componentName, new ConcurrentHashMap<String, Object>());
            componentData = data.get(componentName);
        }
        componentData.put(key, value);
    }

    @Override
    public void remove(String componentName, String key) {
        if (!data.containsKey(componentName)) {
            return;
        }
        data.get(componentName).remove(key);
    }

}
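
A minimal sketch of the two-level map semantics described above (the component key and port come from the earlier comment; the import paths assume Dubbo 2.7.x, and the string value stands in for a real executor):

    import org.apache.dubbo.common.store.DataStore;
    import org.apache.dubbo.common.store.support.SimpleDataStore;

    public class DataStoreDemo {
        public static void main(String[] args) {
            DataStore store = new SimpleDataStore();
            // first-level key: component name, second-level key: port, value: the executor
            store.put("java.util.concurrent.ExecutorService", "20880", "the-executor");
            System.out.println(store.get("java.util.concurrent.ExecutorService", "20880")); // the-executor
        }
    }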

Thread pools involved in Dubbo

On the server side, there are several thread pools:

  1. Netty's boss thread group (bossGroup), which handles connection (accept) events;
  2. Netty's worker thread group (workerGroup), which handles read/write (IO) events;
  3. The business thread pool that processes business logic, i.e. the thread pool associated with AllChannelHandler;

Dubbo's threading model


  1. An accept event is handled by Netty's boss thread group (bossGroup); after it is processed, the channel is registered with a worker in the worker thread group (workerGroup);
  2. A read/write event is handled by Netty's worker thread group, which is called the IO thread pool in Dubbo because it handles network IO;
  3. For a received-data event, the worker thread calls AllChannelHandler#received, which builds a ChannelEventRunnable from the current channel, the event type, and the request data, hands it to the business thread pool, and then returns to process other read/write events;
  • Why hand requests off to a business thread pool like this?
  • Because the worker (IO) thread pool usually has only about CPU cores + 1 threads. If IO threads executed time-consuming business logic, the whole provider/consumer would block and the service would be briefly unavailable. So business processing is handed from the IO thread to Dubbo's business thread pool, and the IO thread returns immediately to handle new read/write events, which improves overall throughput. That is why, when no thread model is specified, Dubbo defaults to the "all" model, i.e. an AllChannelHandler is created together with a business thread pool (the hand-off idea is sketched below).
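
The hand-off can be illustrated with a plain Netty handler. This is a generic sketch rather than Dubbo code; the pool size and the handleBusiness method are placeholders. The IO thread only enqueues the work and returns immediately, so slow business logic never stalls the event loop.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class OffloadingHandler extends ChannelInboundHandlerAdapter {
        private final ExecutorService businessPool = Executors.newFixedThreadPool(200);

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            businessPool.execute(() -> {
                Object result = handleBusiness(msg);   // potentially slow business logic
                ctx.writeAndFlush(result);             // Netty schedules the write back onto the IO thread
            });
            // the IO (worker) thread returns here immediately and can serve other channels
        }

        private Object handleBusiness(Object msg) {
            return msg; // placeholder
        }
    }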

Dubbo's dispatch strategies (thread models)

  • all
    All messages are dispatched to the thread pool, including requests, responses, connect events, disconnect events, heartbeats, etc.
  • direct
    All messages are handled directly on the IO thread, i.e. the IO thread processes the business logic (see the sketch after this list);
  • message
    Only request and response messages are dispatched to the thread pool; other messages such as connect/disconnect events and heartbeats are executed directly on the IO thread;
  • execution
    Only request messages are dispatched to the thread pool; responses and other messages such as connect/disconnect events and heartbeats are executed directly on the IO thread;
  • connection
    On the IO thread, connect/disconnect events are queued and executed one by one in order; other messages are dispatched to the thread pool.
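
For contrast with AllDispatcher above, this is roughly what the direct strategy amounts to: no hand-off at all, so every event stays on the IO thread. The class below is a sketch mirroring Dubbo's DirectDispatcher, with import paths assuming Dubbo 2.7.x.

    import org.apache.dubbo.common.URL;
    import org.apache.dubbo.remoting.ChannelHandler;
    import org.apache.dubbo.remoting.Dispatcher;

    public class DirectDispatcherSketch implements Dispatcher {

        public static final String NAME = "direct";

        @Override
        public ChannelHandler dispatch(ChannelHandler handler, URL url) {
            return handler; // no wrapping thread pool: events are processed on the Netty IO thread
        }
    }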

The corresponding thread pools are as follows; all of them are implementations of the ThreadPool interface:

  • fixed
    A fixed-size thread pool; threads are created at startup, never closed, and held the whole time.
    Implemented by FixedThreadPool.
  • cached
    Threads are removed after being idle for one minute and recreated when needed.
    Implemented by CachedThreadPool.
  • limited
    A scalable thread pool, but the number of threads only grows and never shrinks. The point of growing without shrinking is to avoid the performance problems that a sudden traffic spike would cause right after shrinking.
    Implemented by LimitedThreadPool.
  • eager
    Prefers to create worker threads first: when the number of tasks is greater than corePoolSize but less than maximumPoolSize, new workers are created to handle tasks; when the number of tasks exceeds maximumPoolSize, tasks are put into the blocking queue, and a RejectedExecutionException is thrown when the queue is full. (Compared with cached: cached throws an exception directly when the number of tasks exceeds maximumPoolSize instead of queueing them.)
    Implemented by EagerThreadPool. The dispatcher and thread pool can be chosen through configuration, as sketched below.
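
A hedged configuration example using Dubbo's ProtocolConfig API (the concrete values are only illustrative; equivalent attributes exist in the XML and properties configuration):

    import org.apache.dubbo.config.ProtocolConfig;

    public class ProtocolThreadModelConfig {
        public static ProtocolConfig dubboProtocol() {
            ProtocolConfig protocol = new ProtocolConfig();
            protocol.setName("dubbo");
            protocol.setPort(20880);
            protocol.setDispatcher("all");     // all / direct / message / execution / connection
            protocol.setThreadpool("fixed");   // fixed / cached / limited / eager
            protocol.setThreads(200);          // size of the business thread pool
            return protocol;
        }
    }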

Original source: blog.csdn.net/yaoyaochengxian/article/details/124230156