Things to watch out for when using Dubbo

1. Introduction

As a high-performance RPC framework, Dubbo has entered the Apache incubator. Although an official user manual is provided, much of it stays at a general level. When using Dubbo, try to understand the source code as much as possible; otherwise it is easy to fall into pitfalls.

2. ReferenceConfig instances on the service consumer side need to be cached

A ReferenceConfig instance is very heavyweight. Each instance maintains a long-lived connection to the service registry and long-lived connections to all of its service providers. Assuming there is one service registry and N service providers, each ReferenceConfig instance maintains N+1 long-lived connections. If ReferenceConfig instances are created frequently, this can cause performance problems and even the risk of memory or connection leaks. This is especially easy to overlook when programming directly against the Dubbo API.

To solve this problem, you used to have to cache the instances yourself. Since Dubbo 2.4.0, Dubbo provides a simple utility class, ReferenceConfigCache, for caching ReferenceConfig instances. It is used as follows:

// Create the service consumer reference
ReferenceConfig<XxxService> reference = new ReferenceConfig<XxxService>();
reference.setInterface(XxxService.class);
reference.setVersion("1.0.0");
......
// Obtain the cache provided by Dubbo
ReferenceConfigCache cache = ReferenceConfigCache.getCache();
// cache.get caches the reference object, calls reference.get to start the ReferenceConfig,
// and returns the proxied service interface object
XxxService xxxService = cache.get(reference);

// Use the xxxService object
xxxService.sayHello();

Note that the Cache holds a reference to the ReferenceConfig object. Do not call the destroy method of a ReferenceConfig from outside the cache, as this would invalidate the ReferenceConfig held in the Cache.

If you want to destroy a ReferenceConfig held in the Cache and release its resources, use the following method:

ReferenceConfigCache cache = ReferenceConfigCache.getCache();
cache.destroy(reference);

In addition, by default the cache uses the service group, interface, and version as the key, with the ReferenceConfig instance as the corresponding value. If you need a custom key, you can pass your own keyGenerator when creating the cache, for example by calling ReferenceConfigCache.getCache(keyGenerator).
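As a rough sketch of the custom-key option (the names here should be checked against your Dubbo version; depending on the release, getCache may take only the key generator or a cache name plus the generator):

// Sketch: a custom KeyGenerator that keys cache entries by interface name and version only.
// Assumes ReferenceConfigCache.KeyGenerator exposes a single generateKey(ReferenceConfig<?>) method.
ReferenceConfigCache.KeyGenerator keyGenerator = new ReferenceConfigCache.KeyGenerator() {
    public String generateKey(ReferenceConfig<?> referenceConfig) {
        return referenceConfig.getInterface() + ":" + referenceConfig.getVersion();
    }
};

// Depending on the Dubbo version, the overload may be getCache(keyGenerator) or getCache(name, keyGenerator).
ReferenceConfigCache cache = ReferenceConfigCache.getCache("customCache", keyGenerator);
XxxService xxxService = cache.get(reference);   // reference as created in the earlier snippet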

3. Concurrency control

3.1 Service Consumer Concurrency Control

To control method-level concurrency on the service consumer side, set the actives parameter, as follows:

<dubbo:reference id="userService" interface="com.test.UserServiceBo"
        group="dubbo" version="1.0.0" timeout="3000" actives="10"/>

This applies to all methods of the com.test.UserServiceBo interface: each method allows at most 10 concurrent requests at the same time.

You can also limit the concurrency of a single method of the interface, as follows:


<dubbo:reference id="userService" interface="com.test.UserServiceBo"
        group="dubbo" version="1.0.0" timeout="3000">
    <dubbo:method name="sayHello" actives="10" />
</dubbo:reference>
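
The same per-method limit can also be expressed with the configuration API. The following is a minimal sketch under the assumption that MethodConfig.setActives and ReferenceConfig.setMethods are available in your Dubbo version (verify the setters against the release you use):

// Sketch: per-method concurrency limit configured programmatically.
MethodConfig sayHelloConfig = new MethodConfig();
sayHelloConfig.setName("sayHello");
sayHelloConfig.setActives(10);   // at most 10 concurrent invocations of sayHello

ReferenceConfig<UserServiceBo> reference = new ReferenceConfig<UserServiceBo>();
reference.setInterface(UserServiceBo.class);
reference.setGroup("dubbo");
reference.setVersion("1.0.0");
reference.setTimeout(3000);
reference.setMethods(java.util.Collections.singletonList(sayHelloConfig));
// reference.setActives(10) would instead apply the limit to every method of the interface.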

In the XML configuration above, the maximum concurrency of the sayHello method is set to 10. If the client issues more than 10 concurrent requests to this method, the excess requests are blocked; only when the number of in-flight requests drops below 10 are they sent to the service provider. In Dubbo, client-side concurrency control is implemented by the ActiveLimitFilter filter, whose code is as follows:

public class ActiveLimitFilter implements Filter {

    public Result invoke(Invoker<?> invoker, Invocation invocation) throws RpcException {
        URL url = invoker.getUrl();
        String methodName = invocation.getMethodName();
        // Get the configured actives value; it defaults to 0
        int max = invoker.getUrl().getMethodParameter(methodName, Constants.ACTIVES_KEY, 0);
        // Get the current number of concurrent requests (active count) for this method
        RpcStatus count = RpcStatus.getStatus(invoker.getUrl(), invocation.getMethodName());
        if (max > 0) { // actives has been configured for this method
            long timeout = invoker.getUrl().getMethodParameter(invocation.getMethodName(), Constants.TIMEOUT_KEY, 0);
            long start = System.currentTimeMillis();
            long remain = timeout;
            int active = count.getActive();
            // If the concurrency for this method has reached the limit, suspend the current thread.
            if (active >= max) {
                synchronized (count) {
                    while ((active = count.getActive()) >= max) {
                        try {
                            count.wait(remain);
                        } catch (InterruptedException e) {
                        }
                        // If the wait has timed out, throw an exception
                        long elapsed = System.currentTimeMillis() - start;
                        remain = timeout - elapsed;
                        if (remain <= 0) {
                            throw new RpcException("Waiting concurrent invoke timeout in client-side for service:  "
                                    + invoker.getInterface().getName() + ", method: "
                                    + invocation.getMethodName() + ", elapsed: " + elapsed
                                    + ", timeout: " + timeout + ". concurrent invokes: " + active
                                    + ". max concurrent invoke limit: " + max);
                        }
                    }
                }
            }
        }
        // No throttling needed (or a slot is available): invoke normally
        try {
            long begin = System.currentTimeMillis();
            RpcStatus.beginCount(url, methodName);
            try {
                Result result = invoker.invoke(invocation);
                RpcStatus.endCount(url, methodName, System.currentTimeMillis() - begin, true);
                return result;
            } catch (RuntimeException t) {
                RpcStatus.endCount(url, methodName, System.currentTimeMillis() - begin, false);
                throw t;
            }
        } finally {
            if (max > 0) {
                synchronized (count) {
                    count.notify();
                }
            }
        }
    }

}

In summary, client-side concurrency control works as follows: when the concurrency for a method reaches the configured limit, the requesting thread is suspended. If the number of in-flight requests drops during the wait, a blocked thread is woken up and the request is then sent to the service provider; if the wait times out, an exception is thrown directly, and the request is never sent to the provider's server at all.
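
To see this pattern in isolation, here is a minimal, self-contained sketch (not Dubbo code, just an illustration of the mechanism ActiveLimitFilter relies on): a counter guarded by a monitor, where callers block via wait/notify until a slot frees up or the timeout elapses.

public class SimpleActiveLimiter {

    private final int max;        // maximum number of concurrent calls allowed
    private final long timeoutMs; // how long a caller may wait for a free slot
    private int active = 0;       // current number of in-flight calls

    public SimpleActiveLimiter(int max, long timeoutMs) {
        this.max = max;
        this.timeoutMs = timeoutMs;
    }

    public <T> T call(java.util.concurrent.Callable<T> task) throws Exception {
        long start = System.currentTimeMillis();
        synchronized (this) {
            // Wait until the active count drops below the limit, or the timeout elapses.
            while (active >= max) {
                long remain = timeoutMs - (System.currentTimeMillis() - start);
                if (remain <= 0) {
                    throw new RuntimeException("Timed out waiting for a free slot, active=" + active);
                }
                wait(remain);
            }
            active++;
        }
        try {
            return task.call();
        } finally {
            synchronized (this) {
                active--;
                notify(); // wake up one waiting caller
            }
        }
    }
}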

4. Improving the broadcast strategy

Earlier, when discussing cluster fault tolerance, we covered the broadcast strategy, which is mainly used to broadcast a call to all service providers. This raises a question worth thinking about: broadcasting means that a single call on the client is internally fanned out to every service provider's machine, so what is the return value of that one call? For example, if 10 machines are invoked internally, each has its own return value, so is the result of your call a combination of the 10 return values? Actually no: what is returned is the result from the last machine invoked. So what if we want to aggregate the results returned by all the machines? More things to watch out for when using Dubbo, including the answer to this question, are covered in a follow-up article.
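
For reference, the broadcast strategy is selected on the consumer side through the cluster setting. Below is a minimal sketch reusing the reference configuration style from section 3 (verify the setter names against your Dubbo version); as noted above, every provider is invoked and the value handed back to the caller is the result of the last invocation.

// Sketch: select the broadcast cluster strategy for this consumer reference.
ReferenceConfig<UserServiceBo> reference = new ReferenceConfig<UserServiceBo>();
reference.setInterface(UserServiceBo.class);
reference.setGroup("dubbo");
reference.setVersion("1.0.0");
reference.setCluster("broadcast"); // every provider is called; the last result is returned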
