Implementing the Nacos service multi-level storage model, clusters, and load balancing with local-cluster-first access

Before introducing clusters, we only had the concepts of service and instance. We provide a userservice service for user queries and an orderservice service for order queries, and userservice is deployed as multiple instances, say on ports 8081, 8082, and 8083; all three are instances of userservice. So there were two layers: the first is the service, the second is the instance, and one service can contain many instances.

As the business grows, however, more issues have to be considered. If we deploy everything in one place, it is like putting all your eggs in one basket: knock the basket over and every egg is broken. If the one machine room everything lives in is hit by a natural or man-made disaster, the whole service is gone. To solve this, we deploy the instances of a service across several machine rooms. Companies with deep pockets like Alibaba and JD.com set up machine rooms all over the country, in Shanghai, Hangzhou, Beijing, and so on: two instances go to Hangzhou, two to Shanghai, a few more to Beijing. The eggs are scattered, so even if one batch is lost there are still several left, and we get a form of disaster recovery.

The service hierarchical storage model introduces exactly this machine-room (or region) concept and calls the instances of a service in the same machine room a cluster. For example, the two userservice instances in the Hangzhou machine room form the Hangzhou userservice cluster, and the userservice instance in Beijing forms the Beijing userservice cluster. So in the Nacos service classification model, the first level is the service, the next level is the cluster, and the last level is the instance.

Then why does Nacos introduce this extra classification at all? Isn't it enough to go straight from the service to an instance? Why add a cluster concept based on geography? Imagine the following situation: in Hangzhou I have an orderservice cluster and a userservice cluster, Shanghai has the same setup, and later there will be machine rooms in Guangdong, Beijing, and so on. Now my orderservice needs to call userservice, and it has two options: call an instance inside its own machine room, or call one in a remote machine room. Which should it choose? The local one, obviously. A call within the local network travels a short distance, so it is fast and the latency is low; a cross-cluster call, say from Hangzhou to Guangzhou or Beijing, covers hundreds or even thousands of kilometres, and the latency is much higher. Therefore, when calling a service you should access the local cluster whenever possible, and only consider another cluster when the local one is unavailable. Nacos introduces the cluster concept precisely to avoid cross-cluster calls as much as possible.

Now let's put this into practice:

Demonstration (the ports 2345, 2346, and 2347 shown below should be read as 8081, 8082, and 8083, and elvesservice should be read as userservice, because I am actually running the steps in my own project):
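As a minimal sketch of this step (assuming the Spring Cloud Alibaba Nacos discovery starter is already on the classpath), the cluster an instance registers into is set in its application.yml through spring.cloud.nacos.discovery.cluster-name. For the userservice instances that should land in the Hangzhou cluster, the configuration looks roughly like this:

spring:
  cloud:
    nacos:
      server-addr: localhost:8848  # Nacos server address
      discovery:
        cluster-name: HZ  # cluster name, here the Hangzhou machine room

To start another instance in the SH cluster without editing the file, the property can be overridden per launch configuration, for example with the VM options -Dserver.port=8083 -Dspring.cloud.nacos.discovery.cluster-name=SH.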

Then open the Nacos console, where you can see the two clusters and their respective instances.

If orderservice is to prefer the two userservice instances in its local cluster HZ, orderservice itself first needs to be registered to the HZ cluster (configured in the same way as above):
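Concretely (under the same assumption about the property name as above), the relevant fragment of orderservice's application.yml is just:

spring:
  cloud:
    nacos:
      discovery:
        cluster-name: HZ  # orderservice also declares HZ as its cluster

Note that cluster-name only records which cluster the instance belongs to; by itself it does not change how instances are chosen, which is why the load-balancing rule still has to be adjusted below.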

After registration, go to the Nacos console to check:

At this point, if orderservice calls the userservice service three times, all three userservice instances are hit, because the default round-robin (polling) strategy is still in use. The rule a service uses to pick an instance is determined by the load-balancing rule, i.e. IRule, and since no IRule has been configured, the default round-robin rule still applies. So, to get a load-balancing rule that prefers instances in the same cluster, we have to change the load-balancing configuration.

 In the yml configuration file of orderservice:

userservice: # name of the microservice this configuration applies to
  ribbon:
    NFLoadBalancerRuleClassName: com.alibaba.cloud.nacos.ribbon.NacosRule  # load-balancing rule (prefer the local cluster)

After this change, call the userservice service from orderservice several times and you will find that only the 8081 and 8082 instances are hit (in the IDEA console you can see each instance's log output, and 8083 prints nothing).

  • This raises a question: in what ratio are the two local-cluster instances, 8081 and 8082, accessed?

    • It is in fact random. A characteristic of NacosRule is that it first restricts the choice to the local cluster and then load-balances randomly among the instances inside that cluster.

  • Another question: if we stop the userservice instances in the same cluster as orderservice, so that only the userservice instance in the SH cluster remains, what happens when we access userservice through orderservice?

At this point, when orderservice calls userservice, it makes a cross-cluster call, that is, it reaches the instance on port 8083. Looking at the logs in IDEA, we can see that the request was handled by 8083, and orderservice logs a warning saying that a cross-cluster call occurred: the service being called is userservice, the cluster you wanted is Hangzhou, but the instance actually reached is in Shanghai. From this we learn that NacosRule really does prefer the local cluster and only calls across clusters when no local instance is available, printing a warning when it does. If our operations staff see such a warning later, they know immediately what the problem is and can restart the stopped instances in time, and the problem is solved.

Origin: blog.csdn.net/QRLYLETITBE/article/details/128933558