[Hands-on] Playing with Spring Cloud Alibaba's Nacos Config in Depth

1. The same configuration problem in different environments - custom Data Id configuration

In actual development, some configuration parameters do not need to be distinguished by environment: the values used in the production, test, and development environments are identical. How do you let the same service reference the same configuration in multiple environments? Nacos Config provides a solution for this as well: by using the service name plus the file extension as the Data Id, a configuration file can be shared across environments within the same microservice.

Add a general configuration file with Data Id nacos-config-client.yaml in the Nacos configuration center:
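A minimal sketch of what such a general configuration file might look like (the `config.common-info` key and its value are assumptions for illustration):

```yaml
# Data Id: nacos-config-client.yaml, Group: DEFAULT_GROUP
# The key below is hypothetical and only used to demonstrate the mechanism.
config:
  common-info: "common configuration shared by all environments"
```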

Add the address of Nacos configuration center in config-3377:
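A sketch of the relevant part of bootstrap.yml in config-3377; the server address is an assumption and should be replaced with your own Nacos address:

```yaml
# bootstrap.yml of the config-3377 client (illustrative values)
spring:
  application:
    name: nacos-config-client          # matches the Data Id nacos-config-client.yaml
  cloud:
    nacos:
      config:
        server-addr: localhost:8848    # address of the Nacos configuration center
        file-extension: yaml           # lets the client load nacos-config-client.yaml
```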

Add the corresponding method in Controller:
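A minimal controller sketch that reads the hypothetical `config.common-info` key from the general configuration file above:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope   // lets the bean pick up configuration changes from Nacos
public class ConfigClientController {

    // "config.common-info" is a hypothetical key used only for illustration
    @Value("${config.common-info:notFound}")
    private String commonInfo;

    @GetMapping("/config/configCommon")
    public String configCommon() {
        return commonInfo;
    }
}
```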

Visit http://localhost:3377/config/configCommon and you can see that the corresponding configuration has been successfully obtained:

As you can see, the nacos-config-client.yaml configuration file has no environment suffix, so it can be loaded in any environment. Note, however, that precisely because it has no environment suffix, this general configuration file has a lower priority than the environment-specific configuration file. The startup log shows that the configuration file with the environment suffix is read first, and the general configuration file without the suffix is read afterwards:

2. How to share and extend configuration between different microservices

The configuration above is the most basic approach, but real projects often need to share configuration among multiple microservices, for example the Redis address or the address of the service registry. Since these settings are shared by several microservices, you can use the shared configuration mechanism provided by Nacos Config and put them in a shared configuration file:

shared-configs

Add a Redis configuration file with a custom Data Id in the Nacos configuration center:
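A sketch of the shared Redis configuration; the Data Id redis-config.yaml and the property values are assumptions for illustration:

```yaml
# Data Id: redis-config.yaml, Group: DEFAULT_GROUP (hypothetical name and values)
spring:
  redis:
    host: localhost
    port: 6379
```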

The Data Id of the earlier configuration file was derived from the service name, but this file is shared by multiple services, so tying it to any single service name would be inappropriate; simply define a custom Data Id instead.

Add the Nacos configuration center in config-3377, and specify the corresponding shared configuration file:
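A sketch of the shared-configs section in bootstrap.yml of config-3377, reusing the hypothetical redis-config.yaml Data Id from above:

```yaml
spring:
  cloud:
    nacos:
      config:
        server-addr: localhost:8848
        file-extension: yaml
        shared-configs:
          - data-id: redis-config.yaml   # hypothetical Data Id of the shared file
```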

Looking at the source code, you can see that shared-configs is a list, which means that multiple shared configuration files can be configured:

The parameters that can be specified for each shared file are also clearly listed in the source code and can simply be copied over:
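Each shared-configs entry supports a data-id plus optional group and refresh attributes (group defaults to DEFAULT_GROUP and refresh to false); a sketch with all three spelled out:

```yaml
spring:
  cloud:
    nacos:
      config:
        shared-configs:
          - data-id: redis-config.yaml  # required: Data Id of the shared file (hypothetical here)
            group: DEFAULT_GROUP        # optional: defaults to DEFAULT_GROUP
            refresh: true               # optional: enable dynamic refresh (defaults to false)
```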

Add the corresponding method in Controller:
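A minimal sketch of the corresponding controller method, reading the hypothetical Redis keys from the shared file:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope
public class SharedConfigController {

    // These keys come from the hypothetical redis-config.yaml shown above
    @Value("${spring.redis.host:unknown}")
    private String redisHost;

    @Value("${spring.redis.port:0}")
    private String redisPort;

    @GetMapping("/config/configShared")
    public String configShared() {
        return redisHost + ":" + redisPort;
    }
}
```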

Visit http://localhost:3377/config/configShared and you can see that the corresponding configuration has been successfully obtained:

Note: When multiple Data Ids are configured at the same time, the larger the index n in `spring.cloud.nacos.config.shared-configs[n].data-id` (and likewise in `extension-configs[n].data-id`), the higher the priority. In other words, shared-configs[1] has a higher priority than shared-configs[0].

extension-configs

The above can also be implemented through extension-configs. The functionality is essentially the same; the difference is mainly semantic: if one microservice needs several additional configuration files of its own, use extension-configs; if several microservices need to share configuration files, use shared-configs. The effect and the way they are configured are basically identical. In other words, custom extended Data Id configuration can both solve the problem of sharing configuration among multiple applications and support multiple configuration files for a single application.

Just change shared-configs to extension-configs in the yml configuration file:
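A sketch of the same bootstrap.yml section with shared-configs replaced by extension-configs (Data Id still hypothetical):

```yaml
spring:
  cloud:
    nacos:
      config:
        server-addr: localhost:8848
        file-extension: yaml
        extension-configs:
          - data-id: redis-config.yaml  # hypothetical Data Id
            group: DEFAULT_GROUP
            refresh: true
```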

After restarting the service, visit http://localhost:3377/config/configShared and you can see the same effect:

Spring Cloud Alibaba Nacos Config currently provides three ways to pull configuration from Nacos:
- A: Multiple shared Data Id configurations through `spring.cloud.nacos.config.shared-configs[n].data-id`;
- B: Multiple extended Data Id configurations through `spring.cloud.nacos.config.extension-configs[n].data-id`;
- C: Data Id configurations automatically generated through internal rules (application name, application name + profile).

When the three methods are used together, their priority relationship is: A < B < C.
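A combined sketch of all three mechanisms in one bootstrap.yml; both Data Ids are hypothetical, and when the same key appears in several of these files, C overrides B, which overrides A:

```yaml
spring:
  application:
    name: nacos-config-client            # C: nacos-config-client-dev.yaml / nacos-config-client.yaml (highest priority)
  profiles:
    active: dev
  cloud:
    nacos:
      config:
        server-addr: localhost:8848
        file-extension: yaml
        shared-configs:                  # A: lowest priority
          - data-id: redis-config.yaml   # hypothetical shared Data Id
            refresh: true
        extension-configs:               # B: overrides A, overridden by C
          - data-id: common-ext.yaml     # hypothetical extended Data Id
            refresh: true
```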

3. Nacos dynamic refresh principle

What is dynamic listening

The so-called dynamic listening simply means that Nacos automatically becomes aware of the services and configurations registered with it, whereas static listening refers to configurations that must be explicitly specified by the client. This really comes down to how the client and the server interact, which is nothing more than push and pull:
- Push: the server actively pushes data change information to the client
    - The server needs to maintain a long connection with each client, because it has to know exactly which client to push to
    - This consumes a lot of server memory, because every client connection must be kept and its validity checked (heartbeat mechanism)
- Pull: the client actively pulls data from the server
    - Data has to be pulled at regular intervals
    - Disadvantages: poor timeliness and real-time behavior, and many invalid requests

Dynamic Refresh Mechanism

The dynamic refresh mechanism of Nacos takes the advantages of both push and pull while avoiding their disadvantages. When Nacos is used as the configuration center, is configuration data pushed by the server or pulled by the client? The Nacos client sends a request to the server, and the server holds the request for about 29.5s (a 30s client timeout minus a 0.5s buffer), placing it in the allSubs queue to wait. Only two things cause the server to return a result: the first is that the 29.5 seconds elapse without the configuration changing, in which case the unchanged configuration is returned; the second is that the configuration file is changed through the Nacos Dashboard or the API, which triggers a configuration change event and publishes a LocalDataChangeEvent message. The server listens for this message, traverses the allSubs queue, finds the ClientLongPolling task for the changed configuration according to its groupId, and returns the result to the client over that connection.

Nacos's dynamic refresh avoids the heartbeat connections the server would have to maintain if it pushed to clients, and it also avoids the timeliness problem of plain client-side pulling without having to pull data from the server frequently. From the mechanism above, the answer is clear: the client actively pulls, and obtains configuration data through long polling.

short polling

Regardless of whether the server's configuration has changed, the client keeps sending requests to fetch it, much like a front end continuously polling the payment status of an order. This puts a lot of pressure on the server and also delays data updates: for example, if the configuration is requested every 10 seconds and an update happens at the 11th second, the change is only picked up on the next request, up to 9 seconds later. This is short polling. Long polling was introduced to solve these problems.

long polling

Long polling is not a new technology. It is an optimization that reduces invalid client requests by letting the server control when the response to a client request is returned. For the client there is essentially no difference in usage: after the client sends a request, the server does not return a result immediately but holds the request for a period of time. If the configuration data changes within that period, the server responds to the client immediately; if nothing changes, it waits until the specified timeout expires and then responds, after which the client immediately initiates the next long-polling request.
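As a rough illustration only, here is a minimal long-polling client loop in plain Java; it is not the actual Nacos client code, and the endpoint URL and timeout values are assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class LongPollingClientSketch {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical listening endpoint that the server holds open for up to ~30s
        URI listenUri = URI.create("http://localhost:8848/listen?dataId=nacos-config-client.yaml");

        while (true) {
            HttpRequest request = HttpRequest.newBuilder(listenUri)
                    .timeout(Duration.ofSeconds(35))   // slightly longer than the server-side hold time
                    .GET()
                    .build();
            try {
                // Blocks until the configuration changes or the server-side hold expires
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200 && !response.body().isEmpty()) {
                    System.out.println("Configuration changed: " + response.body());
                    // A real client would now fetch the full configuration and refresh local beans
                }
                // An empty body means the hold expired with no change; loop and poll again
            } catch (HttpTimeoutException e) {
                // Treat a client-side timeout like "no change" and start the next long poll
            }
        }
    }
}
```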

4. Nacos consistency protocol: the Distro protocol

The Distro protocol is an AP-oriented distributed protocol developed in-house by the Nacos community. It is designed for temporary instances and ensures that even after some Nacos nodes go down, the system handling temporary instances keeps working normally. As the protocol embedded in a stateful middleware application, Distro is responsible for the coordinated handling and storage of registration requests across all Nacos nodes.

The main design idea of the Distro protocol

- Each node in Nacos is equal and can process write requests, and at the same time synchronize new data to other nodes;
- Each node is only responsible for part of the data, and regularly sends the check value of the data it is responsible for to other nodes to maintain data consistency;
- Each node processes read requests independently and responds promptly from its local data;

Why Nacos needs a consistency protocol

- One of Nacos's open source goals is to reduce deployment and operations costs as much as possible, so that users can quickly start Nacos from a single package, whether in standalone mode or in cluster mode. Nacos is a component that needs to store data, so to meet this goal data storage has to be implemented inside Nacos itself. In standalone mode this is not a big problem; a simple embedded relational database is enough. In cluster mode, however, you have to consider how to guarantee data consistency and data synchronization between the nodes, and to solve that you have to introduce a consistency algorithm that keeps the data on each node consistent;
- The Distro protocol is an eventual consistency protocol developed by Alibaba. There are many eventual consistency protocols, such as Gossip (the epidemic protocol) and the data synchronization algorithm in Eureka; the Distro algorithm combines and optimizes the strengths of both. In native Gossip, the nodes to send messages to are chosen at random, so the same message is inevitably sent to the same node repeatedly, which increases network traffic and adds processing load on the receiving nodes. Distro introduces the concept of an authoritative (responsible) node: each node is responsible for part of the data and synchronizes its own data to the other nodes, which effectively reduces message redundancy;

The specific execution logic of the Distro protocol

data initialization

A newly added Distro node first pulls the full data set: it polls the other Distro nodes and requests a full snapshot from them. Once the full pull completes, every Nacos node holds all of the currently registered non-persistent instance data.

Data validation

After the Distro cluster starts, the machines periodically send heartbeats to each other. The heartbeat payload is mainly the metadata of all data on each machine (metadata is used so that the volume of data transmitted over the network stays low). This data verification is carried out in the form of heartbeats: at a fixed interval, each machine sends a data verification request to the other machines. If, during verification, a machine finds that the data on another machine is inconsistent with its local data, it initiates a full pull request to bring its data up to date.

write operation

For a Distro cluster that is already running, when a client-initiated write request for registering a non-persistent instance reaches one of the Nacos servers, the Distro cluster processes it in several steps:
1. The front-end Filter intercepts the request, calculates the responsible Distro node according to the IP and port information contained in the request, and forwards the request to that responsible node;
2. The Controller on the responsible node parses and processes the write request;
3. The Distro protocol periodically executes a Sync task that synchronizes all the instance information this node is responsible for to the other nodes;

read operation

Since every machine stores the full data set, each read operation is served directly from the Distro node's local data, which allows a fast response. This mechanism ensures that the Distro protocol, as an AP protocol, can respond to read operations in a timely manner. Even in the case of a network partition, all read operations still return normally; when the network recovers, each Distro node merges and restores the data of each data shard.

To summarize: the Distro protocol is a consistency protocol developed by Nacos for temporary instance data. Its data is stored in an in-memory cache, full data synchronization is performed at startup, and data verification runs periodically. Under the Distro design, every node can accept both read and write requests. Requests handled by the Distro protocol fall into three cases (a simplified routing sketch follows the list):
1. When a node receives a write request for an instance it is responsible for, it writes directly;
2. When a node receives a write request for an instance it is not responsible for, the request is routed within the cluster and forwarded to the responsible node, which completes the write;
3. When a node receives any read request, it queries its local data and returns directly (because all instances have been synchronized to every machine);
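As a rough illustration of the "responsible node" idea only (not the actual Nacos implementation; the member list and hashing scheme are assumptions), the sketch below hashes an instance key onto a list of cluster members and either handles the write locally or forwards it:

```java
import java.util.List;

public class DistroRoutingSketch {

    // Cluster members and the local node's address (illustrative values)
    private final List<String> members;
    private final String self;

    public DistroRoutingSketch(List<String> members, String self) {
        this.members = members;
        this.self = self;
    }

    /** Picks the node responsible for a given instance key (e.g. "service##ip:port"). */
    public String responsibleNode(String instanceKey) {
        int index = Math.floorMod(instanceKey.hashCode(), members.size());
        return members.get(index);
    }

    public void handleWrite(String instanceKey) {
        String owner = responsibleNode(instanceKey);
        if (owner.equals(self)) {
            // Case 1: this node owns the instance, so write locally and later sync to peers
            System.out.println("Write locally, then sync to peers: " + instanceKey);
        } else {
            // Case 2: forward the write to the responsible node inside the cluster
            System.out.println("Forward write for " + instanceKey + " to responsible node " + owner);
        }
        // Case 3 (reads) would simply query the local full copy of the data
    }

    public static void main(String[] args) {
        DistroRoutingSketch node = new DistroRoutingSketch(
                List.of("10.0.0.1:8848", "10.0.0.2:8848", "10.0.0.3:8848"), "10.0.0.1:8848");
        node.handleWrite("order-service##192.168.1.10:8080");
    }
}
```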

As Nacos's built-in consistency protocol for temporary instances, the Distro protocol ensures that in a distributed environment the status of service information on each node is propagated to the other nodes in a timely manner, and it can sustain the storage and eventual consistency of hundreds of thousands of service instances.


Origin blog.csdn.net/FeenixOne/article/details/128089464