Nacos related technology stack

Nacos

Nacos provides dynamic service discovery, configuration management, and service management for microservices. Several other components are commonly used alongside it, such as:

  • LoadBalancer ==> client-side load balancing
  • OpenFeign ==> declarative calls between microservices
  • Sentinel ==> circuit breaking and rate limiting
  • Seata ==> distributed transaction rollback

A microservice implements one independent function (or a small group of related functions) so that it can be reused by multiple clients. Within a microservice stack, Nacos can serve as both the registry and the configuration center; configuration information and some shared data are typically stored in Nacos.

The core principle of both the registry and the configuration center is how information is synchronized. The main approaches are:

  • push (the server actively pushes)
  • pull (client polling, with a relatively short timeout)
  • long pull / long polling (with a longer timeout)

CAP

C (consistency), A (availability), P (partition tolerance)


AP: availability and partition tolerance

When a network partition occurs, system B may return a stale value in order to keep the system available.

Data may be inconsistent for a short time, but it must become consistent eventually; availability of the service is guaranteed in all cases.

AP mode uses temporary (ephemeral) instances. By default, a service sends a "heartbeat packet" to Nacos every 5 seconds after it starts; the packet carries the service's basic information. If Nacos receives a heartbeat for a service that is not in the registration list, it registers the service; if the service is already in the list, the heartbeat indicates that it is still healthy.

The client reports its health to the Nacos registry through these heartbeats (default interval 5s; Nacos marks an instance unhealthy after more than 15s without a heartbeat, and deletes it after more than 30s).

This behavior suits sudden traffic spikes: services can scale out elastically, and instances that are stopped after the traffic subsides are deregistered automatically.
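The 5s/15s/30s timeline above can be condensed into a tiny state check. This is an illustrative sketch only (the class HeartbeatCheck is invented here, not Nacos source code):

```java
// Hypothetical sketch of the AP-mode health decision described above:
// no heartbeat for more than 15s => unhealthy; more than 30s => removed.
public class HeartbeatCheck {
    public enum State { HEALTHY, UNHEALTHY, REMOVED }

    // secondsSinceLastBeat: how long ago the instance's last heartbeat arrived
    public static State stateOf(long secondsSinceLastBeat) {
        if (secondsSinceLastBeat > 30) return State.REMOVED;
        if (secondsSinceLastBeat > 15) return State.UNHEALTHY;
        return State.HEALTHY; // heartbeats normally arrive every 5s
    }

    public static void main(String[] args) {
        System.out.println(stateOf(5));   // HEALTHY
        System.out.println(stateOf(20));  // UNHEALTHY
        System.out.println(stateOf(31));  // REMOVED
    }
}
```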

CP: consistency and partition tolerance

When a network partition occurs, requests must be rejected in order to preserve consistency; otherwise consistency cannot be guaranteed.

The service may become unavailable, but data consistency must be guaranteed.

CP mode uses permanent instances, which register with Nacos at startup and which Nacos persists. Nacos actively checks the client's health (default interval 20s); an instance that fails the health check is marked unhealthy but is not deleted immediately. Usually only a project's core services are registered as permanent instances.

Mutual conversion between CP and AP modes

curl -X PUT "$NACOS_SERVER:8848/nacos/v1/ns/operator/switches?entry=serverMode&value=CP"

curl -X PUT "$NACOS_SERVER:8848/nacos/v1/ns/operator/switches?entry=serverMode&value=AP"

Successful example:


The following option controls whether a service registers as a temporary or a permanent instance.
AP mode does not guarantee strong data consistency, so it only supports temporary instances; CP mode supports permanent instances and guarantees consistency.

spring: 
  cloud:
    nacos:
      discovery:
        # instance type (true = temporary instance, false = permanent instance ==> CP mode)
        ephemeral: true

Tips:

  • Do not change this mode casually; keeping the default AP is recommended.
  • In a cluster environment, all nodes must be switched.
  • You can test with Postman; the request must be a PUT. GET and POST are both invalid.

Nacos as a configuration center

Analysis of configuration center principle

The Nacos configuration center uses client-side long polling.


  • The Nacos client repeatedly requests changed data from the server, with the timeout set to 30s. When the configuration changes, the response is returned immediately; otherwise the request is held for roughly 29.5s before a response is returned.

  • When a client request reaches the server, the server adds it to a queue named allSubs and waits for a DataChangeTask to fire when the configuration changes, at which point the changed data is written into the response object.

  • At the same time, the server wraps the request in a scheduled task. If a DataChangeTask fires during the wait, it is handled right away; if the delay elapses without one, the scheduled task performs a data-change check (based on the file's MD5) and writes the result into the response object.
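The hold-then-respond behavior above can be sketched with a blocking queue. This is a toy single-key model (the class LongPollSketch and its methods are invented here), not the actual allSubs/DataChangeTask implementation:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy model of long polling: the "server" holds a client poll open until a
// configuration change is published or the timeout elapses.
public class LongPollSketch {
    private final BlockingQueue<String> changes = new LinkedBlockingQueue<>();

    // server side: in Nacos this role is played by the DataChangeTask
    public void publishChange(String dataId) { changes.offer(dataId); }

    // client side: blocks for up to timeoutMs; an empty string means "no change"
    public String longPoll(long timeoutMs) {
        try {
            String dataId = changes.poll(timeoutMs, TimeUnit.MILLISECONDS);
            return dataId == null ? "" : dataId;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "";
        }
    }

    public static void main(String[] args) {
        LongPollSketch server = new LongPollSketch();
        server.publishChange("application.yml");
        System.out.println(server.longPoll(100)); // responds immediately: application.yml
        System.out.println(server.longPoll(100)); // waits out the timeout, then ""
    }
}
```

In real Nacos the held request sits in the allSubs queue for most of the 30s window; the queue here just stands in for that notification path.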

Example

  1. Dependencies:
        <!-- Nacos config center -->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
        </dependency>
        <!-- newer Spring Cloud versions removed bootstrap support by default, so the following dependency is needed -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-bootstrap</artifactId>
        </dependency>
  2. Create a new namespace in Nacos, and create a new configuration in that namespace to store key/value data.

  3. Configure the bootstrap.yml configuration file:

    spring:
      application:
        name: nacos_config_test
      cloud:
        nacos:
          config:
            # config center address
            server-addr: ${NACOS_HOST:moonquakes.club}:${NACOS_PORT:18848}
            namespace: ${NACOS_NAMESPACE:d1733b6f-fb2a-4b55-ba16-742ee239be55}
            group: ${NACOS_GROUP:group1}
            # defaults to properties
            file-extension: yml
            # shared Nacos configuration files
            shared-configs[0]:
              data-id: application.yml
              group: ${NACOS_GROUP:group1}
              refresh: true
            shared-configs[1]:
              data-id: redis.yml
              group: ${NACOS_GROUP:group1}
              refresh: true
            shared-configs[2]:
              data-id: rabbitmq.yml
              group: ${NACOS_GROUP:group1}
              refresh: true
  4. The ${NACOS_USERNAME:XXX} notation means a value can be overridden uniformly by passing a parameter at startup; the default value is the part after the colon.

  5. shared-configs holds the project's shared configuration; common settings are placed there so that changing them in Nacos updates the whole project flexibly.
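The ${NAME:default} resolution rule from the point above can be shown in a few lines. This is an illustrative resolver only (the class PlaceholderDemo is invented here; Spring's actual placeholder handling is more elaborate):

```java
import java.util.Map;

// Illustrative resolver for ${NAME:default}: the environment value wins,
// otherwise the text after the colon is used as the default.
public class PlaceholderDemo {
    public static String resolve(String placeholder, Map<String, String> env) {
        String body = placeholder.substring(2, placeholder.length() - 1); // strip ${ and }
        int colon = body.indexOf(':');
        String key = colon < 0 ? body : body.substring(0, colon);
        String def = colon < 0 ? null : body.substring(colon + 1);
        return env.getOrDefault(key, def);
    }

    public static void main(String[] args) {
        System.out.println(resolve("${NACOS_GROUP:group1}", Map.of()));                    // group1
        System.out.println(resolve("${NACOS_GROUP:group1}", Map.of("NACOS_GROUP", "g2"))); // g2
    }
}
```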

    Note:

  • bootstrap.yml is loaded before application.yml, but newer versions require the spring-cloud-starter-bootstrap dependency shown earlier; otherwise the bootstrap.yml configuration will not take effect.

  • The @RefreshScope annotation enables dynamic refresh: configuration read from Nacos is picked up again after a change, with no project restart needed.

  • With spring.application.name=XXX, the client automatically reads the XXX.yml (default: properties) configuration file from Nacos.

    In Nacos Spring Cloud, the complete format of dataId is as follows:

    ${prefix}-${spring.profiles.active}.${file-extension}

    • prefix defaults to the value of spring.application.name, and can also be set via the configuration item spring.cloud.nacos.config.prefix.

    • spring.profiles.active is the profile of the current environment. Note: when spring.profiles.active is empty, the connector - is also omitted, and the splicing format of dataId becomes ${prefix}.${file-extension}

    • file-extension sets the data format of the configuration content and can be set via the configuration item spring.cloud.nacos.config.file-extension. Currently only the properties and yaml types are supported.

      With the current <spring-cloud-alibaba.version>2021.0.1.0</spring-cloud-alibaba.version> version, when both active and extension are configured, three files are read: 1. ${prefix}, 2. ${prefix}.${file-extension}, 3. ${prefix}-${spring.profiles.active}.${file-extension}
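The splicing rule above can be written down directly. A minimal sketch (the class DataIdFormat is a name invented here):

```java
// Sketch of the dataId splicing rule: the "-" connector disappears
// when spring.profiles.active is empty.
public class DataIdFormat {
    public static String dataId(String prefix, String activeProfile, String fileExtension) {
        if (activeProfile == null || activeProfile.isEmpty()) {
            return prefix + "." + fileExtension;
        }
        return prefix + "-" + activeProfile + "." + fileExtension;
    }

    public static void main(String[] args) {
        System.out.println(dataId("nacos_config_test", "dev", "yml")); // nacos_config_test-dev.yml
        System.out.println(dataId("nacos_config_test", "", "yml"));    // nacos_config_test.yml
    }
}
```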

Nacos as a registry

Registry principle

The Nacos registry uses both pull (client polling) and push (server-initiated push).


  • When the client starts, it packages the current service's information (IP, port, service name, cluster name, etc.) into an Instance object, then creates a scheduled task that periodically sends PUT heartbeat requests with this information to the Nacos server.

  • On receiving a heartbeat request, the Nacos server checks whether the instance exists in the current service list; if not, it re-registers the instance. After registration completes, it immediately starts an asynchronous task to update the instance's last-heartbeat time, and if the instance was unhealthy it is switched back to healthy.

  • After the heartbeat task is created, the client registers the service instance with the Nacos server via a POST request.

  • When the Nacos server receives a registration request, it packages the carried data into an Instance object and creates a Service for it; one Service may contain multiple service instances. Services are stored in a ConcurrentHashMap of the shape Map(namespace, Map(group::serviceName, Service)).

  • When Nacos adds an instance to the corresponding service list, it uses a different protocol depending on whether it is in AP or CP mode.

    • CP mode is based on the Raft protocol: the leader node updates instance data in memory and in disk files, a simplified Raft write is implemented with a CountDownLatch, and more than half of the cluster nodes must write successfully before success is returned to the client.
    • AP mode is based on the Distro protocol: the local service instance is added to a blocking queue as a change task and the local service list is updated; then data-synchronization tasks for the other cluster nodes are created and put into a blocking queue for asynchronous synchronization, and the call returns without waiting for cluster-wide synchronization to finish.
    • When updating the service registry, Nacos uses a copy-on-write approach to prevent concurrent read/write conflicts: it copies the original registry data, applies the addition to the copy, and then swaps the copy in as the real registry.
  • After the update completes, Nacos notifies clients of the service change by publishing a service-change event over UDP. On receiving the UDP message, the client returns an ACK; if the server does not receive the ACK within a certain time it resends, giving up once the retry window expires.

  • The client also pulls service data from the server on a schedule and saves it in a local cache.
    The server triggers a push event on heartbeat checks, service-list changes, and health-status changes, pushing the service list to clients over UDP. UDP does not guarantee reliable delivery, but the Nacos client's scheduled task periodically refreshes the cached service list, so the polling acts as a safety net: data never silently goes stale, and the design provides both real-time updates and reliable eventual refresh.
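The copy-on-write step mentioned above can be sketched as follows. This is an idealized toy (the class CopyOnWriteRegistry is invented here; real Nacos stores Instance objects inside nested maps, not plain strings):

```java
import java.util.ArrayList;
import java.util.List;

// Copy-on-write sketch: readers keep seeing the old list while a writer
// copies it, applies the change, and atomically swaps the copy in.
public class CopyOnWriteRegistry {
    // the "real registry": replaced wholesale, never mutated in place
    private volatile List<String> instances = List.of();

    public List<String> query() { return instances; } // lock-free read

    public synchronized void register(String instance) {
        List<String> copy = new ArrayList<>(instances); // copy the original data
        copy.add(instance);                             // update the copy
        instances = List.copyOf(copy);                  // swap in as the real registry
    }

    public static void main(String[] args) {
        CopyOnWriteRegistry reg = new CopyOnWriteRegistry();
        reg.register("192.168.0.1:8080");
        reg.register("192.168.0.2:8080");
        System.out.println(reg.query()); // [192.168.0.1:8080, 192.168.0.2:8080]
    }
}
```

Readers never block on writers: a query that started before the swap keeps iterating the old immutable list, which is exactly why this pattern avoids concurrent read/write conflicts.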

Provider

  1. Add the service discovery dependency:
        <!-- Nacos service registration and discovery -->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
        </dependency>
  2. In the bootstrap.yml configuration file:

    spring:
      application:
        name: nacos-stock
      profiles:
        # environment profile
        active: ${SPRING_PROFILES_ACTIVE:dev}
      cloud:
        nacos:
          discovery:
            # service registry address
            server-addr: ${NACOS_HOST:moonquakes.club}:${NACOS_PORT:18848}
            namespace: ${NACOS_NAMESPACE:d1733b6f-fb2a-4b55-ba16-742ee239be55}
            group: ${NACOS_GROUP:group1}
    server:
      port: 8080
    
  3. Add the @EnableDiscoveryClient annotation to the main application class to enable service registration and discovery. The service registers under the name ${spring.application.name}.

Service call (consumer)

  1. Add the OpenFeign and LoadBalancer dependencies:
    <!-- Nacos service registration and discovery; Ribbon support removed -->
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- OpenFeign for microservice calls -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
        <!-- do not use Ribbon for client-side load balancing -->
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- microservice load balancing -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-loadbalancer</artifactId>
    </dependency>

The caller defines a client interface (e.g. client.XXXClient) and adds the @EnableFeignClients annotation to the main configuration class:

@FeignClient(value = XXXConstant.SERVICE_NAME) // registered service name of the microservice being called
public interface XXXClient {

    @GetMapping("/getAll")
    String test(@RequestParam("info") String info); // parameters need the @RequestParam annotation
}

LoadBalancer load balancing

Spring Cloud LoadBalancer is the official client-side load balancer provided by Spring Cloud, designed to replace Ribbon. The project integrates LoadBalancer via RestTemplate: simply import the dependency and, together with OpenFeign, microservice calls use round robin by default.

        <!-- provides RestTemplate support -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <!-- Nacos service registration and discovery; Ribbon support removed -->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.cloud</groupId>
                    <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <!-- LoadBalancer -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-loadbalancer</artifactId>
        </dependency>

LoadBalancer provides the following two load-balancing strategies by default:

  • RandomLoadBalancer - random strategy
  • RoundRobinLoadBalancer - round-robin strategy (the default)
  1. Round robin (default): start multiple service instances (with the appropriate program startup parameters); the default round-robin algorithm calls the started instances in sequence, and the order stays fixed.


  2. Random: configure the RandomLoadBalancer class provided by LoadBalancer and add the corresponding annotation to the main application class.

MyLoadBalancerConfig configuration class:

@Configuration
public class MyLoadBalancerConfig {

    @Bean
    ReactorLoadBalancer<ServiceInstance> randomLoadBalancer(Environment environment,
                                                            LoadBalancerClientFactory loadBalancerClientFactory) {
        String name = environment.getProperty(LoadBalancerClientFactory.PROPERTY_NAME);
        // random strategy
        return new RandomLoadBalancer(loadBalancerClientFactory
                .getLazyProvider(name, ServiceInstanceListSupplier.class),
                name);
    }
}

Add the annotation to the main application class:

@LoadBalancerClients(defaultConfiguration = {MyLoadBalancerConfig.class})
  3. Weighted round robin: configure the weights of the distributed services directly in Nacos; Nacos then distributes calls according to each service's weight, favoring larger weights.


  4. Weighted random: on top of the weights, configure the random strategy class provided by LoadBalancer and add the corresponding annotation to the main application class.
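The strategies above boil down to two pick rules, sketched below. This is a hypothetical illustration (the class BalancerSketch is not part of Spring Cloud LoadBalancer; the instance names and weights are made up):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy pick rules: plain round robin, and a weighted random pick in which
// an instance is chosen with probability weight / totalWeight.
public class BalancerSketch {
    public static String roundRobin(List<String> instances, int counter) {
        return instances.get(counter % instances.size()); // fixed sequential order
    }

    public static String weightedRandom(Map<String, Double> weights, Random rnd) {
        double total = weights.values().stream().mapToDouble(Double::doubleValue).sum();
        double point = rnd.nextDouble() * total; // land somewhere on the weight line
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            point -= e.getValue();
            if (point <= 0) return e.getKey();
        }
        return weights.keySet().iterator().next(); // fallback for rounding edge cases
    }

    public static void main(String[] args) {
        List<String> list = List.of("A:8080", "B:8080", "C:8080");
        for (int i = 0; i < 4; i++) System.out.print(roundRobin(list, i) + " "); // A:8080 B:8080 C:8080 A:8080
        System.out.println();
        Map<String, Double> w = new LinkedHashMap<>();
        w.put("A:8080", 3.0); // picked ~75% of the time over many calls
        w.put("B:8080", 1.0); // picked ~25% of the time
        System.out.println(weightedRandom(w, new Random()));
    }
}
```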

OpenFeign microservice remote calls

  1. Add the OpenFeign dependency (Feign provides a load-balanced HTTP client):
    <!-- OpenFeign for microservice calls -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
        <!-- do not use Ribbon for client-side load balancing -->
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
  2. Add the @EnableDiscoveryClient annotation (service discovery) and the @EnableFeignClients annotation (enables Feign calls) to the main application class.
  3. Create an interface for calling the microservice and annotate it with @FeignClient:
    • value: the registered name of the microservice to call
    • path: the common path prefix of the microservice's endpoints
    • contextId: distinguishes different callers; names must not clash
    • fallbackFactory: fault tolerance; lets you customize the error message of each interface

Origin blog.csdn.net/weixin_49339471/article/details/128931874