Spring Cloud Alibaba [Dynamic Refresh of Nacos Configuration, Introduction of Nacos Cluster Architecture, Data Persistence of Nacos, Understanding Distributed Traffic Protection] (5)

 

Table of contents

Distributed configuration center_Nacos configuration dynamic refresh

Distributed configuration center_Connecting Dubbo services to the distributed configuration center

Distributed configuration center_Nacos cluster architecture introduction

Distributed configuration center_Nacos data persistence

Distributed configuration center_Nacos cluster configuration

Distributed Traffic Protection_Understanding Distributed Traffic Protection

Distributed Traffic Protection_Getting to Know Sentinel


 

Distributed configuration center_Nacos configuration dynamic refresh

Configure Dynamic Refresh 

To enable dynamic refresh of the configuration, simply add the @RefreshScope annotation to the class that reads it.

Annotation method

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
/* Just add this annotation to any class that needs to read configuration dynamically */
@RefreshScope
public class ConfigController {

    @Value("${config.config}")
    private String appName;

    @GetMapping("/getConfig")
    public String nacosConfigTest2() {
        return appName;
    }
}
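
For reference, here is a minimal sketch of the matching configuration published in Nacos; the dataId name is an assumption for illustration, while the key config.config matches the @Value above:

# hypothetical dataId, e.g. nacos-config-client.yaml (name assumed)
config:
  config: value read from Nacos, picked up by /getConfig after each publish

After changing and publishing the value in the console, calling /getConfig again should return the new value without restarting the service.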

Real-time effect feedback

1. The Nacos distributed configuration center implements dynamic refresh through the ____ annotation.

A Refresh

B RefreshScope

C Scope

D None of the above

Distributed configuration center_Connecting Dubbo services to the distributed configuration center

Add the dependencies to the POM

<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>

Create the dataId in the console

Create bootstrap.yml 

spring:
  main:
    allow-bean-definition-overriding: true
    allow-circular-references: true
  application:
    # application name
    name: consumer-dubbo-order
  cloud:
    nacos:
      config:
        # dataId prefix, here the application name
        prefix: ${spring.application.name}
        # configuration file suffix
        file-extension: yaml
        # configuration center address
        server-addr: 192.168.66.101:8848
  profiles:
    # active environment
    active: dev
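
With these settings, the dataId created in the console follows the ${prefix}-${spring.profiles.active}.${file-extension} convention, i.e. consumer-dubbo-order-dev.yaml in the DEFAULT_GROUP group. A minimal sketch of its content (the keys are assumptions for illustration):

# dataId: consumer-dubbo-order-dev.yaml, group: DEFAULT_GROUP
order:
  message: order configuration pulled from Nacos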

Test

Request http://localhost:80/order/index

 

Distributed configuration center_Nacos cluster architecture introduction 

Why build a Nacos cluster

By default, Nacos uses the embedded database Derby to store its data. If you start multiple Nacos nodes with the default configuration, each node keeps its own storage and data consistency problems arise. To solve this, Nacos adopts centralized storage to support cluster deployment, and currently only MySQL is supported.

Nacos supports three deployment modes

1. Stand-alone mode - for testing and stand-alone trial.

2. Cluster mode - used in production environments to ensure high availability.

3. Multi-cluster mode - for multi-data center scenarios. 

Cluster mode

Real-time effect feedback

1. Building a Nacos cluster solves the _____ problem.

A Data inconsistency

B Security

C Single point of failure

D None of the above

2. The default embedded database of Nacos is ____.

A SQLite

B MySQL

C Derby

D None of the above

Distributed configuration center_Nacos data persistence

Initialize the database 

The Nacos database script is located in the \nacos\conf directory of the Nacos-server package we downloaded; the initialization file is nacos-mysql.sql. Here we create a database named mynacos and then execute the initialization script. After it succeeds, 11 tables are generated.
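
A minimal sketch of this initialization from the command line, assuming MySQL runs locally and the script path matches the extracted package (adjust the user, password and path to your environment):

# create the database and import the Nacos schema (user, password and path assumed)
mysql -uroot -p123456 -e "CREATE DATABASE mynacos DEFAULT CHARACTER SET utf8mb4;"
mysql -uroot -p123456 mynacos < nacos/conf/nacos-mysql.sql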

Modify the configuration file 

Next we need to modify the configuration file of Nacos-server. Nacos-server is essentially a Java (Spring Boot) project, and its configuration file is application.properties in the nacos\conf directory. Add the data source configuration at the bottom of the file:

spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://127.0.0.1:3306/mynacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true
db.user=root
db.password=123456

Start Nacos-server and Nacos-config 

Start Nacos-server first and open the Nacos console once startup succeeds. The console now has a brand-new look and the previous data is gone.

Note: Because a new data source has been added, Nacos now reads all configuration from MySQL, and the database we just initialized is empty, so naturally there is no data to display.

Create a new configuration file with DataID nacos-config.yml in the public namespace (public), with the following content:

server:
  port: 9989
nacos:
  config: The configuration has been persisted to the database...

Then start the demo project from part (4) of this series. After the service starts successfully, observe the Nacos console as follows.

 

Verify that the configuration is persisted to the database
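
One way to check, sketched against the standard Nacos schema: the configuration just published should now appear as a row in the config_info table of mynacos.

-- each published configuration is stored as a row in config_info
SELECT data_id, group_id, content FROM mynacos.config_info;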

Distributed configuration center_Nacos cluster configuration

Cluster startup

Simulate 3 machines locally with 3 different ports: 8848, 8858 and 8868.

# Copy the extracted nacos directory 3 times and change server.port in each copy's application.properties to 8848, 8858 and 8868 respectively
server.port=8848
server.port=8858
server.port=8868

Place a cluster.conf file in each node's conf directory with the following content:

192.168.66.100:8848
192.168.66.100:8858
192.168.66.100:8868

Start each Nacos node

./startup.sh

Use Nginx as a load balancer to access the Nacos cluster

Environment installation

yum -y install gcc make automake pcre-devel zlib zlib-devel openssl openssl-devel

Install Nginx

./configure
make && make install

Configure nginx.conf file

# define the upstream name, referenced below
upstream nacos {
    # backend Nacos server addresses
    server 192.168.66.100:8848;
    server 192.168.66.100:8858;
    server 192.168.66.100:8868;
}
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://nacos;   # reference the upstream defined above
    }
}

Restart Nginx

docker restart nginx

Accessing Nginx now load-balances requests across the Nacos cluster.

Request http://localhost/nacos
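
Once Nginx fronts the cluster, clients no longer point at a single node; the spring.cloud.nacos.config.server-addr in bootstrap.yml can be switched to the Nginx address instead. A sketch, assuming Nginx listens on 192.168.66.100:80:

spring:
  cloud:
    nacos:
      config:
        # address of the Nginx proxy in front of the Nacos cluster (assumed)
        server-addr: 192.168.66.100:80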

 

Distributed Traffic Protection_Understanding Distributed Traffic Protection 

In a distributed system, calls between services generate distributed traffic. How to protect this traffic with components and control it effectively is one of the technical challenges of distributed systems.

What is Service Avalanche

Suppose we have a microservice system containing four microservices A, B, C and D, all deployed in cluster mode.

Avalanche problem:

Microservices call each other, so the failure of one service in the call chain can make the entire link unavailable.

Solution

Service Protection Technology

Multiple service protection technologies are supported in Spring Cloud: 

1. Hystrix

2. Sentinel

3. Resilience4J

Sentinel Service Fault Tolerance Ideas 

Sentinel is the service fault-tolerance component of Spring Cloud Alibaba, often called the "traffic-defending sentinel". It is the guardian of the core scenarios of Alibaba's Double Eleven promotion and has rich built-in service fault-tolerance scenarios. It takes traffic as the entry point and maintains service stability through various internal and external prevention and control measures.

 

Internal Exception Governance 

In Sentinel, internal exceptions can be handled by degradation and circuit breaking. Degradation means that when a service call times out, throws an exception, and so on, the service executes a piece of "degradation logic" internally instead.
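
A minimal sketch of such degradation logic using Sentinel's annotation support, assuming the Sentinel dependency is on the classpath; the resource name, service and fallback method are illustrative, not taken from the original project:

import com.alibaba.csp.sentinel.annotation.SentinelResource;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    /* fallback is executed when the protected method throws an exception */
    @SentinelResource(value = "queryOrder", fallback = "queryOrderFallback")
    public String queryOrder(String id) {
        // normal business logic, which may fail or respond slowly
        return "order-" + id;
    }

    /* degradation logic: same parameter list, optionally followed by a Throwable */
    public String queryOrderFallback(String id, Throwable ex) {
        return "The order service is temporarily degraded, please try again later";
    }
}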

 

Circuit breaking means that when abnormal calls accumulate to a certain judgment condition, for example when the ratio of exceptions or slow calls reaches a threshold and the number of requests within the statistics window reaches a minimum amount, the microservice stops calling the target service for a period of time and all incoming requests directly execute the degradation logic. A circuit break is therefore the cumulative result of "multiple abnormal service calls".
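
For reference, a sketch of loading such a circuit-breaking rule through the Sentinel core API; the resource name and thresholds are assumptions for illustration:

import java.util.Collections;

import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.degrade.DegradeRule;
import com.alibaba.csp.sentinel.slots.block.degrade.DegradeRuleManager;

public class DegradeRuleConfig {

    /* circuit-break the "queryOrder" resource when too many calls fail */
    public static void initDegradeRule() {
        DegradeRule rule = new DegradeRule();
        rule.setResource("queryOrder");
        // judge by the ratio of exceptional calls
        rule.setGrade(RuleConstant.DEGRADE_GRADE_EXCEPTION_RATIO);
        // open the circuit when 50% of calls fail ...
        rule.setCount(0.5);
        // ... among at least 5 requests in the statistics window
        rule.setMinRequestAmount(5);
        // keep the circuit open for 10 seconds before probing again
        rule.setTimeWindow(10);
        DegradeRuleManager.loadRules(Collections.singletonList(rule));
    }
}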

 

External flow control

Rate limiting is a traffic-shaping and flow-control scheme. In Sentinel, we can set a flow rule for each service according to the processing capacity of the cluster and control external traffic by QPS or by the number of concurrent threads. Once the number of requests exceeds the threshold, subsequent requests "fail fast"; this is the most commonly used flow-control method.
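
A sketch of such a flow rule through the Sentinel core API, with the resource name and threshold assumed for illustration:

import java.util.Collections;

import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;

public class FlowRuleConfig {

    /* limit the "queryOrder" resource to 20 QPS; requests beyond that fail fast */
    public static void initFlowRule() {
        FlowRule rule = new FlowRule();
        rule.setResource("queryOrder");
        // limit by QPS; RuleConstant.FLOW_GRADE_THREAD would limit by concurrent threads instead
        rule.setGrade(RuleConstant.FLOW_GRADE_QPS);
        rule.setCount(20);
        FlowRuleManager.loadRules(Collections.singletonList(rule));
    }
}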

 

Real-time effect feedback

1. ___ can solve the service avalanche problem.

A Timeout mechanism

B Circuit breaker mechanism

C Flow control

D All of the above are correct

2. Sentinel's external flow control refers to _____.

A Circuit breaking

B Degradation

C Rate limiting

D Isolation

Distributed Traffic Protection_Getting to Know Sentinel

Sentinel is an open-source project from Alibaba that provides flow control, circuit breaking and degradation, and system load protection to ensure service stability.

Key Features of Sentinel

 

Sentinel is divided into two parts

1. Console (Dashboard): mainly responsible for managing and pushing rules, monitoring, cluster flow-control allocation management, machine discovery, and so on.

2. Core library (Java client): does not depend on any framework or library, runs on Java 7 and above, and has good support for frameworks such as Dubbo and Spring Cloud.

Notice:

Sentinel can be simply divided into the Sentinel core library and the Dashboard. The core library does not depend on the Dashboard, but combining the two gives the best results.
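
To wire both parts into a Spring Cloud Alibaba project, the usual starting point is the Sentinel starter in the POM plus the address of a running Dashboard in the application configuration; a sketch, assuming the version is managed by the spring-cloud-alibaba BOM and the Dashboard runs locally on port 8080:

<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>

spring:
  cloud:
    sentinel:
      transport:
        # address of the Sentinel Dashboard (assumed)
        dashboard: localhost:8080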

Who is using Sentinel?

 

Comparison of Sentinel with Hystrix and resilience4j 

Real-time effect feedback

1. The advantages of Sentinel include ___.

A Provides an out-of-the-box console

B Rich circuit-breaking and degradation strategies

C Supports traffic shaping

D All of the above are correct


Origin blog.csdn.net/m0_58719994/article/details/131818284