Article Directory
Spring Cloud notes
To learn a framework, the most important thing is to first understand its design philosophy.
Name | Spring Cloud first generation | Spring Cloud second generation |
---|---|---|
Gateway | Spring Cloud Zuul | Spring Cloud Gateway |
Registry | Eureka (no longer updated), Consul, ZooKeeper | Alibaba Nacos, PPDai Radar, etc. |
Configuration center | Spring Cloud Config | Alibaba Nacos, Ctrip Apollo, etc. |
Client-side load balancing | Ribbon | spring-cloud-loadbalancer |
Circuit breaker | Hystrix | Resilience4J, Alibaba Sentinel |
The second generation of Spring Cloud (self-developed components plus proven third-party components):
- Spring Cloud Gateway: gateway
- Spring Cloud LoadBalancer: client-side load balancer
- Spring Cloud Resilience4J: service protection
- Spring Cloud Alibaba Nacos: service registry
- Spring Cloud Alibaba Nacos: distributed configuration center
- Spring Cloud Alibaba Sentinel: service protection
- Spring Cloud Alibaba Seata: distributed transaction framework
- Alibaba Cloud OSS: cloud storage
- Alibaba Cloud SchedulerX: distributed task scheduling platform
- Alibaba Cloud SMS: distributed SMS system
Nacos Distributed Configuration Center and Distributed Service Registry
Background
Nacos grew out of the need to manage service URLs in RPC remote calls.
RPC (Remote Procedure Call): put simply, one node requests a service provided by another node.
Common RPC / remote-call frameworks: HttpClient, gRPC, Dubbo, REST, OpenFeign, etc.
Problems with traditional RPC remote calls:
- Timeouts: the client sends a request but the server does not respond in time; the client must not be left blocked and waiting forever.
- Security: data encryption (HTTPS, token transmission), rate limiting, service protection, etc.
- URL address management between services: in a distributed / microservice architecture, the dependencies between services are very complex. If interface addresses are hard-coded, every later address change has to be fixed by hand in every caller.
A registry essentially stores the call address of each interface: IP and port.
Interface address = IP:port/interface-name; the interface name itself rarely changes.
Well-known registries: ZooKeeper (which Dubbo relies on), Eureka, Consul, Nacos, Redis, or even a database.
Their biggest feature: dynamic perception of providers coming online and going offline.
Commonly used terms in microservice architecture design:
- Producer (provider): exposes an interface for other services to call
- Consumer: calls an interface provided by another service
- Registry: stores the interface call addresses and provides dynamic perception
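The role of a registry described above can be sketched with a minimal in-memory class (the `SimpleRegistry` name and API are hypothetical, for illustration only; real registries such as Nacos, Eureka, and ZooKeeper add heartbeats, health checks, and change notifications for dynamic perception):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal sketch of what a registry stores: service name -> list of "ip:port" addresses.
public class SimpleRegistry {
    private final Map<String, List<String>> services = new ConcurrentHashMap<>();

    // A provider registers its address under its service name.
    public void register(String serviceName, String address) {
        services.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    // A provider that shuts down (or fails a heartbeat) is removed,
    // so consumers no longer see a dead address.
    public void deregister(String serviceName, String address) {
        services.getOrDefault(serviceName, new CopyOnWriteArrayList<>()).remove(address);
    }

    // A consumer looks up the current addresses instead of hard-coding a URL.
    public List<String> lookup(String serviceName) {
        return services.getOrDefault(serviceName, List.of());
    }
}
```

This is exactly why address changes stop being a manual chore: the consumer asks the registry at call time instead of baking the URL into its code.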
If Nacos fails to start right after downloading with:
java.lang.IllegalArgumentException: db.num is null  (the error message)
the cause is that Nacos starts in cluster mode by default, but only a single node exists. Switch to standalone mode by editing startup.cmd:
rem set MODE="cluster"   (cluster mode, before the change)
set MODE="standalone"    (standalone mode)
Alternatively, pass the mode on the command line: startup.cmd -m standalone
Client-side (local) load balancer: the load-balancing algorithm runs locally in the consumer, which depends on the registry for the address list.
Application scenarios: Dubbo, Feign, and other RPC client frameworks. Framework: Ribbon (client side).
Server-side load balancer: the load-balancing algorithm runs entirely on the server side.
Typical scenario: load balancing across Tomcat instances. Framework: Nginx.
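The client-side idea can be sketched with a minimal round-robin picker (the `RoundRobinBalancer` name is hypothetical; this is an illustration of the technique, not Ribbon's or Spring Cloud LoadBalancer's actual code, and both also offer random, weighted, and response-time based strategies):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a client-side (local) load balancer: the picking algorithm runs
// inside the consumer process, over an address list fetched from the registry.
public class RoundRobinBalancer {
    private final AtomicInteger counter = new AtomicInteger();

    // Cycles through the addresses in order; thread-safe via the atomic counter.
    public String choose(List<String> addresses) {
        if (addresses.isEmpty()) throw new IllegalStateException("no provider available");
        int index = Math.floorMod(counter.getAndIncrement(), addresses.size());
        return addresses.get(index);
    }
}
```

A server-side balancer such as Nginx does the same kind of selection, but on the server side, before the request ever reaches an application instance.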
OpenFeign client
OpenFeign is a declarative HTTP client: remote services are called through interfaces and annotations rather than hand-written HTTP code.
As a distributed configuration center, Nacos is lightweight, and its deployment, operations, and learning costs are all low.
Import the dependency, then add the annotation:
If "Feign client not found" is reported, the @EnableFeignClients annotation is missing.
OpenFeign clients come with a client-side load balancer by default.
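The "declarative" idea behind OpenFeign can be illustrated with a plain-Java dynamic proxy. All names below (`GetMapping`, `OrderClient`, `build`) are hypothetical stand-ins so the sketch stays self-contained; Feign itself builds and sends real HTTP requests and plugs in a load balancer to pick the target host:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// The caller only declares an annotated interface; a dynamic proxy turns
// each method call into a request. The "request" here is just a string.
public class DeclarativeClientSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface GetMapping { String value(); }

    public interface OrderClient {
        @GetMapping("/orders/latest")
        String latestOrder();
    }

    // Builds a proxy that, for each annotated method, "sends" a GET request.
    @SuppressWarnings("unchecked")
    public static <T> T build(Class<T> type, String baseUrl) {
        InvocationHandler handler = (proxy, method, args) -> {
            GetMapping mapping = method.getAnnotation(GetMapping.class);
            return "GET " + baseUrl + mapping.value();
        };
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type}, handler);
    }
}
```

With Feign, `baseUrl` is not a fixed host either: the service name on the client annotation is resolved through the registry and the built-in load balancer.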
Nacos Configuration Center
Principle of a distributed configuration center:
- Administrators publish configuration files, which are persisted to disk
- The configuration center server listens for client connections
- The local project (client) connects to the configuration center server; the server reads the configuration file and returns it to the client
- When the server detects that a configuration file has changed, it notifies the local project to refresh its configuration
Data ID = service name - version . file extension
bootstrap.yml is loaded with a higher priority than application.yml.
The default configuration format is file-extension: properties,
so to use YAML you need to set file-extension: yaml.
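As a sketch, a minimal bootstrap.yml for pulling configuration from Nacos might look like this (the service name and address are placeholders; the keys follow the spring-cloud-starter-alibaba-nacos-config conventions):

```yaml
spring:
  application:
    name: order-service            # becomes the first part of the Data ID
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848
        file-extension: yaml       # default is properties; must match the Data ID suffix
```

With this configuration, the resolved Data ID would be order-service.yaml.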
To make configuration properties refresh dynamically, add this annotation:
@RefreshScope
Multiple versions: append the version to the name.
- IDES: Internet Demonstration and Evaluation System
- DEV: Development System
- QAS: Quality Assurance System
- UAT: User Acceptance Test
- PRD: Production System
The required version is appended in the configuration file name; the addressing rule is service name + version + extension, e.g. application.name-prd.yaml.
High availability
In cluster mode, the nodes cannot share configuration that lives on a single node's local disk, so the configuration is saved in a database instead.
First import the database with the provided SQL script, then configure the data source and restart.
After connecting to the database, configuration information is stored in the database.
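As a sketch, the data-source section of the Nacos server's conf/application.properties might look like this (all values are placeholders, and the exact keys vary between Nacos versions):

```properties
# Tell the Nacos server to persist configuration in MySQL instead of
# its embedded database (placeholder values).
spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://127.0.0.1:3306/nacos_config?characterEncoding=utf8
db.user=nacos
db.password=nacos
```

Note that db.num is the very property from the startup error shown earlier: in cluster mode Nacos insists on an external database being configured.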
Nacos cluster configuration
Rename the cluster.conf.example file to cluster.conf (remove the .example suffix) and add the cluster IPs and port numbers,
then modify the port number of each Nacos instance.
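A cluster.conf sketch, assuming three nodes (the addresses are placeholders); the file lists one node per line as ip:port:

```
# conf/cluster.conf — one Nacos node per line (example addresses)
192.168.0.11:8848
192.168.0.12:8849
192.168.0.13:8850
```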
Definition of a transaction
Business logic either commits or rolls back as a whole to ensure data consistency.
So: either commit, or roll back.
- Atomicity (A): either everything commits or everything rolls back
- Consistency (C): the data moves from one valid state to another valid state
- Isolation (I): transactions executing concurrently do not affect each other
- Durability (D): once a transaction is committed or rolled back, the result is permanent and cannot be affected afterwards
CAP principle:
The CAP principle, also known as the CAP theorem, concerns consistency (Consistency), availability (Availability), and partition tolerance (Partition tolerance) in a distributed system. It states that at most two of the three can be satisfied at the same time; it is impossible to guarantee all three.
Frequently asked: what are the differences between the Nacos, Eureka, and ZooKeeper registries?
ZooKeeper uses the ZAB protocol at the bottom layer to keep data consistent between nodes: it is a CP system. While a ZK cluster is in the middle of a leader election, it may be temporarily unavailable, and clients may be unable to read the latest interface addresses from it. A ZK cluster also requires more than half of its machines to be alive; if that quorum is not met, the entire cluster becomes temporarily unavailable. All of this is done to guarantee data consistency.
In ZK, the cluster must elect a leader and follower nodes: a centralized design.
For a pure registry, an AP system such as Eureka is recommended.
Centralized design: elect a leader, with the remaining nodes as followers. Decentralized design: all nodes are equal.
Nacos supports both CP and AP starting from version 1.0; the default is AP mode.
Eureka supports AP, guaranteeing availability.
ZooKeeper is CP, guaranteeing data consistency.
AP: guarantees availability.
CP: guarantees data consistency.
The ZooKeeper election mechanism:
- First compare the zxid values; the larger (i.e. newer) zxid wins
- If the zxids are equal, compare the myid values; the larger myid wins
A ZK centralized cluster elects a leader and follower nodes; there can be only one leader role and multiple follower nodes.
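The two comparison steps above can be sketched in plain Java (the `ZkVoteSketch` name is hypothetical; this is a simplified illustration, not ZooKeeper's actual FastLeaderElection, which also compares election epochs before zxids):

```java
// Sketch of ZooKeeper's vote comparison during leader election:
// the candidate with the larger zxid (newer data) wins; if the zxids
// are equal, the larger myid (server id) wins.
public class ZkVoteSketch {
    public static final class Vote {
        final long zxid;  // last transaction id seen by this server
        final long myid;  // server id from the myid file
        public Vote(long zxid, long myid) { this.zxid = zxid; this.myid = myid; }
    }

    // Returns the vote that should win the pairwise comparison.
    public static Vote prefer(Vote a, Vote b) {
        if (a.zxid != b.zxid) return a.zxid > b.zxid ? a : b;  // newer data first
        return a.myid >= b.myid ? a : b;                       // tie-break on myid
    }
}
```

Preferring the larger zxid ensures the elected leader holds the most up-to-date data, which is how ZK maintains its CP guarantee across elections.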