Table of contents
foreword
This article records my notes from studying Dark Horse's SpringCloud course. It is the first part of the practical series, covering the microservice architecture, the Eureka registry, Nacos registration and configuration management, Ribbon, Feign and Gateway in detail; Docker, MQ, ES and other service components will be covered in the second practical part. Finally, thank you for reading, and I hope you gain something from it
Getting to Know Microservices
As the business carried by a monolithic project grows, the project inevitably becomes larger and larger, which makes later maintenance difficult: the project structure becomes very bloated and highly coupled. As a result, service architecture has evolved from monolithic projects to distributed and microservice architectures
monolithic architecture
Monolithic architecture: all of the business functions are developed in one project, packaged into a single artifact and deployed to a server
Advantages:
- simple and convenient
- easy to use
- Low operating difficulty
Disadvantages:
- As the business grows, the codebase gradually becomes bloated
- High coupling makes it hard to maintain
distributed architecture
Distributed architecture: the system is split according to business functions, and each business function module is developed as an independent project, called a service.
Here is a quote from Teacher Yan
The core of distributed architecture is one word: split. As long as a project is split into multiple modules and those modules are deployed separately, it counts as distributed.
Distributed splitting can be divided into horizontal splitting and vertical splitting
Horizontal splitting
Taken literally, horizontal splitting means splitting along the three-tier model: the "three-tier architecture" is split into the presentation layer (jsp + servlet), the business logic layer (service) and the data access layer (dao), which are then deployed separately and integrated across servers via Dubbo or RPC
Vertical splitting
Splitting by business. For example, in a common e-commerce system, the user module can be treated as an independent project; similarly, orders and chat can each be split into an independent project. ==Clearly these three split-out projects can still run as independent projects.== Splitting in this way is called vertical splitting.
Advantages and disadvantages of distributed architecture:
advantage:
- Reduce service coupling
- Conducive to service upgrade and expansion
shortcoming:
- Service call relationship is intricate
microservice architecture
Microservices can be understood as very fine-grained vertical splitting. For example, the "order project" above is itself a vertically split sub-project, yet it can be split further into a "shopping project", a "settlement project" and an "after-sales project".
Microservices are "tiny" services that cannot be split any further, similar to "atomicity"
Four characteristics of microservice architecture :
- Single Responsibility: The granularity of microservice splitting is smaller, and each service corresponds to a unique business capability, so as to achieve a single responsibility
- Autonomy: independent team, independent technology, independent data, independent deployment and delivery
- Service-oriented: services provide a unified standard interface, independent of language and technology
- Strong isolation: service calls are isolated, fault-tolerant, and degraded to avoid cascading problems
The characteristics above effectively set a standard for distributed architecture: they further reduce coupling between services and give each service independence and flexibility, achieving high cohesion and low coupling.
Therefore, microservices can be considered a well-architected distributed solution
①Advantages: The split granularity is smaller, the service is more independent, and the coupling degree is lower
②Disadvantages: The structure is very complex, and the difficulty of operation and maintenance, monitoring, and deployment increases
The following figure is a diagram of a standard microservice architecture
The function and combined structure of each module are shown in the figure below
Note that microservices are not just SpringCloud
Seeing Spring Cloud for the first time
As a member of the larger Spring family, SpringCloud is the most widely used microservice framework in China. It integrates various microservice functional components and, based on SpringBoot, auto-assembles these components, providing a great out-of-the-box experience.
Here are some common components
The bottom layer of SpringCloud depends on SpringBoot, and there is a version compatibility relationship
Microservice Governance
The best-known frameworks in China are SpringCloud and Alibaba's Dubbo. Later, Alibaba released the popular SpringCloudAlibaba framework, which is compatible with both of the former's service protocols (Dubbo and Feign)
Four solutions for implementing a microservice architecture
Distributed service architecture case
Now let’s demonstrate a small demo example of service splitting.
For example, now we split out the user module and the order module; the steps are as follows
① First create a database for each of the two projects and import the corresponding data (sql files) to create the tables
② Create the boot parent project first (if you came from the Bilibili Dark Horse course, just import the demo project from the course materials folder), then create the projects for the other modules
Note that cloud depends on boot under the hood, so the cloud and boot versions must correspond to each other. This is important; the specific correspondence is as follows
For example, below is the pom file of the parent project
Dependencies in the parent project
<dependencyManagement>
    <dependencies>
        <!-- springCloud -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <!-- MySQL driver -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql.version}</version>
        </dependency>
        <!-- mybatis -->
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>${mybatis.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
    </dependency>
</dependencies>
Then create the sub-module projects. The generated boot version will be too high; manually lower it after creation, i.e. adjust the version in the parent pom and refresh
The specific project structure is as follows
Then flesh out the sub-modules: write a basic query business with a mapper layer, service layer and controller layer. Nothing special here; if you use the MybatisPlus API it can be even faster.
Next, start the user-service and order-service projects, then visit http://localhost:8080/order/101 in the browser; you can see the order information has been queried
③ Implement remote calls
However, the user attribute in the figure above is null. That is because there is no user field in the order table. In a traditional monolithic project you would simply query across the two tables, but in a microservice project each module is responsible for its own business and duplicated business logic is not allowed, so a remote call is needed
We need to initiate an http request from order-service to user-service, calling the http://localhost:8081/user/{userId} interface.
The approximate steps are as follows:
- Register an instance of RestTemplate to the Spring container
- Modify the queryOrderById method in the OrderService class in the order-service service, and query the User according to the userId in the Order object
- Fill the queried User into the Order object and return it together
The implementation is as follows
In the startup class in the order-service project, register the RestTemplate instance
@MapperScan("cn.order.mapper")
@SpringBootApplication
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }

    // register a RestTemplate instance in the Spring container
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
Modify the queryOrderById method in the OrderService class under the service layer in the order-service service:
@Service
public class OrderService {

    @Resource
    private OrderMapper orderMapper;

    @Resource
    private RestTemplate restTemplate;

    public Order queryOrderById(Long orderId) {
        // 1. query the order
        Order order = orderMapper.findById(orderId);
        // 2. query the user remotely
        // 2.1 the url address (note the trailing slash before the id)
        String url = "http://localhost:8081/user/" + order.getUserId();
        // 2.2 initiate the call
        User user = restTemplate.getForObject(url, User.class);
        // 3. set the user on the order
        order.setUser(user);
        // 4. return
        return order;
    }
}
Then restart the two services again, and then go to the browser to access the order service, and you will find that the user is also queried
Microservice components and usage
Eureka Registry
provider and consumer
As in the distributed case above, when a module needs data from another module, it calls that module's service remotely. In a service call relationship there are two different roles:
Service provider: in a given business, the service invoked by other microservices (it provides interfaces to other microservices)
Service consumer: in a given business, the service that calls other microservices (it calls the interfaces provided by other microservices)
The roles of service providers and service consumers are not absolute, but relative to business.
The following example is easy to understand
If service A calls service B, and service B calls service C, what is the role of service B?
- For the business of A calling B: A is a service consumer, B is a service provider
- For the business where B calls C: B is a service consumer, and C is a service provider
Therefore, service B can be both a service provider and a service consumer.
The structure and function of Eureka
If the service provider deploys multiple instances , the following problems will occur when the service is called remotely:
- When the order-service initiates a remote call, how does it know the ip address and port of the user-service instance?
- There are multiple user-service instance addresses, how to choose when calling order-service?
- How does order-service know whether a certain user-service instance is still healthy or not?
Under such a clustered project structure, this is where Eureka comes in
Eureka is the registry in SpringCloud, and the most well-known registry
structure is as follows
Answer the previous questions.
Question 1: How does order-service know the address of user-service instance?
The process of obtaining address information is as follows:
- After the user-service service instance is started, register its own information with eureka-server (Eureka server). This is called service registration
- eureka-server saves the mapping relationship between service name and service instance address list
- order-service pulls the instance address list according to the service name. This is called service discovery or service pull
Question 2: How does order-service select a specific instance from multiple user-service instances?
- order-service selects an instance address from the instance list using a load balancing algorithm (such as round robin, random, weight)
- Initiate a remote call to the instance address
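The round-robin idea behind the default selection can be sketched in a few lines of plain Java. This is an illustrative model only, not Ribbon's actual implementation; the class name is invented:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin picker: each call returns the next instance, wrapping around.
public class RoundRobinPicker {
    private final List<String> instances;
    private final AtomicInteger counter = new AtomicInteger(0);

    public RoundRobinPicker(List<String> instances) {
        this.instances = instances;
    }

    // pick the next instance address in strict rotation
    public String next() {
        int i = Math.abs(counter.getAndIncrement()) % instances.size();
        return instances.get(i);
    }

    public static void main(String[] args) {
        RoundRobinPicker picker = new RoundRobinPicker(
                List.of("localhost:8081", "localhost:8082"));
        System.out.println(picker.next()); // localhost:8081
        System.out.println(picker.next()); // localhost:8082
        System.out.println(picker.next()); // localhost:8081 again
    }
}
```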
Question 3: How does order-service know whether a certain user-service instance is still healthy, or is it down?
- User-service will initiate a request to eureka-server every once in a while (default 30 seconds) to report its own status, which is called heartbeat
- When no heartbeat is sent for more than a certain period of time, eureka-server will consider the microservice instance to be faulty and remove the instance from the service list
- When the order-service pulls the service, the faulty instance can be ruled out
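The heartbeat-and-eviction mechanism described above can be modeled as follows. This is a toy sketch with invented names, not Eureka's source code; it only shows the idea of dropping instances whose last heartbeat is too old:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy registry: tracks the last heartbeat time per instance and evicts stale ones.
public class HeartbeatRegistry {
    private final Map<String, Long> lastHeartbeat = new HashMap<>();
    private final long timeoutMillis;

    public HeartbeatRegistry(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // called by an instance to report that it is alive (eureka default: every 30s)
    public void beat(String instance, long nowMillis) {
        lastHeartbeat.put(instance, nowMillis);
    }

    // instances whose last heartbeat falls within the timeout window are healthy
    public List<String> healthyInstances(long nowMillis) {
        List<String> alive = new ArrayList<>();
        for (Map.Entry<String, Long> e : lastHeartbeat.entrySet()) {
            if (nowMillis - e.getValue() <= timeoutMillis) {
                alive.add(e.getKey());
            }
        }
        return alive;
    }

    public static void main(String[] args) {
        HeartbeatRegistry registry = new HeartbeatRegistry(90_000); // 90s eviction window (example)
        registry.beat("localhost:8081", 0);
        registry.beat("localhost:8082", 0);
        registry.beat("localhost:8081", 60_000); // 8081 keeps beating; 8082 goes silent
        System.out.println(registry.healthyInstances(120_000)); // [localhost:8081]
    }
}
```

When order-service pulls the service list, only the healthy instances are returned, which is exactly how faulty instances get ruled out.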
Note: A microservice can be both a service provider and a service consumer, so eureka encapsulates functions such as service registration and service discovery into the eureka-client
Build Eureka service
1. Introduce eureka dependency
Registry server: eureka-server must be an independent microservice (an independent sub-project; a plain maven project is recommended, though a boot project also works). Just add the eureka dependency to its pom file; version information and other dependencies are declared in the parent project, from which it inherits
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
2. Write the startup class
To write a startup class for the eureka-server service, be sure to add a @EnableEurekaServer annotation to enable the registration center function of eureka:
@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }
}
3. Write configuration files
Write an application.yml file under the resource folder
server:
  port: 10086
spring:
  application:
    name: eureka-server
eureka:
  client:
    service-url:
      defaultZone: http://127.0.0.1:10086/eureka
4. Start the service
After the configuration is complete, you can start the service to see if the build is successful, visit http://localhost:10086/
registration service
1. Introduce dependencies
Unlike the dependency used to build the eureka server above, this time the client dependency is introduced
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
Note: Introduce this dependency in the pom file of the service provider
2. Configuration file
In the service provider user-service, modify the application.yml file, add the service name, eureka service address
spring:
  application:
    name: userservice
eureka:
  client:
    service-url:
      defaultZone: http://localhost:10086/eureka
Start multiple user-service instances (optional)
To demonstrate round-robin remote calls when a service has multiple instances, we add another SpringBoot launch configuration and start a second user-service.
First, copy the original user-service startup configuration:
Then, in the pop-up window, make the configuration:
Now, two user-service startup configurations will appear in the SpringBoot window, the first is port 8081, and the second is port 8082.
Start the newly added user-service instance
Now visit http://localhost:10086 to see if the service has been registered successfully
service discovery
Modify the logic of order-service: pull user-service information from eureka-server to implement service discovery.
1. Introduce dependencies
Service discovery and service registration are both encapsulated in the eureka-client dependency, so this step is the same as for service registration.
In the pom file of order-service, introduce the following eureka-client dependencies:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
2. Configuration file
Service discovery also needs to know the eureka address, so the second step is consistent with service registration, which is to configure eureka information:
In order-service, modify the application.yml file and add the service name and eureka address:
spring:
  application:
    name: orderservice
eureka:
  client:
    service-url:
      defaultZone: http://127.0.0.1:10086/eureka
3. Service pull and load balancing
Finally, we are going to pull the instance list of user-service service from eureka-server and implement load balancing.
In the OrderApplication of the service consumer order-service, add the @LoadBalanced annotation to the RestTemplate bean; this enables load balancing (round-robin by default)
Modify the queryOrderById method in the OrderService class of order-service: change the url to use the service name instead of the ip and port:
Spring will automatically help us obtain the instance list from the eureka-server side according to the service name userservice, and then complete the load balancing.
4. Final test results
Successfully called the user-service service remotely to query the user information
Eureka registration service summary
1. Build Eureka Server
- Introduce eureka-server dependency
- Add the @EnableEurekaServer annotation
- Configure the eureka address in application.yml
2. service registration
- Introduce eureka-client dependency
- Configure the eureka address in application.yml
3. service discovery
- Introduce eureka-client dependency
- Configure the eureka address in application.yml
- Add the @LoadBalanced annotation to the RestTemplate
- Make remote calls using the service provider's service name
Ribbon load balancing principle
Ribbon (pronounced roughly "Ruiben"); in case my pronunciation isn't standard, please just remember the spelling
With Eureka service registration done above, services are pulled automatically and load balancing is completed. But when exactly is the service list pulled, and when is load balancing done? Let's explore the principle of load balancing
Principle of load balancing
1. When a service consumer initiates a remote call service request
2. The LoadBalancerInterceptor intercepts the request and does several things:
- request.getURI(): gets the request uri, in this case http://user-service/user/8
- originalUri.getHost(): gets the host name of the uri path, which is actually the service id, i.e. user-service
- this.loadBalancer.execute(): processes the service id and the user request.
3. Stepping further with the debugger into the execute method above (this step obtains the corresponding services registered in eureka and the specified load balancing strategy)
- getLoadBalancer(serviceId): Get ILoadBalancer according to the service id, and LoadBalancer will take the service id to eureka to get the service list and save it.
- getServer(loadBalancer): Use the built-in load balancing algorithm to select one from the service list. In this example, you can see that the service on port 8082 has been obtained
4. Load balancing strategy IRule
In the code above, you can see that a getServer method is used to do the load balancing:
The following is the source code follow-up, keep following to the end, and see who is helping us with load balancing
Finally, here is a load balancing flow chart; this diagram makes it easier to understand the whole process from requesting a remote service to load balancing
The basic process is as follows :
- Intercept our RestTemplate request http://userservice/user/1
- RibbonLoadBalancerClient gets the service name from the request url, i.e. userservice
- DynamicServerListLoadBalancer pulls the service list from eureka according to userservice
- eureka returns the list, localhost:8081, localhost:8082
- IRule uses built-in load balancing rules, select one from the list, such as localhost:8081
- RibbonLoadBalancerClient modifies the request address, replaces userservice with localhost:8081, gets http://localhost:8081/user/1, and initiates a real request
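The final address substitution in the steps above can be sketched like this. It is a toy illustration of what RibbonLoadBalancerClient effectively does, not its real code:

```java
// Toy demo of rewriting a service-name url into a concrete instance url.
public class UrlRewriteDemo {
    // swap the service-name host for the instance address chosen by the IRule
    static String rewrite(String url, String serviceName, String instanceAddr) {
        return url.replaceFirst(serviceName, instanceAddr);
    }

    public static void main(String[] args) {
        String real = rewrite("http://userservice/user/1", "userservice", "localhost:8081");
        System.out.println(real); // http://localhost:8081/user/1
    }
}
```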
load balancing strategy
Ribbon's load balancing rules are defined by an interface called IRule, and each sub-interface is a rule :
The meanings of the different rules are as follows:
| Built-in load balancing rule class | Rule description |
| --- | --- |
| RoundRobinRule | Simply polls the service list to select a server. Ribbon's default load balancing rule. |
| AvailabilityFilteringRule | Ignores two kinds of servers: (1) by default, if a server fails to connect 3 times it is put into a "short circuit" state lasting 30 seconds; if it fails again, the short-circuit duration grows geometrically. (2) Servers with too-high concurrency: if a server's concurrent connection count is too high, a client configured with this rule also ignores it. The concurrency limit can be configured via the client's ..ActiveConnectionsLimit property. |
| WeightedResponseTimeRule | Assigns a weight to each server: the longer the server's response time, the smaller its weight. The rule selects a server at random, with the weight influencing the selection. |
| ZoneAvoidanceRule | Selects servers based on those available in a zone. A Zone can be understood as a machine room, a rack, etc. It then polls among the services in the Zone. |
| BestAvailableRule | Ignores short-circuited servers and chooses the one with lower concurrency. |
| RandomRule | Randomly selects an available server. |
| RetryRule | Selection logic with a retry mechanism |
The default implementation is ZoneAvoidanceRule, which is a polling scheme
Custom load balancing strategy
The load balancing rules can be modified by defining the IRule implementation. There are two ways:
1. Code method: in the OrderApplication class (the startup class) of order-service, define a new IRule bean:
@Bean
public IRule randomRule() {
    return new RandomRule();
}
This is a global configuration, and the order-service will follow the strategy of this configuration when calling other microservices
2. Configuration file method: In the application.yml file of order-service, adding new configurations can also modify the rules:
userservice: # configure a load balancing rule for a specific microservice, here the userservice service
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RandomRule # load balancing rule
The configuration added in the yml file is only valid for the current microservice and is a local configuration
Note that the default load balancing rules are generally used without modification.
lazy loading
Ribbon uses lazy loading by default: the LoadBalancerClient is created only on first access, so the first request takes a long time.
Eager loading creates it at project startup, reducing the first-visit latency. Enable eager loading with the following configuration:
ribbon:
  eager-load:
    enabled: true
    clients: userservice
Nacos Registration Center
Since domestic companies generally favor Alibaba's technology, SpringCloudAlibaba also launched a registry called Nacos, which has more features and is even more widely used than Eureka.
Know how to install nacos
nacos1.4.1 download: nacos download
Extraction code: olww
After the download is complete, open a cmd window in the bin directory
Enter the following command; nacos starts in cluster mode by default, so here we set it to standalone mode
startup.cmd -m standalone
Then open the following address in the browser
http://localhost:8848/nacos/index.html
The username and password are both nacos, log in
Nacos Quick Start
1. Import the SpringCloudAlibaba dependency in the parent project
<!-- declared in the <dependencyManagement> section of the parent pom -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-alibaba-dependencies</artifactId>
    <version>2.2.6.RELEASE</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
2. Introduce the nacos-discovery dependency in the pom files of user-service and order-service
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
Remember to comment out the original eureka dependency
3. Configure the nacos address
Add nacos configuration to the yml file of the project that imports nacos dependencies
spring:
  cloud:
    nacos:
      server-addr: localhost:8848
4. Start the services, then check them on the Nacos web page opened earlier
Nacos service hierarchical storage model
Generally, large companies build clusters for disaster recovery, to ensure that when a local service goes down, things still run normally (by accessing services elsewhere). For example, when the Hangzhou server fails and goes down, the service in Shanghai is accessed instead to keep the functionality running; they act as backups for each other.
Nacos groups the instances in the same machine room into a cluster.
user-service is a service; a service can contain multiple clusters, such as Hangzhou and Shanghai, and each cluster can have multiple instances. This forms a hierarchical model with three layers: Service - Cluster - Instance.
When microservices call each other, they should access instances in the same cluster whenever possible, because local access is faster; other clusters should only be accessed when the local cluster is unavailable.
Configure the cluster for the service
Add the cluster name to the yml configuration file; the name itself can be chosen freely
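A minimal sketch of that configuration (the cluster name HZ is an arbitrary example; the property path follows the spring-cloud-alibaba nacos-discovery starter):

```yaml
spring:
  cloud:
    nacos:
      server-addr: localhost:8848
      discovery:
        cluster-name: HZ # cluster name, e.g. the Hangzhou machine room
```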
NacosRule load balancing strategy
userservice:
  ribbon:
    NFLoadBalancerRuleClassName: com.alibaba.cloud.nacos.ribbon.NacosRule # load balancing rule
① Prefer the service instance list of the same cluster
② If no provider can be found in the local cluster, look in other clusters and issue a warning
③ Once the list of available instances is determined, use random load balancing to select an instance
Weight load balancing
In actual deployment, such a scenario will appear:
The performance of server equipment is different. Some instances have better performance, while others have poorer performance. We hope that machines with better performance can bear more user requests.
But by default, NacosRule is randomly selected in the same cluster, without considering the performance of the machine.
But we want the more capable machines to do more work, which is where weight configuration comes in. By setting different weights, the access frequency can be controlled: the greater the weight, the higher the access frequency.
When the weight is 0, the server receives no traffic at all
In the past, updating or upgrading a service required a restart. Upgrading in broad daylight would cause failures for users, so upgrades were often done secretly in the dead of night. Not convenient.
One use of the weight strategy is that during an update, the weight of the corresponding server is lowered and a small number of users are let in to test whether the newly launched feature works, achieving a smooth upgrade.
For example, when a certain game issues an announcement and keeps updating its servers, that's how it's done
Instance weight control
① The Nacos console can set an instance's weight value
② Weights of instances in the same cluster range from 0 to 1; the higher the weight, the more often the instance is accessed
③ If the weight is set to 0, the instance is never accessed
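Weight-proportional selection can be sketched in plain Java. This is an illustrative model, not Nacos's source code; instance addresses and weights are made up:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Toy weighted picker: the probability of choosing an instance is proportional
// to its weight; an instance with weight 0 is never chosen.
public class WeightedPicker {
    static String pick(Map<String, Double> weights, Random rnd) {
        double total = weights.values().stream().mapToDouble(Double::doubleValue).sum();
        double r = rnd.nextDouble() * total; // a point in [0, total)
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            r -= e.getValue();
            if (r < 0) return e.getKey(); // landed inside this instance's slice
        }
        return null; // unreachable when total > 0
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("localhost:8081", 0.9); // strong machine: most traffic
        weights.put("localhost:8082", 0.1); // just upgraded: a trickle for smoke testing
        weights.put("localhost:8083", 0.0); // weight 0: receives nothing
        Random rnd = new Random(42);
        Map<String, Integer> hits = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            hits.merge(pick(weights, rnd), 1, Integer::sum);
        }
        System.out.println(hits); // roughly a 9:1 split, with 8083 absent
    }
}
```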
Nacos environment isolation
By default, all services, data, and groups are in the same namespace, named public
Nacos provides a namespace to achieve environmental isolation.
- There can be multiple namespaces in nacos
- There can be group, service, etc. under the namespace
- Different namespaces are isolated from each other, such as services in different namespaces are invisible to each other
The specific operation is as follows
The namespace is used for environment isolation, each namespace has a unique id, and services under different namespaces are invisible
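Attaching a service to a namespace can be sketched as below. The id value is a placeholder; copy the real namespace id from the Nacos console. The property path follows the spring-cloud-alibaba nacos-discovery starter:

```yaml
spring:
  cloud:
    nacos:
      server-addr: localhost:8848
      discovery:
        namespace: 492a7d5d-237b-46a1-a99a-fa8e98e4b0f9 # namespace id (placeholder example)
```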
The difference between Nacos and Eureka
Nacos (pronounced roughly "na-cos"); Eureka (pronounced roughly "yoo-ree-ka"). My English pronunciation isn't standard, so I'm writing this down, lest next time I chat with someone I can only spell the words out, leaving them wondering whether we're even talking about the same technology.
Back to business
Nacos actively pushes the registered service list to service consumers. If a service is down, it will immediately push a new service list.
Eureka regularly pulls the service list from the registration center, so its update efficiency of the service list is slightly lower than that of Nacos.
Nacos service instances are divided into two types:
- Temporary instance: if the instance is down for more than a certain period of time (fails to actively send heartbeats), it is removed from the service list. This is the default type.
- Non-temporary instance: nacos actively probes the instance for heartbeat information; if the instance goes down it is not removed from the service list. Also called a permanent instance.
Set instance type in configuration
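A minimal sketch of that setting (the spring-cloud-alibaba nacos-discovery `ephemeral` property):

```yaml
spring:
  cloud:
    nacos:
      discovery:
        ephemeral: false # false = non-temporary (permanent) instance; the default true means temporary
```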
final summary
What Nacos and Eureka have in common
- Both support service registration and service pull
- Both support the service provider's heartbeat method for health detection
Differences between Nacos and Eureka
- Nacos supports the server to actively detect the provider status: the temporary instance adopts the heartbeat mode, and the non-temporary instance adopts the active detection mode
- Temporary instances with abnormal heartbeat will be removed, while non-temporary instances will not be removed
- Nacos supports the message push mode of service list changes, and the service list updates more timely
- The Nacos cluster adopts the AP mode by default. When there are non-temporary instances in the cluster, the CP mode is adopted; Eureka adopts the AP mode
Nacos management configuration
Unified configuration management
scenes to be used
When there are hundreds or thousands of microservices in a cluster and the configuration of one of them needs to change, the hundreds or thousands of services that call it remotely would all need to be restarted, which is practically impossible in a production environment
Therefore, we need a unified configuration management solution that can centrally manage the configuration of all instances
On the one hand, Nacos can centrally manage the configuration, and on the other hand, when the configuration changes, the microservice can be notified in time to realize hot update of the configuration
Add configuration information
Note: only core configuration that requires hot updates needs to be managed by nacos; configuration that will not change is better kept locally in the microservice.
Pulling configuration in microservices
The microservice needs to pull the configuration managed in nacos and merge it with the local application.yml configuration to complete the project startup.
Configuration such as the nacos address lives in application.yml, but if application.yml has not been read yet, how can the nacos address be known?
So spring introduces a new configuration file: bootstrap.yaml, which is read before application.yml
1. Import the Nacos configuration management dependency
<!-- nacos configuration management dependency -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
2. Add bootstrap.yaml
Then, add a bootstrap.yaml file in the resource folder
spring:
  application:
    name: userservice # service name
  profiles:
    active: dev # environment, here dev
  cloud:
    nacos:
      server-addr: localhost:8848 # Nacos address
      config:
        file-extension: yaml # file suffix
Here, the nacos address will be obtained according to spring.cloud.nacos.server-addr, and then the corresponding configuration will be read according to name, active, and extension
3. Add the nacos configuration and read it
Add business logic in UserController in user-service, and read the pattern.dateformat configuration added in Nacos:
Complete date formatting according to our specified format and return
Indicates that the configuration information in Nacos has been pulled successfully
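The controller logic amounts to formatting the current time with the pattern pulled from Nacos. Here is a self-contained sketch; in the real service the pattern arrives via @Value("${pattern.dateformat}"), whereas here it is hard-coded to keep the example runnable on its own:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Toy version of the controller's formatting step, minus the Spring wiring.
public class DateFormatDemo {
    // format a timestamp with the configured pattern
    static String format(LocalDateTime time, String dateformat) {
        return time.format(DateTimeFormatter.ofPattern(dateformat));
    }

    public static void main(String[] args) {
        String pattern = "yyyy-MM-dd HH:mm:ss"; // example value stored under pattern.dateformat in Nacos
        System.out.println(format(LocalDateTime.of(2023, 1, 2, 3, 4, 5), pattern));
        // prints 2023-01-02 03:04:05
    }
}
```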
Configure Hot Update
After the configuration file in Nacos is changed, the microservice can perceive it without restarting (that is, refreshing the webpage directly will update the configuration). It can be achieved by the following two configuration methods:
method one
Add the annotation @RefreshScope to the class where the variable injected by @Value is located:
way two
Use the @ConfigurationProperties annotation instead of the @Value annotation.
In the user-service service, add a class to read the pattern.dateformat property:
@Component
@Data
@ConfigurationProperties(prefix = "pattern")
public class PatternProperties {
    private String dateformat;
}
Use this class instead of @Value in UserController:
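For example (a sketch using the earlier demo's names; the endpoint path is an assumption):

```java
@RestController
public class UserController {
    @Autowired
    private PatternProperties properties; // 代替 @Value,天然支持热更新

    @GetMapping("/user/now")
    public String now() {
        return LocalDateTime.now()
                .format(DateTimeFormatter.ofPattern(properties.getDateformat()));
    }
}
```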
configuration sharing
Some properties have the same value across environments such as development and testing. To avoid having to modify them one by one whenever the value changes, configuration sharing is introduced: the common configuration is placed in a shared configuration file, much like a public static variable in a class.
When the microservice starts, it reads multiple configuration files from Nacos, for example:
- [spring.application.name]-[spring.profiles.active].yaml, for example: userservice-dev.yaml
- [spring.application.name].yaml, for example: userservice.yaml
[spring.application.name].yaml does not contain an environment, so it can be shared by multiple environments.
Configuration priority
Remote dedicated configuration > remote shared configuration > local configuration
Build a Nacos cluster
In the learning stage we do not have that many machines, so we build a simplified version with all nodes configured on the local machine.
Prerequisite: build a MySQL cluster and initialize the database tables (a single MySQL database is fine if resources are limited).
①Decompress the nacos compressed package
②Enter the conf directory of nacos, modify the configuration file cluster.conf.example, and rename it to cluster.conf:
③ Then add the node list (since we do not have three real servers, they all use the local IP; any unused ports will do):
127.0.0.1:8845
127.0.0.1:8846
127.0.0.1:8847
④ Modify the application.properties file and add the database configuration
⑤ Copy the nacos folder three times, and then modify the application.properties in the three folders respectively,
nacos1:
server.port=8845
nacos2:
server.port=8846
nacos3:
server.port=8847
⑥ Then start the three nacos nodes: run startup.cmd in each bin directory. It starts in cluster mode by default, so just double-click it.
⑦Use nginx for reverse proxy
Modify the conf/nginx.conf file, the configuration is as follows:
Just copy it in directly
upstream nacos-cluster {
    server 127.0.0.1:8845;
    server 127.0.0.1:8846;
    server 127.0.0.1:8847;
}

server {
    listen       80;
    server_name  localhost;

    location /nacos {
        proxy_pass http://nacos-cluster;
    }
}
⑧ The application.yml file configuration in the code is as follows:
spring:
  cloud:
    nacos:
      server-addr: localhost:80 # Nacos地址
From now on, new configurations created in Nacos are stored in the database, completing persistence.
optimization
- In actual deployment, the nginx server acting as the reverse proxy should be given a domain name, so that the Nacos client configuration does not have to change if the server is migrated later.
- The Nacos nodes should be deployed on multiple different servers for disaster recovery and isolation.
Feign remote call
Service registration and the Nacos configuration center have been covered, but service invocation so far still uses the RestTemplate.
The code that used the RestTemplate to initiate a remote call before:
There are the following problems:
• Poor code readability and inconsistent programming experience
• URLs with complex parameters are difficult to maintain
Feign is a declarative http client, its role is to help us elegantly implement the sending of http requests and solve the problems mentioned above.
Feign replaces RestTemplate
①引入Feign依赖
Introduce the feign dependency in the pom file:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
②添加注解
Add annotations to the startup class to enable Feign's functionality:
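For example, on order-service's startup class (a sketch; the class name follows the demo):

```java
@EnableFeignClients // 开启Feign的功能
@SpringBootApplication
public class OrderApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}
```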
③编写Feign的客户端
The following is an example of the order-service of the previous demo
@FeignClient("userservice")
public interface UserClient {
    @GetMapping("/user/{id}")
    User findById(@PathVariable("id") Long id);
}
This client is mainly based on SpringMVC annotations to declare remote call information, such as:
- Service name: userservice
- Request method: GET
- Request path: /user/{id}
- Request parameter: Long id
- Return value type: User
In this way, Feign can help us send http requests without using RestTemplate to send them ourselves.
④在业务方法中替换以前的RestTemplate
Unlike before, the URL no longer has to be concatenated in the business code, which was hard to read and not concise.
In the end the remote call still completes and the code is cleaner. After a few more requests you will also find that Feign not only performs service discovery but also load balancing.
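The replacement in the business method looks roughly like this (a sketch; OrderService, OrderMapper and the Order fields are assumptions based on the demo):

```java
@Service
public class OrderService {
    @Autowired
    private OrderMapper orderMapper;
    @Autowired
    private UserClient userClient; // 注入Feign客户端,代替RestTemplate

    public Order queryOrderById(Long orderId) {
        // 1.查询订单
        Order order = orderMapper.findById(orderId);
        // 2.用Feign发起远程调用,代替原来手动拼URL的RestTemplate代码
        User user = userClient.findById(order.getUserId());
        // 3.封装并返回
        order.setUser(user);
        return order;
    }
}
```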
⑤总结
Steps to use Feign:
① Introduce dependency
② Add @EnableFeignClients annotation
③ Write the FeignClient interface
④ Use the method defined in FeignClient instead of RestTemplate
custom configuration
Feign can support many custom configurations, as shown in the following table:
type | effect | illustrate |
---|---|---|
feign.Logger.Level | Modify the log level | Four levels: NONE, BASIC, HEADERS, FULL |
feign.codec.Decoder | Parser for the response result | Parses the result of the http remote call, e.g. a json string into a Java object |
feign.codec.Encoder | Request parameter encoding | Encodes request parameters so they can be sent via http |
feign.Contract | Supported annotation format | Defaults to SpringMVC annotations |
feign.Retryer | Failure retry mechanism | Retry mechanism for failed requests; off by default, though Ribbon's retry still applies |
Under normal circumstances, the default value is enough for us to use. If you want to customize it, you only need to create a custom @Bean to override the default Bean.
configuration file method
Modifying feign's log level based on the configuration file can target a single service:
feign:
  client:
    config:
      userservice: # 针对某个微服务的配置
        loggerLevel: FULL # 日志级别
It is also possible to target all services:
feign:
  client:
    config:
      default: # 这里用default就是全局配置,如果是写服务名称,则是针对某个微服务的配置
        loggerLevel: FULL # 日志级别
Java code method
You can also modify the log level based on Java code, first declare a class, and then declare a Logger.Level object:
public class DefaultFeignConfiguration {
    @Bean
    public Logger.Level feignLogLevel(){
        return Logger.Level.BASIC; // 日志级别为BASIC
    }
}
If you want to take effect globally , put it in the @EnableFeignClients annotation of the startup class:
@EnableFeignClients(defaultConfiguration = DefaultFeignConfiguration.class)
If it is locally effective , put it in the corresponding @FeignClient annotation:
@FeignClient(value = "userservice", configuration = DefaultFeignConfiguration.class)
The log level is divided into four types:
- NONE: Do not record any log information, which is the default value.
- BASIC: Only log the request method, URL, response status code and execution time
- HEADERS: On the basis of BASIC, the header information of the request and response is additionally recorded
- FULL: Record details of all requests and responses, including header information, request body, and metadata.
You can use FULL when debugging errors, but usually use NONE and BASIC
Feign performance optimization
The bottom layer of Feign initiates http requests and relies on other frameworks. Its underlying client implementation includes:
• URLConnection: the default implementation; it does not support connection pooling, so performance is poor. A connection pool avoids the overhead of repeatedly creating and destroying connections (each new connection costs a TCP three-way handshake and four-way teardown)
• Apache HttpClient: support connection pool
• OKHttp: support connection pool
Therefore, the main means to improve the performance of Feign is to use the connection pool instead of the default URLConnection.
Apache's HttpClient is used here to demonstrate.
①引入依赖
<!--httpClient的依赖 -->
<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-httpclient</artifactId>
</dependency>
②配置文件中做相应的配置
feign:
  client:
    config:
      default: # default全局的配置
        loggerLevel: BASIC # 日志级别,BASIC就是基本的请求和响应信息
  httpclient:
    enabled: true # 开启feign对HttpClient的支持
    max-connections: 200 # 最大的连接数
    max-connections-per-route: 50 # 每个路径的最大连接数
In summary, Feign's optimization:
1. Use BASIC (or NONE) as the log level where possible
2. Use HttpClient or OKHttp instead of URLConnection:
① Introduce the feign-httpclient dependency
② Enable httpclient support in the configuration file and set the connection pool parameters
Best Practices for Feign
Best practice is the experience predecessors accumulated by stepping on pitfalls; it is the recommended way to use Feign.
feign client:
UserController:
Observation shows that the Feign client is very similar to the service provider's controller code. To avoid writing this repetitive code, there are two approaches:
Inheritance
The same code can be shared through inheritance:
1) Define an API interface that declares the methods with SpringMVC annotations.
2) Both the Feign client and the Controller inherit (implement) that interface.
advantage:
- simple and easy
- code sharing
shortcoming:
- Service provider and service consumer are tightly coupled
- Annotation mappings on methods and parameters are not inherited, so the method, parameter list, and annotations must be declared again in the Controller
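A sketch of the inheritance approach (names follow the demo; the controller body and UserService are illustrative assumptions):

```java
// 1) 公共API接口,基于SpringMVC注解声明方法
public interface UserAPI {
    @GetMapping("/user/{id}")
    User findById(@PathVariable("id") Long id);
}

// 2) Feign客户端继承该接口,无需重复声明方法
@FeignClient("userservice")
public interface UserClient extends UserAPI {
}

// 2) 服务提供方Controller也实现该接口
// 注意:方法上的注解不会被继承,Controller中仍需重新声明
@RestController
public class UserController implements UserAPI {
    @Autowired
    private UserService userService; // 假设的业务层

    @Override
    @GetMapping("/user/{id}")
    public User findById(@PathVariable("id") Long id) {
        return userService.queryById(id);
    }
}
```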
Extraction method
Extract Feign's Client as an independent module, and put the POJO related to the interface and the default Feign configuration into this module, and provide it to all consumers.
For example, the default configurations of UserClient, User, and Feign are all extracted into a feign-api package, and all microservices can be used directly by referencing this dependency package.
Disadvantages: Some dependencies that are not required by services are also introduced uniformly
Summarize
Best practices for Feign:
①让controller和FeignClient继承同一接口
②将FeignClient、POJO、Feign的默认配置都定义到一个项目中,供所有消费者使用
Code
The following is the implementation of the second method - extraction
The first step is to create a feign module as a unified api, and copy the UserClient, User, and DefaultFeignConfiguration written in the order-service in the previous demo to the feign-api project
The second step is to introduce feign's starter dependency in feign-api
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
In the third step, delete the entity class and the Feign client from the previous order-service, and import the newly written feign-api module in its pom file.
Then change all imports of those three components in order-service to import them from the feign-api package.
The fourth step is to inject into the Spring container
When the defined FeignClient is not in the scope of SpringBootApplication's scan package, these FeignClients cannot be used. There are two ways to solve it:
Method 1: Specify the package where FeignClient is located
@EnableFeignClients(basePackages = "cn.itcast.feign.clients")
Method 2: Specify FeignClient bytecode
@EnableFeignClients(clients = {UserClient.class})
The second approach is generally recommended, since it is more precise.
Gateway service gateway
Get to know the Gateway first
Gateway is the gatekeeper of our services and the unified entrance of all microservices.
The three core functions of the gateway are as follows
①Privilege control : As the entrance of microservices, the gateway needs to verify whether the user is eligible for the request, and intercept it if not.
②Routing and load balancing : All requests must first pass through the gateway, but the gateway does not process business, but forwards the request to a microservice according to certain rules. This process is called routing. Of course, when there are multiple target services for routing, load balancing is also required.
③Limiting : When the request flow is too high, the gateway will release the request according to the speed that the downstream microservice can accept to avoid excessive service pressure .
There are two types of gateway implementations in Spring Cloud:
- gateway
- Zuul
Zuul is a Servlet-based implementation and belongs to blocking programming. Spring Cloud Gateway is based on WebFlux provided in Spring 5, which is an implementation of responsive programming and has better performance.
Gateway Quick Start
To realize the basic routing function of the gateway, the basic steps are as follows:
- Create a SpringBoot project gateway and introduce gateway dependencies
- Write startup class
- Write basic configuration and routing rules
- Start the gateway service for testing
Code
1.创建一个gateway模块作为服务,引入gateway和nacos服务发现依赖
It is recommended to create a plain Maven module; if you create a Spring Boot project instead, pay attention to its version.
<!--网关-->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
<!--nacos服务发现依赖-->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
2.在gateway模块中编写启动类
@SpringBootApplication
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
3.配置yml文件,给其添加对应配置信息
server:
  port: 10010 # 网关端口
spring:
  application:
    name: gateway # 服务名称
  cloud:
    nacos:
      server-addr: localhost:8848 # nacos地址
    gateway:
      routes: # 网关路由配置
        - id: user-service # 路由id,自定义,只要唯一即可
          # uri: http://127.0.0.1:8081 # 路由的目标地址 http就是固定地址
          uri: lb://userservice # 路由的目标地址 lb就是负载均衡,后面跟服务名称
          predicates: # 路由断言,也就是判断请求是否符合路由规则的条件
            - Path=/user/** # 这个是按照路径匹配,只要以/user/开头就符合要求
The Path predicate proxies every request whose URI matches the rule to the address given by uri. Here, requests beginning with /user/ are proxied to lb://userservice; lb means load balancing: the service list is pulled by service name and an instance is chosen from it.
4.启动网关服务,访问网关服务端口,测试结果如下图,可以通过网关然后访问到服务
Error 503
With newer versions of Spring Cloud/Nacos you must add the spring-cloud-starter-loadbalancer dependency, which replaces Ribbon.
Flowchart of gateway routing
Finally, summarize the process steps
Gateway construction steps:
- Create a project, introduce the nacos service discovery and gateway dependencies
- Create a GatewayApplication startup class
- Configure application.yml, including basic service information, the nacos address, and routing

Routing configuration includes:
- Route id: the unique identifier of the route (often the service name; it just must not repeat)
- Routing destination (uri): the target address of the route; http means a fixed address, lb means load balancing by service name
- Routing assertions (predicates): rules for judging whether a request matches the route
- Route filters (filters): process the request or response
assertion factory
The assertion rules we write in the configuration file are just strings; they are read and processed by a Predicate Factory and converted into routing conditions. For example, Path=/user/** matches by request path.
Assertion: a logical judgment in a program (an expression that evaluates to true or false) used to express and verify the developer's expectation. When execution reaches the assertion it should hold; if it does not, execution aborts with an error message.
In plain words: a judgment whose result is true or false.
There are more than a dozen assertion factories in SpringCloudGateway:
name | illustrate | example |
---|---|---|
After | Request after a certain point in time | - After=2037-01-20T17:42:47.789-07:00[America/Denver] |
Before | Request before a certain point in time | - Before=2031-04-13T15:14:47.433+08:00[Asia/Shanghai] |
Between | Request between two certain points in time | - Between=2037-01-20T17:42:47.789-07:00[America/Denver], 2037-01-21T17:42:47.789-07:00[America/Denver] |
Cookie | The request must contain certain cookies | - Cookie=chocolate, ch.p |
Header | The request must contain certain headers | - Header=X-Request-Id, \d+ |
Host | The request must access a certain host (domain name) | - Host=**.somehost.org,**.anotherhost.org |
Method | The request method must be one of the specified ones | - Method=GET,POST |
Path | The request path must conform to the specified rules | - Path=/red/{segment},/blue/** |
Query | The request parameters must contain the specified parameter | - Query=name, Jack or - Query=name |
RemoteAddr | The requester's IP must be in the specified range | - RemoteAddr=192.168.1.1/24 |
Weight | Weight processing |
You do not need to memorize these; look them up as you use them. Generally, mastering the Path predicate is enough.
Summarize:
What is the role of PredicateFactory?
Read user-defined assertion conditions and make judgments on requests
What does Path=/user/** mean?
If the path starts with /user, it is considered to be in compliance
filter factory
GatewayFilter is a filter provided in the gateway, which can process the requests entering the gateway and the responses returned by microservices:
Types of Route Filters
Spring provides 31 different route filter factories. The following are several common filters:
name | illustrate |
---|---|
AddRequestHeader | Add a request header to the current request |
RemoveRequestHeader | Remove a request header from the request |
AddResponseHeader | Add a response header to the response result |
RemoveResponseHeader | Remove a response header from the response result |
RequestRateLimiter | Limit request traffic |
Here we take the request-header filter as an example.
Requirement: add a request header Hello World to every request entering userservice.
Just modify the application.yml of the gateway service and add a route filter:
spring:
  cloud:
    gateway:
      routes:
        - id: user-service
          uri: lb://userservice
          predicates:
            - Path=/user/**
          filters: # 过滤器
            - AddRequestHeader=Head, Hello World # 添加请求头
Test
The result is as follows
Default filters
The filter added above only applies to its own route. To make a filter apply to all routes, configure a default filter.
To take effect for all routes, write the filter factory under default-filters:
spring:
  cloud:
    gateway:
      routes:
        - id: user-service
          uri: lb://userservice
          predicates:
            - Path=/user/**
      default-filters: # 默认过滤项
        - AddRequestHeader=Head, Hello World
Summary
What does a filter do?
① It processes a route's request or response, for example adding a request header
② A filter configured under a route only applies to requests of that route
What does defaultFilters do?
Filters under defaultFilters apply to all routes.
Global filter
Although the default filter already filters all routes, it cannot be customized beyond the built-in factories.
A global filter likewise processes every request entering the gateway and every microservice response, the same role as GatewayFilter. The difference: a GatewayFilter is defined via configuration and its processing logic is fixed, while a GlobalFilter's logic is implemented in your own code.
It is defined by implementing the GlobalFilter interface.
Custom logic written in filter can implement, for example:
- Login state checks
- Permission verification
- Request rate limiting
Custom global filter
Example
Requirement: define a global filter that intercepts requests and checks whether the request parameters satisfy the following conditions:
- the parameters contain authorization
- the value of the authorization parameter is admin
If both are satisfied the request is released, otherwise it is intercepted.
Define a filter in gateway:
@Order(-1) specifies the execution order of the filter: when there are many filters, the smaller the value, the earlier it executes.
@Order(-1)
@Component
public class AuthorizeFilter implements GlobalFilter {
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // 1.获取请求参数
        MultiValueMap<String, String> params = exchange.getRequest().getQueryParams();
        // 2.获取authorization参数
        String auth = params.getFirst("authorization");
        // 3.校验
        if ("admin".equals(auth)) {
            // 放行
            return chain.filter(exchange);
        }
        // 4.拦截
        // 4.1.禁止访问,设置状态码
        exchange.getResponse().setStatusCode(HttpStatus.FORBIDDEN);
        // 4.2.结束处理
        return exchange.getResponse().setComplete();
    }
}
The result is shown below
Filter execution order
A request entering the gateway meets three kinds of filters: the current route's filters, DefaultFilter, and GlobalFilter.
After requesting routing, the current routing filter, DefaultFilter, and GlobalFilter will be merged into a filter chain (collection), and each filter will be executed in turn after sorting:
sorting rules
- Every filter must specify an int order value; the smaller the order value, the higher the priority and the earlier it executes.
- A GlobalFilter specifies its order value by implementing the Ordered interface or adding the @Order annotation; we set the value ourselves.
- The order of route filters and defaultFilter is assigned by Spring, increasing from 1 in declaration order by default.
- When order values are equal, execution follows the sequence defaultFilter > route filter > GlobalFilter.
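The sorting rules above can be illustrated with a toy comparator (this is not the actual Gateway source; the filter names and the typeRank tie-breaker encoding defaultFilter > route filter > GlobalFilter are illustrative):

```java
import java.util.Comparator;
import java.util.List;

public class FilterOrderDemo {
    // (名称, order值, 类型优先级):order相同时按 defaultFilter(1) > 路由过滤器(2) > GlobalFilter(3)
    record F(String name, int order, int typeRank) {}

    static List<String> sort(List<F> filters) {
        return filters.stream()
                .sorted(Comparator.comparingInt(F::order).thenComparingInt(F::typeRank))
                .map(F::name)
                .toList();
    }

    public static void main(String[] args) {
        List<String> chain = sort(List.of(
                new F("globalFilter", -1, 3),   // order更小,最先执行
                new F("defaultFilter", 1, 1),
                new F("routeFilter", 1, 2)));   // 与defaultFilter同order,排在其后
        System.out.println(chain); // [globalFilter, defaultFilter, routeFilter]
    }
}
```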
cross-domain issues
Cross-origin: requests whose origins differ are cross-origin, mainly:
- Different domain names: www.taobao.com vs www.taobao.org, www.jd.com vs miaosha.jd.com
- Same domain name, different ports: localhost:8080 vs localhost:8081

The cross-origin problem: the browser forbids cross-origin ajax requests between the request initiator and the server; the request is intercepted by the browser.
Solution: CORS
CORS Detailed Explanation
Solve cross-domain problems
In the application.yml file of the gateway service, add the following configuration:
spring:
  cloud:
    gateway:
      globalcors: # 全局的跨域处理
        add-to-simple-url-handler-mapping: true # 解决options请求被拦截问题
        corsConfigurations:
          '[/**]':
            allowedOrigins: # 允许哪些网站的跨域请求
              - "http://localhost:8090"
            allowedMethods: # 允许的跨域ajax的请求方式
              - "GET"
              - "POST"
              - "DELETE"
              - "PUT"
              - "OPTIONS"
            allowedHeaders: "*" # 允许在请求中携带的头信息
            allowCredentials: true # 是否允许携带cookie
            maxAge: 360000 # 这次跨域检测的有效期
to be continued
Due to space limitations, the use and analysis of the remaining microservice components (Docker, MQ, ES) will be recorded in the next part. Thank you for reading.