Consul is service management software from HashiCorp, and Spring Cloud Consul provides the Spring Cloud integration for it. Consul has the following features:
- Service registration: service instances automatically register and deregister their network locations
- Health check: detects whether a service instance is up and running
- Distributed configuration: ensures that all service instances use the same configuration
A Consul agent runs in one of two modes: Server or Client. The Server/Client distinction exists only at the Consul cluster level and has nothing to do with the application services built on top of the cluster. Agents in Server mode maintain the state of the Consul cluster; the official recommendation is to run at least three agents in Server mode per cluster, while there is no limit on the number of Client nodes.
Installing Consul
Download Consul from https://www.consul.io/downloads.html ; this article uses the Linux 64-bit version.
After downloading and unzipping, run ./consul in the extracted directory to list the commands Consul provides. Running consul [command] --help shows the detailed usage of a specific command.
Execute the following command to start a Consul agent:
```shell
./consul agent -dev -client 192.168.140.215
```
-dev creates a server node for a development environment. In this mode nothing is persisted (no data is written to disk), so it is suitable for development but not for production. -client 192.168.140.215 makes the agent reachable by clients at the IP address 192.168.140.215 (the IP of the Linux host used in this article).
After it starts, the default port is 8500; visit http://192.168.140.215:8500
At this point only the consul service itself is registered. Next, we create a service provider and a service consumer.
Server-Provider
Create a Spring Boot project, version 2.0.2.RELEASE, with artifactId server-provider and Spring Cloud version Finchley.RELEASE:
```xml
<!-- Versions as stated above; spring-cloud-starter-consul-discovery
     provides the Consul service discovery support -->
<properties>
    <spring-cloud.version>Finchley.RELEASE</spring-cloud.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-consul-discovery</artifactId>
    </dependency>
</dependencies>
```
Then add the following configuration to the configuration file:
```yaml
server:
  port: 9000
spring:
  application:
    name: server-provider
  cloud:
    consul:
      host: 192.168.140.215
      port: 8500
      discovery:
        service-name: server-provider
        register-health-check: true
        health-check-interval: 10s
        health-check-path: /check  # path served by TestController's check method below
```
spring.cloud.consul.host and spring.cloud.consul.port configure Consul's IP and port; spring.cloud.consul.discovery.service-name configures the name under which this service is registered in Consul; spring.cloud.consul.discovery.register-health-check enables health checks; spring.cloud.consul.discovery.health-check-interval sets the health-check interval to 10 seconds; and spring.cloud.consul.discovery.health-check-path configures the health-check path.
Next, create a TestController:
```java
@RestController
public class TestController {

    // endpoint polled by Consul's health check
    @GetMapping("/check")
    public String check() {
        return "ok";
    }

    // endpoint for the service consumer to call
    @GetMapping("/hello")
    public String hello() {
        return "hello";
    }
}
```
The check method is used for health checks, and TestController also provides a hello method for the service consumer to call later.
The default value of spring.cloud.consul.discovery.health-check-path is /actuator/health; if you use that default, you also need to add the spring-boot-starter-actuator dependency.
Finally, to enable service registration and discovery, add the @EnableDiscoveryClient annotation to the Spring Boot entry class:
```java
@SpringBootApplication
@EnableDiscoveryClient
public class ServerProviderApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServerProviderApplication.class, args);
    }
}
```
Once everything is ready, package the project and start two instances, on ports 9000 and 9001. After they start, visit the Consul UI again:
The service provider has registered successfully. Next, we build the service consumer.
Server-Consumer
Create a Spring Boot project, version 2.0.2.RELEASE, with artifactId server-consumer and Spring Cloud version Finchley.RELEASE:
```xml
<properties>
    <spring-cloud.version>Finchley.RELEASE</spring-cloud.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-consul-discovery</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
</dependencies>
```
spring-boot-starter-actuator is included for the default health check.
Configure application.yml:
```yaml
server:
  port: 9002
spring:
  application:
    name: server-consumer
  cloud:
    consul:
      host: 192.168.140.215
      port: 8500
      discovery:
        service-name: server-consumer
        # the default health-check path /actuator/health is used,
        # served by spring-boot-starter-actuator
```
Likewise, to enable service registration and discovery, add the @EnableDiscoveryClient annotation to the entry class.
Next, create a TestController to consume the hello service provided by Server-Provider:
```java
@RestController
public class TestController {

    private static final String SERVER_ID = "server-provider";

    @Autowired
    private DiscoveryClient discoveryClient;

    @Autowired
    private LoadBalancerClient loadBalancerClient;

    // list all instances registered in Consul under server-provider
    @GetMapping("/uri")
    public List<ServiceInstance> uri() {
        return discoveryClient.getInstances(SERVER_ID);
    }

    // pick an instance via the load balancer and call its /hello endpoint
    @GetMapping("/hello")
    public String hello() {
        ServiceInstance instance = loadBalancerClient.choose(SERVER_ID);
        String url = String.format("http://%s:%s/hello",
                instance.getHost(), instance.getPort());
        return new RestTemplate().getForObject(url, String.class);
    }
}
```
The value of SERVER_ID is the name under which the service provider is registered in the Consul registry, i.e. server-provider. Through DiscoveryClient we can obtain the information of every service instance named server-provider; through LoadBalancerClient we can pick a service instance in a load-balanced way, and then call the service through a RestTemplate.
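To make the load-balanced selection concrete, here is a minimal, self-contained sketch of the round-robin strategy that Ribbon applies by default when choosing among instances. The class name and instance addresses are illustrative, not part of the project:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of round-robin instance selection, the default Ribbon rule.
public class RoundRobinSketch {

    private final List<String> instances;
    private final AtomicInteger position = new AtomicInteger();

    RoundRobinSketch(List<String> instances) {
        this.instances = instances;
    }

    // Each call returns the next instance in turn, wrapping around at the end.
    String choose() {
        int i = Math.floorMod(position.getAndIncrement(), instances.size());
        return instances.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(
                List.of("192.168.140.215:9000", "192.168.140.215:9001"));
        for (int k = 0; k < 4; k++) {
            System.out.println(lb.choose()); // alternates between the two instances
        }
    }
}
```

This mirrors why repeated calls to /hello below land on the 9000 and 9001 instances in turn.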
Package and deploy the project, then check the Consul console:
Visit http://192.168.140.215:9002/uri:
We can see that we successfully obtained the two concrete instances of the service named server-provider.
Call http://192.168.140.215:9002/hello several times:
The console output shows that the service calls are balanced across the instances.
In addition, Consul has Ribbon built in, so we can also make load-balanced service calls through a RestTemplate annotated with @LoadBalanced:
```java
@Configuration
public class RestTemplateConfig {

    // A @LoadBalanced RestTemplate resolves service names
    // (e.g. http://server-provider/hello) to concrete instances via Ribbon
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
```
The effect is the same.
Consul cluster
Above we only started a single-node Consul agent in -dev mode; in production you need to build a Consul cluster to ensure high availability.
Commonly used commands when building a Consul cluster:
Command | Description | Example
---|---|---
agent | Run a Consul agent | consul agent -dev
join | Join the agent to a Consul cluster | consul join IP
members | List the members of the Consul cluster | consul members
leave | Remove the node from its cluster | consul leave
Three Linux servers are prepared, configured as follows:

No. | Node IP | Node name | Role
---|---|---|---
1 | 192.168.140.215 | consul-server-215 | server
2 | 192.168.140.213 | consul-server-213 | server
3 | 192.168.140.216 | consul-server-216 | server & web UI
Download and unzip Consul on these three servers, then create a data directory under the extracted root directory.
Since we already started Consul on 215 earlier, first run killall consul to kill the process, then execute the following command:
```shell
nohup ./consul agent -server -bind 192.168.140.215 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=data -node=consul-server-215 &
```
Here is what the command above means:

- -server starts the agent in server mode
- -bind binds the agent to the current Linux host's IP (some servers have multiple network interfaces, and bind forces a specific IP)
- -client specifies the IP clients may use to access the agent (Consul has a rich set of APIs; "client" here means a browser or caller); 0.0.0.0 places no restriction on client IPs
- -bootstrap-expect=3 means the server cluster expects a minimum of 3 nodes and will not work properly below that (note: as with ZooKeeper, clusters usually have an odd number of nodes to ease leader election; Consul uses the Raft algorithm)
- -data-dir specifies the data directory (the directory must exist)
- -node sets the node name
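The note about -bootstrap-expect and odd cluster sizes comes down to Raft's majority quorum: a cluster of n servers needs a strict majority, n/2 + 1, alive to elect a leader and commit writes. A quick arithmetic sketch (the class name is illustrative):

```java
// Raft majority quorum: how many servers must be alive, and how many
// failures a cluster of a given size can tolerate.
public class Quorum {

    static int quorum(int servers) {
        return servers / 2 + 1; // strict majority
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 2, 3, 4, 5}) {
            System.out.printf("servers=%d quorum=%d tolerated failures=%d%n",
                    n, quorum(n), n - quorum(n));
        }
        // A 3-server cluster has a quorum of 2 and tolerates 1 failure;
        // 4 servers tolerate no more failures than 3 do, which is why
        // odd cluster sizes are preferred.
    }
}
```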
Next, run the following command on the 213 server:
```shell
nohup ./consul agent -server -bind 192.168.140.213 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=data -node=consul-server-213 &
```
Finally, run the following command on 216:
```shell
nohup ./consul agent -server -bind 192.168.140.216 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=data -node=consul-server-216 -ui &
```
Compared with the previous two commands, this one adds the -ui option, which enables the management UI.
Then run the following command on both 213 and 215:
```shell
./consul join 192.168.140.216
```
Now 213 and 215 have joined 216, forming a three-node cluster. Run ./consul members to check:
Visit http://192.168.140.216:8500:
Visit http://192.168.140.215:9002/hello:
Now run killall consul on 215 to kill the Consul service, then run ./consul members on 216:
We can see that the 215 node is down. Visit http://192.168.140.215:9002/hello again:
The service is still obtained successfully.
As we can see, although the Consul address configured in application.yml is 192.168.140.215:8500, because we built a Consul cluster, registration information propagates to the whole cluster; even when node 215 goes down, microservices can still obtain service registration information from the other Consul nodes when they start.