Reposted from: https://www.jianshu.com/p/fa41434d444a
Foreword
A previous article described how to use Consul and Registrator in a Docker container environment to build service registration and discovery for a cluster. Building on that foundation, this article introduces the reverse proxy server Nginx and the Consul-template component to implement dynamic load balancing for services.
Main Text
1. Introduction to the Tools
1.1. Nginx
A high-performance HTTP server and reverse proxy that load-balances front-end traffic and forwards requests to back-end application servers.
1.2. Consul-template
Consul-template is an extensible tool from HashiCorp built on top of Consul. It listens for changes in Consul's data and dynamically rewrites configuration files from templates. It is commonly used to keep the reverse proxy configuration of Nginx or HAProxy in sync with the real-time health status of back-end services.
2. How It Works
- Nginx itself performs the load balancing and request forwarding;
- Consul-template's config function monitors changes to the Consul cluster's nodes and service data in real time;
- Using the real-time node information from Consul, the Nginx configuration file is re-rendered from a template and the configuration is reloaded.

Consul-template and nginx must be installed on the same machine, because Consul-template needs to dynamically modify the nginx configuration file nginx.conf and then run nginx -s reload to update the routing, achieving dynamic load balancing.
2.1. Traditional Load Balancing
With traditional load balancing, the Client accesses Nginx, which forwards the request to one of the back-end Web Servers. Whenever a back-end Web Server is added or removed, operations staff must manually edit nginx.conf and then reload the configuration before the load-balancing pool reflects the change.
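For reference, a traditional static configuration hard-codes the back-end pool, roughly like the sketch below (illustrative only; the upstream name is made up and the addresses reuse hosts from this article):

```nginx
# Static back-end pool: every change requires a manual edit
# of nginx.conf followed by `nginx -s reload`.
upstream web_servers {
    server 192.168.1.182:80;
    server 192.168.1.183:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://web_servers;
    }
}
```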
2.2. Automatic Load Balancing
Now look at load balancing based on automatic service discovery and registration. The load-balancing model itself has not changed; there are just a few more peripheral components. These components are invisible to the Client, which still sees only the Nginx entry point, and the way it accesses the service does not change.
The implementation process for Nginx dynamic load balancing is as follows (see the template sketch after this list):

- In Consul, the Web Servers are marked and classified with the same service tag, and Web Server nodes are added or removed.
- Registrator monitors the status updates of the Web Servers and automatically registers them with, or deregisters them from, the Consul service registry.
- Consul-template subscribes to registration events from the Consul service registry; a push message from Consul means the state of some Web Server node has changed.
- Consul-template automatically rewrites the nginx configuration file from its template on the Nginx server and reloads the service, achieving automatic load balancing.
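Consul-template templates use Go template syntax. A minimal sketch of what such an Nginx template could look like, assuming the service name my-web-server used later in this article (the actual template shipped in the liberalman/nginx-consul-template image may differ):

```nginx
# nginx.conf.ctmpl - one upstream entry is rendered per healthy instance
upstream app {
    {{range service "my-web-server"}}
    server {{.Address}}:{{.Port}};
    {{end}}
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
    }
}
```

Whenever the set of healthy my-web-server instances changes, Consul-template re-renders this file and triggers the reload command.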
3. Preparing the Environment
3.1. System Environment
Software | Version |
---|---|
Operating system | Ubuntu 16.04 x86_64, kernel 4.8.0-58-generic |
Docker | Docker version 1.12.6, build 78d1802 |
docker-compose | docker-compose version 1.8.0 |
3.2. Node Planning
Host IP | Deployed Components |
---|---|
192.168.1.181 | Consul Server, Registrator, Nginx, Consul-template |
192.168.1.186 | Consul Server, Registrator, Nginx, Consul-template |
192.168.1.182 | Consul Client, Registrator, Client WebApp1, Server WebApp1, Server WebApp2 |
192.168.1.183 | Consul Client, Registrator, Client WebApp2, Server WebApp3, Server WebApp4 |
192.168.1.185 | Consul Client, Registrator, Client WebApp3, Server WebApp5, Server WebApp6 |
- Client WebApp: provides a Thrift-based RPC client and an HTTP-based RESTful client for accessing the Server programs.
- Server WebApp: provides a Thrift-based RPC server and an HTTP-based RESTful server for the Client programs to call.
Each of the three hosts 192.168.1.182, 192.168.1.183 and 192.168.1.185 runs one Client WebApp container and two Server WebApp containers, used to simulate load balancing at the service layer.
3.3. Building the Images
- Consul: consul:latest
- Registrator: gliderlabs/registrator:latest
- Nginx and Consul-template: liberalman/nginx-consul-template:latest
- Client WebApp: test-client:latest
- Server WebApp: test-server:latest
Let's first go over how the test-client and test-server images are built:
- Clone the project to your local environment: https://github.com/ostenant/spring-cloud-starter-thrift
- Switch to the test directory under the submodule spring-cloud-starter-thrift-examples and run mvn clean package to package the programs.
- Copy the Dockerfile from the root of the test-client and test-server projects, together with the target/*.jar artifacts under the target directory, to directories on 192.168.1.182, 192.168.1.183 and 192.168.1.185.
- In the directory containing the client Dockerfile, build the image for the client program test-client: docker build . -t test-client:latest
- In the directory containing the server Dockerfile, build the image for the server program test-server: docker build . -t test-server:latest
After the build completes, check the local image repository:
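(The original screenshot is missing here; an equivalent check from the shell, using the standard docker images command, would be:)

```sh
# Confirm the two freshly built images are present locally
docker images | grep -E 'test-client|test-server'
```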
3.4. Deployment Model
Of the five hosts, 192.168.1.181 and 192.168.1.186 serve the following main purposes:

- They act as load-balancing forwarders (for demonstration only here; HA for Nginx could be achieved with KeepAlived), forwarding front-end traffic to the back-end Client WebApps according to a load-balancing algorithm (the first hop).
- They run Consul nodes in Server mode, one of which acts as the leader of the whole service discovery and registration cluster, synchronizing and persisting the data and state of the other three Client-mode Consul nodes.
The remaining three hosts, 192.168.1.182, 192.168.1.183 and 192.168.1.185, play the following roles:

- Each deploys a Consul node in Client mode to register and discover the services exposed by the local docker containers, and synchronizes service state with the leader of the Consul Servers.
- Each starts one Client WebApp container instance and two Server WebApp container instances; a request received by a Client WebApp is forwarded a second time, according to the service-layer load-balancing algorithm, to any one of the Server WebApps, which completes the actual business processing.
There are thus two forwarding steps:

- Access-layer forwarding: the two Nginx servers forward client traffic (the first hop) to any one of the three Client WebApp service instances.
- Service-layer forwarding: one of the three Client WebApp instances, based on the list of healthy services cached from the service registry, forwards the request (the second hop) to one of the six Server WebApp instances for processing.
3.5. Setting It Up
3.5.1. Consul Server Hosts
(a). Write a docker-compose.yml for each host. Note that Registrator must be configured with its host's own IP address.
- Host: 192.168.1.181

docker-compose.yml:

```yaml
version: '2'
services:
  load_balancer:
    image: liberalman/nginx-consul-template:latest
    hostname: lb
    links:
      - consul_server_master:consul
    ports:
      - "80:80"
  consul_server_master:
    image: consul:latest
    hostname: consul_server_master
    ports:
      - "8300:8300"
      - "8301:8301"
      - "8302:8302"
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    command: consul agent -server -bootstrap-expect 1 -advertise 192.168.1.181 -node consul_server_master -data-dir /tmp/data-dir -client 0.0.0.0 -ui
  registrator:
    image: gliderlabs/registrator:latest
    hostname: registrator
    links:
      - consul_server_master:consul
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    command: -ip 192.168.1.181 consul://192.168.1.181:8500
```
- Host: 192.168.1.186

docker-compose.yml:

```yaml
version: '2'
services:
  load_balancer:
    image: liberalman/nginx-consul-template:latest
    hostname: lb
    links:
      - consul_server_slave:consul
    ports:
      - "80:80"
  consul_server_slave:
    image: consul:latest
    hostname: consul_server_slave
    ports:
      - "8300:8300"
      - "8301:8301"
      - "8302:8302"
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    command: consul agent -server -join=192.168.1.181 -advertise 192.168.1.186 -node consul_server_slave -data-dir /tmp/data-dir -client 0.0.0.0 -ui
  registrator:
    image: gliderlabs/registrator:latest
    hostname: registrator
    links:
      - consul_server_slave:consul
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    command: -ip 192.168.1.186 consul://192.168.1.186:8500
```
(b). On each of the two hosts, start the multi-container application with docker-compose:
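The command, which is the same one used again in section 3.5.2, is:

```sh
docker-compose up -d
```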
This is the output when the start command is run on host 192.168.1.181. You can see that docker-compose first checks whether the target images have already been pulled locally, and then creates and starts the container instances configured in docker-compose.yml one by one.
(c). Check the container processes and confirm that the Consul, Registrator and Nginx/Consul-template containers have all started normally.
(d). Using docker-compose, start the configured container service instances on host 192.168.1.186 in the same way, and check their startup status:
(e). Visit http://IP:8500 to view the Consul Server node information and the service registration list.
- Node information:
- Service status list:
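Besides the web UI, the same information can be queried from Consul's standard HTTP API, for example:

```sh
# List the cluster's nodes
curl http://192.168.1.181:8500/v1/catalog/nodes

# List the registered services
curl http://192.168.1.181:8500/v1/catalog/services
```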
The container service instances on both Consul Server hosts have started normally!
3.5.2. Consul Client Hosts
Normally, when using Consul as the service registration and discovery center, we rely on its Service Definition and Health Check Definition features. The related configuration is described below:
Service definition
Environment Variable | Value | Description |
---|---|---|
SERVICE_ID | web-001 | A GUID or a more readable identifier; must be unique |
SERVICE_NAME | web | If no ID is set, Consul uses the name as the id, which may cause registration to fail |
SERVICE_TAGS | nodejs,web | Comma-separated service tags that developers can use for queries |
SERVICE_IP | internal IP | An IP address reachable by Consul |
SERVICE_PORT | 50001 | The application's port; an application listening on multiple ports should be treated as multiple applications |
SERVICE_IGNORE | Boolean | Whether to ignore this container; set it on containers that do not need to be registered |
Health check definition
The configuration pattern is SERVICE_XXX_*, where XXX is the listening port. If your application listens on port 5000, use SERVICE_5000_CHECK_HTTP, and configure the other environment variables in the same way.
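For example, the environment section for an application listening on port 5000 might look like the following sketch (the /health path is an assumption):

```yaml
environment:
  # HTTP health check for the service on container port 5000
  - SERVICE_5000_CHECK_HTTP=/health
  - SERVICE_5000_CHECK_INTERVAL=15s
  - SERVICE_5000_CHECK_TIMEOUT=2s
```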
Environment Variable | Value | Description |
---|---|---|
--- HTTP mode | --- | --- |
SERVICE_80_CHECK_HTTP | /path_to_health_check | Your health check path, e.g. /status |
SERVICE_80_CHECK_INTERVAL | 15s | Check every 15 seconds |
SERVICE_80_CHECK_TIMEOUT | 2s | Health check timeout |
--- HTTPS mode | --- | --- |
SERVICE_443_CHECK_HTTPS | /path_to_health_check | Your health check path, e.g. /status |
SERVICE_443_CHECK_INTERVAL | 15s | Check every 15 seconds |
SERVICE_443_CHECK_TIMEOUT | 2s | Health check timeout |
--- TCP mode | --- | --- |
SERVICE_443_CHECK_TCP | /path_to_health_check | Your health check path, e.g. /status |
SERVICE_443_CHECK_INTERVAL | 15s | Check every 15 seconds |
SERVICE_443_CHECK_TIMEOUT | 2s | Health check timeout |
--- Script check | --- | --- |
SERVICE_CHECK_SCRIPT | curl --silent --fail example.com | e.g. check_redis.py from the official examples |
--- Other | --- | --- |
SERVICE_CHECK_INITIAL_STATUS | passing | By default Consul registers services with status failed |
Configuration notes
(a). Write a docker-compose.yml for each host; again, note that Registrator must be configured with its host's own IP address. The test-server and test-client service instances need the relevant environment variables set in their configuration.
- Host: 192.168.1.182

docker-compose.yml:

```yaml
version: '2'
services:
  consul_client_01:
    image: consul:latest
    ports:
      - "8300:8300"
      - "8301:8301"
      - "8301:8301/udp"
      - "8302:8302"
      - "8302:8302/udp"
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    command: consul agent -retry-join 192.168.1.181 -advertise 192.168.1.182 -node consul_client_01 -data-dir /tmp/data-dir -client 0.0.0.0 -ui
  registrator:
    image: gliderlabs/registrator:latest
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    command: -ip 192.168.1.182 consul://192.168.1.182:8500
  test_server_1:
    image: test-server:latest
    environment:
      - SERVICE_8080_NAME=test-server-http-service
      - SERVICE_8080_TAGS=test-server-http-service-01
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/health
      - SERVICE_25000_NAME=test-server-thrift-service
      - SERVICE_25000_TAGS=test-server-thrift-service-01
      - SERVICE_25000_CHECK_INTERVAL=10s
      - SERVICE_25000_CHECK_TIMEOUT=2s
      - SERVICE_25000_CHECK_TCP=/
    ports:
      - "16000:8080"
      - "30000:25000"
  test_server_2:
    image: test-server:latest
    environment:
      - SERVICE_8080_NAME=test-server-http-service
      - SERVICE_8080_TAGS=test-server-http-service-02
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/health
      - SERVICE_25000_NAME=test-server-thrift-service
      - SERVICE_25000_TAGS=test-server-thrift-service-02
      - SERVICE_25000_CHECK_INTERVAL=10s
      - SERVICE_25000_CHECK_TIMEOUT=2s
      - SERVICE_25000_CHECK_TCP=/
    ports:
      - "18000:8080"
      - "32000:25000"
  test_client_1:
    image: test-client:latest
    environment:
      - SERVICE_8080_NAME=my-web-server
      - SERVICE_8080_TAGS=test-client-http-service-01
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/features
    ports:
      - "80:8080"
```
- Host: 192.168.1.183

docker-compose.yml:

```yaml
version: '2'
services:
  consul_client_02:
    image: consul:latest
    ports:
      - "8300:8300"
      - "8301:8301"
      - "8301:8301/udp"
      - "8302:8302"
      - "8302:8302/udp"
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    command: consul agent -retry-join 192.168.1.181 -advertise 192.168.1.183 -node consul_client_02 -data-dir /tmp/data-dir -client 0.0.0.0 -ui
  registrator:
    image: gliderlabs/registrator:latest
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    command: -ip 192.168.1.183 consul://192.168.1.183:8500
  test_server_1:
    image: test-server:latest
    environment:
      - SERVICE_8080_NAME=test-server-http-service
      - SERVICE_8080_TAGS=test-server-http-service-03
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/health
      - SERVICE_25000_NAME=test-server-thrift-service
      - SERVICE_25000_TAGS=test-server-thrift-service-03
      - SERVICE_25000_CHECK_INTERVAL=10s
      - SERVICE_25000_CHECK_TIMEOUT=2s
      - SERVICE_25000_CHECK_TCP=/
    ports:
      - "16000:8080"
      - "30000:25000"
  test_server_2:
    image: test-server:latest
    environment:
      - SERVICE_8080_NAME=test-server-http-service
      - SERVICE_8080_TAGS=test-server-http-service-04
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/health
      - SERVICE_25000_NAME=test-server-thrift-service
      - SERVICE_25000_TAGS=test-server-thrift-service-04
      - SERVICE_25000_CHECK_INTERVAL=10s
      - SERVICE_25000_CHECK_TIMEOUT=2s
      - SERVICE_25000_CHECK_TCP=/
    ports:
      - "18000:8080"
      - "32000:25000"
  test_client_1:
    image: test-client:latest
    environment:
      - SERVICE_8080_NAME=my-web-server
      - SERVICE_8080_TAGS=test-client-http-service-02
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/features
    ports:
      - "80:8080"
```
- Host: 192.168.1.185

docker-compose.yml:

```yaml
version: '2'
services:
  consul_client_03:
    image: consul:latest
    ports:
      - "8300:8300"
      - "8301:8301"
      - "8301:8301/udp"
      - "8302:8302"
      - "8302:8302/udp"
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    command: consul agent -retry-join 192.168.1.181 -advertise 192.168.1.185 -node consul_client_03 -data-dir /tmp/data-dir -client 0.0.0.0 -ui
  registrator:
    image: gliderlabs/registrator:latest
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    command: -ip 192.168.1.185 consul://192.168.1.185:8500
  test_server_1:
    image: test-server:latest
    environment:
      - SERVICE_8080_NAME=test-server-http-service
      - SERVICE_8080_TAGS=test-server-http-service-05
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/health
      - SERVICE_25000_NAME=test-server-thrift-service
      - SERVICE_25000_TAGS=test-server-thrift-service-05
      - SERVICE_25000_CHECK_INTERVAL=10s
      - SERVICE_25000_CHECK_TIMEOUT=2s
      - SERVICE_25000_CHECK_TCP=/
    ports:
      - "16000:8080"
      - "30000:25000"
  test_server_2:
    image: test-server:latest
    environment:
      - SERVICE_8080_NAME=test-server-http-service
      - SERVICE_8080_TAGS=test-server-http-service-06
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/health
      - SERVICE_25000_NAME=test-server-thrift-service
      - SERVICE_25000_TAGS=test-server-thrift-service-06
      - SERVICE_25000_CHECK_INTERVAL=10s
      - SERVICE_25000_CHECK_TIMEOUT=2s
      - SERVICE_25000_CHECK_TCP=/
    ports:
      - "18000:8080"
      - "32000:25000"
  test_client_1:
    image: test-client:latest
    environment:
      - SERVICE_8080_NAME=my-web-server
      - SERVICE_8080_TAGS=test-client-http-service-03
      - SERVICE_8080_CHECK_INTERVAL=10s
      - SERVICE_8080_CHECK_TIMEOUT=2s
      - SERVICE_8080_CHECK_HTTP=/features
    ports:
      - "80:8080"
```
Note: with the third-party image liberalman/nginx-consul-template that we are using, Nginx takes service containers named my-web-server as the back-end target servers for forwarding. Therefore, in the test-client configuration items, SERVICE_XXX_NAME must be set to my-web-server. You can of course also build your own image and specify your own template.
(b). On the three hosts, start the multi-container applications with docker-compose:

```sh
docker-compose up -d
```
Taking host 192.168.1.182 as an example (the other two are similar), the console log shows that the five container instances configured in docker-compose.yml are created and started.
(c). Check the container processes: the Consul container, one test-client container and two test-server containers have all started normally.
(d). In the console output from step (b) you can see that docker-compose does not start the services in the order they are configured in docker-compose.yml. The registrator container depends on the consul container, but consul had not yet started at that point, so registrator started first and exited abnormally. The fix is simply to run docker-compose up -d once more.
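Alternatively, the start order could be declared in the compose file itself with compose v2's depends_on, which controls creation order (though it does not wait for the dependency to be ready). A sketch for the 192.168.1.182 file:

```yaml
  registrator:
    image: gliderlabs/registrator:latest
    depends_on:
      - consul_client_01   # create and start the consul agent container first
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    command: -ip 192.168.1.182 consul://192.168.1.182:8500
```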
(e). Check the container processes again; this time the Registrator container has started normally.
(f). Repeat the steps above on the other two hosts in the same way, then visit http://IP:8500 again to view the Consul Server node information and the service registration list.
The Consul cluster node information includes the two Consul Server nodes and the three Consul Client nodes; to the right of the nodes you can see the full service registration list and the associated health check results:
The nginx service status list: the service is named nginx-consul-template, provides http service, and has 2 service instances:
The test-client service status list: the service is named my-web-server, provides http service, and has 3 service instances:
The test-server service status list: the services are named test-server-http-service and test-server-thrift-service, corresponding to 6 http service instances and 6 thrift service instances respectively:
The container service instances on all three Consul Client hosts have started normally, and service registration and discovery are working correctly!
4. Verifying the Results
4.1. Nginx Load Balancing
4.1.1. Accessing Nginx
Nginx listens on port 80 by default. Pick either Nginx host and access it, for example: http://192.168.1.181/swagger-ui.html.
The request is forwarded to the Swagger page of a Test Client, which shows that the nginx configuration file nginx.conf has been successfully modified by Consul-template.
4.1.2. Inside the Nginx Container
Run docker ps to find the container ID of nginx-consul-template; here it is 4f2731a7e0cb. Enter the nginx-consul-template container:

```sh
docker-enter 4f2731a7e0cb
```
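docker-enter is an nsenter-based helper script; if it is not available on your machine, the standard docker exec achieves the same thing:

```sh
docker exec -it 4f2731a7e0cb /bin/sh
```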
List the processes inside the container:
Pay particular attention to the following process command line, which performs three important steps:
```sh
consul-template -consul-addr=consul:8500 -template /etc/consul-templates/nginx.conf.ctmpl:/etc/nginx/conf.d/app.conf:nginx -s reload
```
- Consul-template uses the service information in Consul to re-parse and re-render the Nginx configuration template /etc/consul-templates/nginx.conf.ctmpl.
- The rendered nginx configuration file is /etc/nginx/conf.d/app.conf.
- It then runs nginx -s reload to reload app.conf and refresh the routing and forwarding list.
Looking at the configuration in app.conf, the IP:port of all three test-client nodes have been added to the routing and forwarding list.
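The screenshot of the rendered file is not reproduced here; based on the behavior described above, app.conf would look roughly like this sketch (the upstream name and surrounding directives are assumptions):

```nginx
upstream app {
    server 192.168.1.182:80;
    server 192.168.1.183:80;
    server 192.168.1.185:80;
}

server {
    listen 80 default_server;

    location / {
        proxy_pass http://app;
    }
}
```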
Exit the container, then shut down the test-client container on host 192.168.1.182.
Check app.conf again: the routing node 192.168.1.182:80 has been removed from Nginx's routing and forwarding list.
Likewise, restart the test-client container, and you can see that Nginx's routing and forwarding list automatically picks it up again!
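The whole check cycle from the command line might look like this sketch (the container ID placeholder is hypothetical; substitute the real one from docker ps):

```sh
# On 192.168.1.182: stop the test-client container
docker stop <test_client_container_id>

# On the Nginx host: inspect the rendered config; 192.168.1.182:80 is gone
docker exec 4f2731a7e0cb cat /etc/nginx/conf.d/app.conf

# Bring it back and check again; the entry reappears
docker start <test_client_container_id>
docker exec 4f2731a7e0cb cat /etc/nginx/conf.d/app.conf
```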
4.2. Service-Layer Load Balancing
4.2.1. API Tests
test-client calls any one of the test-server instances over http, and the response is returned (with the request processing time in ms).
test-client calls any one of the test-server instances over thrift, and the response is returned (with the request processing time in ms).
4.2.2. Log Analysis
Service-layer load balancing is not easy to observe directly, so here is an excerpt of the log that test-client prints when it periodically refreshes its cached service list:
```
2018-02-09 13:15:55.157  INFO 1 --- [erListUpdater-1] t.c.l.ThriftConsulServerListLoadBalancer : Refreshed thrift serverList: [
  test-server-thrift-service: [
    ThriftServerNode{node='consul_client_01', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-01], host='192.168.1.182', port=30000, address='192.168.1.182', isHealth=true},
    ThriftServerNode{node='consul_client_01', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-02], host='192.168.1.182', port=32000, address='192.168.1.182', isHealth=true},
    ThriftServerNode{node='consul_client_02', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-03], host='192.168.1.183', port=30000, address='192.168.1.183', isHealth=true},
    ThriftServerNode{node='consul_client_02', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-04], host='192.168.1.183', port=32000, address='192.168.1.183', isHealth=true},
    ThriftServerNode{node='consul_client_03', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-05], host='192.168.1.185', port=30000, address='192.168.1.185', isHealth=true},
    ThriftServerNode{node='consul_client_03', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-06], host='192.168.1.185', port=32000, address='192.168.1.185', isHealth=true}
  ],
  test-server-http-service: [
    ThriftServerNode{node='consul_client_01', serviceId='test-server-http-service', tags=[test-server-http-service-01], host='192.168.1.182', port=16000, address='192.168.1.182', isHealth=true},
    ThriftServerNode{node='consul_client_01', serviceId='test-server-http-service', tags=[test-server-http-service-02], host='192.168.1.182', port=18000, address='192.168.1.182', isHealth=true},
    ThriftServerNode{node='consul_client_02', serviceId='test-server-http-service', tags=[test-server-http-service-03], host='192.168.1.183', port=16000, address='192.168.1.183', isHealth=true},
    ThriftServerNode{node='consul_client_02', serviceId='test-server-http-service', tags=[test-server-http-service-04], host='192.168.1.183', port=18000, address='192.168.1.183', isHealth=true},
    ThriftServerNode{node='consul_client_03', serviceId='test-server-http-service', tags=[test-server-http-service-05], host='192.168.1.185', port=16000, address='192.168.1.185', isHealth=true},
    ThriftServerNode{node='consul_client_03', serviceId='test-server-http-service', tags=[test-server-http-service-06], host='192.168.1.185', port=18000, address='192.168.1.185', isHealth=true}
  ],
  my-web-server: [
    ThriftServerNode{node='consul_client_01', serviceId='my-web-server', tags=[test-client-http-service-01], host='192.168.1.182', port=80, address='192.168.1.182', isHealth=true},
    ThriftServerNode{node='consul_client_02', serviceId='my-web-server', tags=[test-client-http-service-02], host='192.168.1.183', port=80, address='192.168.1.183', isHealth=true},
    ThriftServerNode{node='consul_client_03', serviceId='my-web-server', tags=[test-client-http-service-03], host='192.168.1.185', port=80, address='192.168.1.185', isHealth=true}
  ]
]
```
All healthy service instances of test-server-http-service:
Service IP | Service Port | Service Tag |
---|---|---|
192.168.1.182 | 16000 | test-server-http-service-01 |
192.168.1.182 | 18000 | test-server-http-service-02 |
192.168.1.183 | 16000 | test-server-http-service-03 |
192.168.1.183 | 18000 | test-server-http-service-04 |
192.168.1.185 | 16000 | test-server-http-service-05 |
192.168.1.185 | 18000 | test-server-http-service-06 |
All healthy service instances of test-server-thrift-service:
Service IP | Service Port | Service Tag |
---|---|---|
192.168.1.182 | 30000 | test-server-thrift-service-01 |
192.168.1.182 | 32000 | test-server-thrift-service-02 |
192.168.1.183 | 30000 | test-server-thrift-service-03 |
192.168.1.183 | 32000 | test-server-thrift-service-04 |
192.168.1.185 | 30000 | test-server-thrift-service-05 |
192.168.1.185 | 32000 | test-server-thrift-service-06 |
All healthy service instances of my-web-server:
Service IP | Service Port | Service Tag |
---|---|---|
192.168.1.182 | 80 | test-client-http-service-01 |
192.168.1.183 | 80 | test-client-http-service-02 |
192.168.1.185 | 80 | test-client-http-service-03 |
spring-cloud-starter-thrift uses a round-robin forwarding strategy, meaning that my-web-server distributes http or rpc requests, in rotating order, across the corresponding 6 service instances for processing.
Summary
This article presented a high-availability (HA) solution based on containers and a microservice registration and discovery architecture, implementing automatic load balancing at both the access layer and the service layer, with a detailed, hands-on walkthrough of the approach and the techniques involved.