100. The magical routing mesh (Swarm07)

 
In the previous section we touched on swarm's routing mesh. When an external client accesses port 8080 on any node, swarm's internal load balancer forwards the request to one of the web_server replicas, roughly as shown in the figure below:
 
 
So no matter which node you access, even if that node is not running a replica of the service, the request still reaches the service.
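
To see this behavior for yourself, you could curl the published port on every node. This is a sketch that requires the live swarm from this article; the node IPs below are placeholders, not addresses taken from the text.

```shell
# Hypothetical node addresses; substitute your own swarm nodes.
# Every request gets an HTTP response from a web_server replica,
# even when the node we hit runs no replica itself.
for node in 192.168.56.101 192.168.56.102 192.168.56.103; do
    curl -s -o /dev/null -w "node $node -> HTTP %{http_code}\n" "http://$node:8080/"
done
```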
 
We can also put an external load balancer in front of the swarm service. For example, configure HAProxy to distribute requests across port 8080 on each node.
As the figure below shows, this gives us two layers of load balancing.
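As a sketch of that external layer, an HAProxy configuration might look like the fragment below. The node names match the hosts in this article, but the IP addresses are placeholders; this is illustrative, not a tested config.

```
# /etc/haproxy/haproxy.cfg (fragment)
frontend web_front
    bind *:80
    default_backend swarm_nodes

backend swarm_nodes
    balance roundrobin
    # Hypothetical node addresses; thanks to the routing mesh,
    # any node's port 8080 reaches the service.
    server host01 192.168.56.101:8080 check
    server host02 192.168.56.102:8080 check
    server host03 192.168.56.103:8080 check
```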
 
 
The ingress network
 
When we run docker service update --publish-add 8080:8080, swarm reconfigures the service, and we can see the containers change as follows:
 
root@host03:~# docker service create --name web_server --replicas 2 tomcat
v2p0yuexiws1g1qs2xj5hmqte
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
root@host03:~# docker service ps web_server
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
be17grtfyj33        web_server.1    tomcat:latest       host02              Running             Running 11 seconds ago                        
wsfub26hj5tq        web_server.2    tomcat:latest       host01              Running             Running 14 seconds ago            
 
root@host01:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
1bb657e47d1f        tomcat:latest       "catalina.sh run"   16 seconds ago      Up 14 seconds       8080/tcp            web_server.2.wsfub26hj5tqvufrtvir6nmip
 
root@host02:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
1183f0b73cba        tomcat:latest       "catalina.sh run"   19 seconds ago      Up 17 seconds       8080/tcp            web_server.1.be17grtfyj33nbhyyrhe8dh4c
 
root@host03:~# docker service update --publish-add  8080:8080 web_server
web_server
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
 
root@host03:~# docker service ps web_server
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
0y0wh18zbw0l        web_server.1        tomcat:latest       host02              Running             Running 10 seconds ago                        
be17grtfyj33         \_ web_server.1    tomcat:latest       host02              Shutdown            Shutdown 11 seconds ago                       
lvhvtel79hr8        web_server.2        tomcat:latest       host01              Running             Running 13 seconds ago                        
wsfub26hj5tq         \_ web_server.2    tomcat:latest       host01              Shutdown            Shutdown 14 seconds ago           
 
root@host01:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                        PORTS               NAMES
2fa7f422dfb6        tomcat:latest       "catalina.sh run"   24 seconds ago      Up 21 seconds                 8080/tcp            web_server.2.lvhvtel79hr8yw8fzeoaib0u9
1bb657e47d1f        tomcat:latest       "catalina.sh run"   2 minutes ago       Exited (143) 22 seconds ago                       web_server.2.wsfub26hj5tqvufrtvir6nmip
 
root@host02:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                        PORTS               NAMES
b1ba99790519        tomcat:latest       "catalina.sh run"   23 seconds ago      Up 20 seconds                 8080/tcp            web_server.1.0y0wh18zbw0lhpks3zbrqdprc
1183f0b73cba        tomcat:latest       "catalina.sh run"   2 minutes ago       Exited (143) 21 seconds ago                       web_server.1.be17grtfyj33nbhyyrhe8dh4c
 
root@host01:~# docker exec web_server.2.lvhvtel79hr8yw8fzeoaib0u9 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
1141: eth0@if1142: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:0c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.12/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
1143: eth1@if1144: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
 
root@host02:~# docker exec web_server.1.0y0wh18zbw0lhpks3zbrqdprc ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
1237: eth0@if1238: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.13/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
1239: eth1@if1240: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
 
root@host01:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d4b0862f06c8        bridge              bridge              local
d883d53943b2        docker_gwbridge     bridge              local
358643b57976        host                host                local
sngp88bsqode        ingress             overlay             swarm
f5d2888b2321        none                null                local
 
root@host02:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
3fd2a4a2ffc0        bridge              bridge              local
d8c36dcdd92a        docker_gwbridge     bridge              local
3a8925305a80        host                host                local
sngp88bsqode        ingress             overlay             swarm
ce845103f54d        none                null                local
 
All of the old replicas were shut down and new ones were started. Inside a new replica we can see two network interfaces:
    eth0 is attached to an overlay network named ingress, which lets containers running on different hosts communicate with each other.
    eth1 is attached to a bridge network named docker_gwbridge, which gives containers access to the outside world.
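To confirm which network each interface belongs to, you could inspect the two networks and match the container IPs above against their subnets. This requires the swarm from this article, so it is shown as a sketch rather than verified output.

```shell
# The ingress overlay should carry the 10.255.0.0/16 addresses seen on eth0.
docker network inspect ingress --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# docker_gwbridge should carry the 172.18.0.0/16 addresses seen on eth1.
docker network inspect docker_gwbridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```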
 
The ingress network is created automatically by Docker when the swarm is initialized, and every node in the swarm can use it.
 
Through the overlay network, hosts and containers, and containers on different hosts, can all reach each other. Meanwhile, the routing mesh routes external requests to containers on different hosts, which is what gives the outside world access to the service.
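Conceptually, the ingress load balancer spreads requests arriving at the published port across the replica addresses we saw in the `ip a` output (10.255.0.12 and 10.255.0.13). The minimal sketch below models only that round-robin dispatch idea in Python; it is not Docker's actual implementation (which uses IPVS in the kernel).

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of the ingress LB: rotate through replica addresses."""

    def __init__(self, backends):
        # cycle() yields the backends in order, forever.
        self._backends = cycle(backends)

    def pick(self):
        """Return the next replica address to receive a request."""
        return next(self._backends)

# Replica addresses taken from the ip a output above.
lb = RoundRobinBalancer(["10.255.0.12:8080", "10.255.0.13:8080"])
print([lb.pick() for _ in range(4)])
# → ['10.255.0.12:8080', '10.255.0.13:8080', '10.255.0.12:8080', '10.255.0.13:8080']
```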
 

Reposted from www.cnblogs.com/www1707/p/10872748.html