[Docker] 12. Building a Docker consul cluster, deploying microservices: Consul cluster + Swarm cluster microservice deployment in practice

1. Docker consul cluster construction

Consul is an open-source service discovery tool written in Go. It provides service discovery, health checks, service governance, microservice circuit breaking, and more. I have already covered how to build a consul cluster in the microservices series; next, let's look at how to create and build a consul cluster in Docker.

1. Deploy a consul cluster on Linux

Refer to [golang microservices] 5. Introduction to microservice service discovery, installation and use of consul, Consul cluster 

2. Deploy consul cluster on Docker

(1). Detailed explanation of Docker consul parameters

--net=host
        A docker flag that lets the container bypass net namespace isolation, so no port mappings need to be specified manually.
-server
        Consul can run in server or client mode. Servers are the core of the service discovery module; clients mainly forward requests.
-advertise
        The host's private IP to advertise to the rest of the cluster.
-retry-join
        The address of a consul node to join; retries after failure. Can be specified multiple times with different addresses.
-client
        The address consul binds for client interfaces (HTTP, DNS, RPC, etc.). Defaults to 127.0.0.1.
-bind
        The IP address bound for communication within the cluster. All nodes in the cluster must be able to reach this address. Defaults to 0.0.0.0.
-allow_stale
        When true, DNS queries can be answered by any server node in the consul cluster; when false, every request goes through the consul server leader.
-bootstrap-expect
        The expected number of servers in the datacenter. Once set, consul waits until this many servers are available before bootstrapping the cluster, which allows automatic leader election. Cannot be used together with the legacy -bootstrap flag and requires server mode.
-data-dir
        The directory where data is stored, used to persist cluster state.
-node
        The name of this node in the cluster. Must be unique within the cluster; defaults to the node's hostname.
-config-dir
        The configuration directory. Any file ending in .json in this directory is loaded.
-enable-script-checks
        Enables script health checks on services, similar to turning on a heartbeat.
-datacenter
        Datacenter name.
-ui
        Enables the web UI.
-join
        The IP of an existing cluster member to join.

(2). Start the first node consul1

Official image page: https://hub.docker.com/_/consul. You need to download consul first. Here is a demonstration on the server 192.168.31.241:

[root@manager_241 ~]# docker pull consul
Using default tag: latest
Error response from daemon: manifest for consul:latest not found: manifest unknown: manifest unknown

The error above occurs because the consul image no longer publishes a latest tag, so you need to specify a version to download:

[root@manager_241 ~]# docker pull consul:1.14.1
1.14.1: Pulling from library/consul
9621f1afde84: Pull complete 
...
92968d126abf: Pull complete 
Digest: sha256:d8f44192b5c1df18df4e7cebe5b849e005eae2dea24574f64a60a2abd24a310e
Status: Downloaded newer image for consul:1.14.1
docker.io/library/consul:1.14.1
[root@manager_241 ~]# docker images
REPOSITORY             TAG       IMAGE ID       CREATED         SIZE
gowebimg               v1.0.1    be3c1ee42ce2   2 days ago      237MB
mycentos               v1        4ba38cf3943b   3 days ago      434MB
nginx                  latest    a6bd71f48f68   3 days ago      187MB
6053537/portainer-ce   latest    b9c565f94ccc   4 weeks ago     322MB
mysql                  latest    a3b6608898d6   4 weeks ago     596MB
consul                 1.14.1    8540a77af6e2   12 months ago   149MB

Start the first node consul1 (create a consul service/container):

docker run --name consul1 -d -p 8500:8500 -p 8300:8300 -p 8301:8301 -p 8302:8302 -p 8600:8600 consul agent -server -bootstrap-expect=3 -ui -bind=0.0.0.0 -client=0.0.0.0

Or:

docker run --name consul1 -d -p 8500:8500 consul agent -server -bootstrap-expect=3 -ui -bind=0.0.0.0 -client=0.0.0.0

Parameter description:

        --name consul1: name the started consul container consul1

        -p: expose ports

        consul: run a consul container from the consul image

        agent -server: start a consul agent in server mode

        -d: run in the background

        -bootstrap-expect=3: the number of consul servers expected before the cluster bootstraps

        -ui: enable access via the web UI

        -bind: the IP address to bind for cluster communication

        -client=0.0.0.0: bind the client interface to all addresses, so any client can connect

        

Port description:

        8500: HTTP port, used for the HTTP API and web UI access

        8300: server RPC port; consul servers in the same datacenter communicate through this port

        8301: serf LAN port; consul agents in the same datacenter communicate through this port, handling LAN gossip within the current datacenter

        8302: serf WAN port; consul servers in different datacenters communicate through this port, handling gossip with other datacenters

        8600: DNS port, used for discovery of registered services
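Once the cluster is up later, you can exercise the DNS port with a dig query (assuming dig is available on the host); the consul.service.consul name should resolve to the server nodes. The +tcp flag is needed here because the run command above only publishes the TCP side of 8600:

dig @127.0.0.1 -p 8600 consul.service.consul +tcp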

The specific commands are as follows:

[root@worker_241 ~]# docker pull consul
Using default tag: latest
Error response from daemon: manifest for consul:latest not found: manifest unknown: manifest unknown
[root@worker_241 ~]# 

Again consul cannot be pulled, because the official consul image has no latest tag. You either specify a version or pick another image. Search to see what is available:

[root@worker_241 ~]# docker search consul
NAME                                         DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
consul                                       Consul is a datacenter runtime that provides…   1427      [OK]       
hashicorp/consul-template                    Consul Template is a template renderer, noti…   29                   
hashicorp/consul                             Automatic build of consul based on the curre…   53                   [OK]

hashicorp/consul is used here; it works the same as the official consul image. Download it:

[root@worker_241 ~]# docker pull hashicorp/consul
Using default tag: latest
latest: Pulling from hashicorp/consul
96526aa774ef: Pull complete 
8a755a53c1aa: Pull complete 
fd305fe2d878: Pull complete 
01d12fe0b370: Pull complete 
cbc103c13062: Pull complete 
4f4fb700ef54: Pull complete 
3a5b5f5fe822: Pull complete 
Digest: sha256:712fe02d2f847b6a28f4834f3dd4095edb50f9eee136621575a1e837334aaf09
Status: Downloaded newer image for hashicorp/consul:latest
docker.io/hashicorp/consul:latest
[root@worker_241 ~]# 
[root@worker_241 ~]# docker images
REPOSITORY         TAG       IMAGE ID       CREATED       SIZE
gowebimg           v1.0.1    be3c1ee42ce2   4 days ago    237MB
nginx              <none>    a6bd71f48f68   5 days ago    187MB
hashicorp/consul   latest    48de899edccb   3 weeks ago   206MB
mysql              latest    a3b6608898d6   4 weeks ago   596MB

Create and start consul1:

[root@worker_241 ~]# docker run --name consul1 -d -p 8500:8500 -p 8300:8300 -p 8301:8301 -p 8302:8302 -p 8600:8600 hashicorp/consul agent -server -bootstrap-expect=3 -ui -bind=0.0.0.0 -client=0.0.0.0
2550fb171015d39dccad2b62379259337ee78d074536fd6d6e3383c12c71b113
[root@worker_241 ~]#  
[root@worker_241 ~]# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                                                                                                                                                          NAMES
2550fb171015   hashicorp/consul   "docker-entrypoint.s…"   26 seconds ago   Up 14 seconds   0.0.0.0:8300-8302->8300-8302/tcp, :::8300-8302->8300-8302/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp, :::8500->8500/tcp, 0.0.0.0:8600->8600/tcp, :::8600->8600/tcp, 8600/udp   consul1

After starting the first node, the other consul containers need to join consul1, so we must find consul1's IP address. The commands are as follows:

[root@worker_241 ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
66d69474d739   bridge            bridge    local
c2191211eabb   docker_gwbridge   bridge    local
31616bf730aa   host              host      local
1e0116a4c2da   none              null      local
[root@worker_241 ~]# docker inspect 66d69474d739
[
    {
        "Name": "bridge",
        "Id": "66d69474d739b7833552f10f0d7c2cc204fd89874fbb9b322bdb6ccf8f8e88cd",
        "Created": "2023-11-25T20:06:24.898621409-08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2550fb171015d39dccad2b62379259337ee78d074536fd6d6e3383c12c71b113": {
                "Name": "consul1",
                "EndpointID": "870bf26516364ba1fecfcab8e41188c9366c53d4a774bdd69704ccfbe63a9a61",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

This shows that the IP address of consul1 is 172.17.0.2.
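If you prefer not to scan the full inspect output, the same IP can be extracted directly with docker inspect's Go-template filter:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' consul1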

(3). Start the second node (port 8501 ) and join consul1

The startup command is the same as when building a consul cluster on Linux. The -join IP is the address of the first node, consul1 (172.17.0.2):

docker run --name consul2 -d -p 8501:8500 hashicorp/consul agent -server -ui -bootstrap-expect=3 -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2
[root@worker_241 ~]# docker run --name consul2 -d -p 8501:8500 hashicorp/consul agent -server -ui -bootstrap-expect=3 -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2
8d0cbaf78af0be8a8e81e95ccf63508b814dafee83870048ee74120f75e9bc09

(4). Start the third node (port 8502) and join consul1

The startup command is the same as the command to build a consul cluster in Linux.

docker run --name consul3 -d -p 8502:8500 hashicorp/consul agent -server -ui -bootstrap-expect=3 -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2
[root@worker_241 ~]# docker run --name consul3 -d -p 8502:8500 hashicorp/consul agent -server -ui -bootstrap-expect=3 -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2
9e6e1b46f9caa2f8675c0c56b9e68fc7bca6c341f28b5f4ff12ba53a431e7463

(5). Start a consul client (port 8503) and join consul1

docker run --name consulClient1 -d -p 8503:8500 hashicorp/consul agent -ui -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2
[root@worker_241 ~]# docker run --name consulClient1 -d -p 8503:8500 hashicorp/consul agent -ui -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2
44d7066b26dae726309e1c21a7bde4b31099258e728d175bbef7cb1dc7e40398

(6). View the consul cluster

docker ps

docker exec -it consul1 consul members

[root@worker_241 ~]# docker ps 
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                                                                                                                                                          NAMES
44d7066b26da   hashicorp/consul   "docker-entrypoint.s…"   2 minutes ago    Up 2 minutes    8300-8302/tcp, 8301-8302/udp, 8600/tcp, 8600/udp, 0.0.0.0:8503->8500/tcp, :::8503->8500/tcp                                                                                    consulClient1
9e6e1b46f9ca   hashicorp/consul   "docker-entrypoint.s…"   3 minutes ago    Up 3 minutes    8300-8302/tcp, 8301-8302/udp, 8600/tcp, 8600/udp, 0.0.0.0:8502->8500/tcp, :::8502->8500/tcp                                                                                    consul3
8d0cbaf78af0   hashicorp/consul   "docker-entrypoint.s…"   4 minutes ago    Up 4 minutes    8300-8302/tcp, 8301-8302/udp, 8600/tcp, 8600/udp, 0.0.0.0:8501->8500/tcp, :::8501->8500/tcp                                                                                    consul2
2550fb171015   hashicorp/consul   "docker-entrypoint.s…"   18 minutes ago   Up 18 minutes   0.0.0.0:8300-8302->8300-8302/tcp, :::8300-8302->8300-8302/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp, :::8500->8500/tcp, 0.0.0.0:8600->8600/tcp, :::8600->8600/tcp, 8600/udp   consul1
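To confirm that the three servers have formed a raft quorum and elected a leader, you can also run consul's operator subcommand inside any server container:

docker exec -it consul1 consul operator raft list-peers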

It can also be accessed via the web UI at 192.168.31.241:8500.

 

Okay, the consul cluster is set up, entirely on one machine. If the microservices' concurrency is not very high, a single machine is fine (benefit: through the bridge network, the containers can communicate with each other and, by default, the consul cluster shares the same network). If concurrency is high, consul needs to be deployed across multiple servers. Let's deploy consul on multiple machines next.

2. Practical deployment of microservices in Consul cluster + Swarm cluster

1. Deploy consul on multiple servers

(1). Deploy a consul cluster on Linux

Refer to [golang microservices] 5. Introduction to microservice service discovery, installation and use of consul, Consul cluster 

2. Build a Consul cluster through Docker deployment

The cluster can be set up with docker run --net=host, the shared network mode (the container shares the physical machine's IP address). Run this way, consul needs no exposed ports, and the machines communicate via their physical IPs. Taking the machine 192.168.31.117 as an example, the command is as follows:

docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.117 --name consul1 -v /consul_server/data:/consul/data consul agent -server -bootstrap-expect=3 -ui -bind=192.168.31.117 -client=0.0.0.0

        --net=host: shared network mode (the container shares the physical machine's IP address)

        -e: environment variable:

                CONSUL_BIND_INTERFACE=ens33: the network interface to bind to

        -h=192.168.31.117: hostname, set here to the physical machine's IP

        --name consul1: the consul container's name

        -v: mapped data volume:

                /consul_server/data:/consul/data maps the host's data directory to /consul/data in the container

        consul: start the container from the consul image (you can also use another consul image you have downloaded, usually the official one)

        agent -server: start a consul agent in server mode

        The other parameters are the same as explained above.

The command above runs in the foreground. To run it in the background instead, use:

nohup docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.117 --name consul1 -v /consul_server/data:/consul/data hashicorp/consul agent -server -bootstrap-expect=3 -ui -bind=192.168.31.117 -client=0.0.0.0 &

This starts a consul container service on the machine 192.168.31.117. It shares the physical machine's IP and is bound to its network interface, so the consul container can be accessed through that IP.
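If the node does not come up as expected, the container logs are the first place to look:

docker logs consul1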

The specific commands are as follows:

        Here we use the hashicorp/consul image to build the consul container.

[root@worker_117 ~]# docker pull hashicorp/consul
Using default tag: latest
latest: Pulling from hashicorp/consul
96526aa774ef: Pull complete 
...
Digest: sha256:712fe02d2f847b6a28f4834f3dd4095edb50f9eee136621575a1e837334aaf09
Status: Downloaded newer image for hashicorp/consul:latest
docker.io/hashicorp/consul:latest
[root@worker_117 ~]# docker images
REPOSITORY         TAG       IMAGE ID       CREATED       SIZE
hashicorp/consul   latest    48de899edccb   3 weeks ago   206MB

[root@worker_117 ~]# nohup docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.117 --name consul1 -v /consul_server/data:/consul/data hashicorp/consul agent -server -bootstrap-expect=3 -ui -bind=192.168.31.117 -client=0.0.0.0 &

[root@worker_117 ~]# 
[root@worker_117 ~]# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS      NAMES
d66621ff05c1   hashicorp/consul   "docker-entrypoint.s…"   11 seconds ago   Up 9 seconds               consul1

With that, one consul node is built. Next, build consul on the other machines and join them to the cluster through 192.168.31.117:

nohup docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.140 --name consul2 -v /consul_server/data:/consul/data hashicorp/consul agent -server -bootstrap-expect=3 -ui -bind=192.168.31.140 -client=0.0.0.0 -join 192.168.31.117 &

nohup docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.81 --name consul3 -v /consul_server/data:/consul/data hashicorp/consul agent -server -bootstrap-expect=3 -ui -bind=192.168.31.81 -client=0.0.0.0 -join 192.168.31.117 &

nohup docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.241 --name consul4 -v /consul_server/data:/consul/data hashicorp/consul agent -bind=192.168.31.241 -client=0.0.0.0 -join 192.168.31.117 &

192.168.31.140 joins the consul1 cluster:

[root@worker_140 ~]# nohup docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.140 --name consul2 -v /consul_server/data:/consul/data hashicorp/consul agent -server -bootstrap-expect=3 -ui -bind=192.168.31.140 -client=0.0.0.0 -join 192.168.31.117 &
[1] 10150

[root@worker_140 ~]# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS     NAMES
1e54ad577bce   hashicorp/consul   "docker-entrypoint.s…"   11 seconds ago   Up 10 seconds             consul2

192.168.31.81 joins the consul1 cluster:

[root@manager_81 ~]# nohup docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.81 --name consul3 -v /consul_server/data:/consul/data hashicorp/consul agent -server -bootstrap-expect=3 -ui -bind=192.168.31.81 -client=0.0.0.0 -join 192.168.31.117 & 
[root@manager_81 ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                                           NAMES
fbf11320c18e   hashicorp/consul       "docker-entrypoint.s…"   2 minutes ago    Up 2 minutes                                                                    consul3

192.168.31.241 joins the consul1 cluster as a client, used for request forwarding:

[root@worker_241 ~]# nohup docker run --net=host -e CONSUL_BIND_INTERFACE=ens33 -h=192.168.31.241 --name consul4 -v /consul_server/data:/consul/data hashicorp/consul agent -bind=192.168.31.241 -client=0.0.0.0 -join 192.168.31.117 &
[1] 17681
[root@worker_241 ~]# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS      NAMES
d2b0e76b4a13   hashicorp/consul   "docker-entrypoint.s…"   4 seconds ago    Up 1 second                consul4

The cluster can be viewed on the web at http://192.168.31.140:8500/, http://192.168.31.117:8500/, or http://192.168.31.81:8500/.

In this way the consul cluster is deployed through docker: the consul containers are reachable through the physical machine IPs, and the machines can communicate with each other.
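You can also verify from the command line by querying consul's HTTP API on any node:

curl http://192.168.31.117:8500/v1/catalog/services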

Next, let's build the API service gateway and the microservice cluster. This part can be built with docker swarm.

3. Build a microservice cluster through docker swarm

(1). Prepare the mysql and redis databases

Find a server for these, or use a managed cloud database. Here we deploy them through docker.

Start mysql:

docker run --name myMysql -p 3306:3306 -v /root/mysql/conf.d:/etc/mysql/conf.d -v /root/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql
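Once the container is up, a quick sanity check using the root password set above:

docker exec -it myMysql mysql -uroot -p123456 -e "SELECT VERSION();"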

Start redis:

docker run \
-p 6379:6379 \
--name redis \
-v /docker/redis/redis.conf:/etc/redis/redis.conf \
-v /docker/redis/data:/data \
--restart=always \
-d redis redis-server /etc/redis/redis.conf

Detailed explanation of the above command:

docker run \
-p 6379:6379 \                                        port mapping between docker and the host
--name redis \                                        name of the redis container
-v /docker/redis/redis.conf:/etc/redis/redis.conf \   mount the redis.conf file
-v /docker/redis/data:/data \                         mount redis's persisted data
--restart=always \                                    restart the redis container automatically when docker starts
-d redis redis-server /etc/redis/redis.conf           specify redis's config file path inside docker and start redis in the background
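A quick connectivity check once redis is running; it should answer PONG:

docker exec -it redis redis-cli ping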

(2). Prepare the programs

Perform the following steps on every server where microservices are to be deployed. Here we deploy to three servers: 192.168.31.129, 192.168.31.132, and 192.168.31.130.

1). Package the projects

Here we use the earlier Gin project's rbac microservice and captcha microservice as examples. The rbac microservice code, captcha microservice code, and microservice client code all need to be packaged. For the specific packaging steps, see the earlier chapter [Docker] 6. Docker automatically deploys nodejs and golang projects.

 

Package the above microservices and client code and transfer them to the server

Note:

        You need to upload both the packaged binaries and the required configuration files, such as app.ini, statics, views, and other static resource files.

Here, we use three machines, 192.168.31.129, 192.168.31.132, and 192.168.31.130, to deploy the microservices, and upload the packaged projects to these three servers.

When packaging, pay attention: in each microservice's app.ini, the consul address must be changed to the consul client address above, and the mysql configuration must point to the mysql instance you built yourself.
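For illustration only, the relevant part of app.ini might look something like this; the section and key names below are hypothetical and must match whatever your project actually reads:

[consul]
; hypothetical key: address of the consul client node set up above
address = 192.168.31.241:8500

[mysql]
; hypothetical keys: point these at the mysql instance you deployed
host = <your-mysql-host>
port = 3306
user = root
password = 123456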

2). Compress the packaged files

3). Configure the Dockerfiles
Microservice captcha — captcha_Dockerfile:

FROM centos:centos7
ADD /wwwroot/captcha.tar.gz /root
WORKDIR /root
RUN chmod -R 777 captcha
WORKDIR /root/captcha
ENTRYPOINT ["./captcha"]
Microservice rbac

rbac_Dockerfile

FROM centos:centos7
ADD /wwwroot/rbac.tar.gz /root
WORKDIR /root
RUN chmod -R 777 rbac
WORKDIR /root/rbac
ENTRYPOINT ["./rbac"]
ginshop

ginshop_Dockerfile

FROM centos:centos7
ADD /wwwroot/ginshop.tar.gz /root
WORKDIR /root
RUN chmod -R 777 ginshop
WORKDIR /root/ginshop
ENTRYPOINT ["./ginshop"]

4). Build the corresponding images on each server to be deployed
docker build -f captcha_Dockerfile -t docker.io/captchamicro:latest .
docker build -f rbac_Dockerfile -t docker.io/rbacmicro:latest .
docker build -f ginshop_Dockerfile -t docker.io/ginshop:latest .

After running the above commands on all three servers, you can view the generated images with docker images.

(3). Configure docker-compose.yml

version: "3"
services:
  redis: #配置redis,这里可以单独配置redis,把redis放到专门的一台服务器上
    image: redis
    restart: always
    deploy:
        replicas: 1 #副本数量

  captcha_micro: #验证码微服务
    image: captchamicro #镜像名称:通过项目打包并build成的验证码微服务镜像
    restart: always
    deploy:
      replicas: 6 #副本数量
      resources: #资源
        limits: #配置cpu
          cpus: "0.3" # 设置该容器最多只能使用 30% 的 CPU
          memory: 500M # 设置该容器最多只能使用 500M内存
          restart_policy: #定义容器重启策略, 用于代替 restart 参数
          condition: on-failure #只有当容器内部应用程序出现问题才会重启

  rbac_micro: #rbac微服务
    image: rbacmicro
    restart: always
    deploy:
      replicas: 6 #副本数量
      resources: #资源
        limits: #配置cpu
          cpus: "0.3" # 设置该容器最多只能使用 30% 的 CPU
          memory: 500M # 设置该容器最多只能使用 500M内存
          restart_policy: #定义容器重启策略, 用于代替 restart 参数
          condition: on-failure #只有当容器内部应用程序出现问题才会重启
    depends_on:
      - captcha_micro

  ginshop: #客户端微服务
    image: ginshop
    restart: always
    ports:
      - 8080:8080
    deploy:
      replicas: 6 #副本数量
      resources: #资源
      limits: #配置cpu
        cpus: "0.3" # 设置该容器最多只能使用 30% 的 CPU
        memory: 500M # 设置该容器最多只能使用 500M内存
        restart_policy: #定义容器重启策略, 用于代替 restart 参数
        condition: on-failure #只有当容器内部应用程序出现问题才会重启
    depends_on:
      - rbac_micro

 

(4). Create the swarm cluster and deploy the microservice cluster

You can refer to: [Docker] 10. Docker Swarm explanation

# Initialize the cluster on 192.168.31.129; the command is as follows:
docker swarm init --advertise-addr 192.168.31.129
[root@manager_129 ~]# docker swarm init --advertise-addr 192.168.31.129
Swarm initialized: current node (qu1ydd2t6occ8fo76rvaksidd) is now a manager.
 
To add a worker to this swarm, run the following command:
 
    docker swarm join --token SWMTKN-1-6afkz1ub7m8q37cehxmjiirs6a0r25qt1hzf0no1c0xcny55qc-d3gtv0qcsuhivozbomp4d73ha 192.168.31.129:2377
 
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
 
[root@manager_129 ~]#

The above docker swarm join is the command to join the cluster.

docker swarm join --token SWMTKN-1-6afkz1ub7m8q37cehxmjiirs6a0r25qt1hzf0no1c0xcny55qc-d3gtv0qcsuhivozbomp4d73ha 192.168.31.129:2377

Run the above command on 192.168.31.132 and 192.168.31.130 in turn, so that those worker nodes join the cluster. Run docker node ls on the management node to view all cluster nodes. Of course, you can also create more management nodes.

(5). Deploy the project

Run the following command in the directory containing docker-compose.yml to deploy the microservices; you can refer to [Docker] 10. Docker Swarm explanation.
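A typical invocation looks like this (the stack name ginshop here is illustrative):

docker stack deploy -c docker-compose.yml ginshop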

The project is now deployed, and you can view the cluster's service information through 192.168.31.129:8500.

(6). Access the project

The project front end can be accessed through 192.168.31.129:8080, and the back end through 192.168.31.129:8080/admin.

 

 

(7). Configure nginx load balancing 

Configure nginx load balancing to achieve truly load-balanced operation; see [Docker] 10. Docker Swarm explanation. A sketch of the upstream configuration is shown below.
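As a minimal sketch, assuming nginx runs on a separate machine and proxies to the three swarm nodes' published port 8080, the configuration might look like this:

upstream ginshop {
    server 192.168.31.129:8080;
    server 192.168.31.132:8080;
    server 192.168.31.130:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://ginshop;
    }
}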

Okay, the Docker consul cluster construction, microservice deployment, and Consul cluster + Swarm cluster microservice deployment project is complete. Original content is not easy to produce; please give it a thumbs up.

[Previous section] [Docker] 11. Docker Swarm cluster raft algorithm, Docker Swarm Web management tool 

Related chapters: [golang gin framework] 45.Gin mall project-microservice actual back-end Rbac microservice role permission association 

Origin blog.csdn.net/zhoupenghui168/article/details/134611262