Docker Advanced - Build a Docker Swarm cluster and deploy applications

Create a Swarm cluster

Initialize management node

[root@k8s-master ~]# docker swarm init --advertise-addr 192.168.192.133
Swarm initialized: current node (vy95txqo3pglh478e4qew1h28) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-2kte699k2ldtsyklop1mvcg1ioekinv2nzoop9g83fu8vsrnms-87073ncbef748kvt6raj1mliy 192.168.192.133:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@k8s-master ~]# 
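If the join command above is lost, it can be printed again at any time from a manager node. A brief sketch, assuming the cluster created above:

```shell
# Print the full join command for workers (run on a manager)
docker swarm join-token worker

# Print only the token itself, e.g. for use in scripts
docker swarm join-token worker -q

# The equivalent command for adding more managers
docker swarm join-token manager
```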

Add worker nodes

The firewall must be turned off on both the management node and the worker nodes:

$ systemctl stop firewalld

In the previous step we initialized a Swarm cluster with one management node. Next, we execute the following command on the other Docker hosts to create worker nodes and add them to the cluster.

[root@localhost ~]# docker swarm join --token SWMTKN-1-2kte699k2ldtsyklop1mvcg1ioekinv2nzoop9g83fu8vsrnms-87073ncbef748kvt6raj1mliy 192.168.192.133:2377
This node joined a swarm as a worker.

View cluster

After the above two steps, we have a minimal Swarm cluster with one management node and one worker node.

Run docker node ls on the management node to view the cluster:

[root@k8s-master ~]# docker node ls
ID                            HOSTNAME                STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
o1j5nc40q4nqdoc4pvu86jgtu *   k8s-master              Ready               Active              Leader              18.06.1-ce
nnt0btawjgep1vn6kfftnp17m     localhost.localdomain   Ready               Active                                  18.06.1-ce
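From a manager you can also change the state of individual nodes. A hedged sketch using the hostnames from the listing above:

```shell
# Stop scheduling new tasks onto a node; its existing tasks
# are rescheduled onto other nodes
docker node update --availability drain localhost.localdomain

# Allow the node to receive tasks again
docker node update --availability active localhost.localdomain

# Promote a worker to a manager, or demote it back
docker node promote localhost.localdomain
docker node demote localhost.localdomain
```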

Deploy services

We use the docker service command to manage services in the Swarm cluster. This command can only be run on a management node.

Create a new service

Run a service named nginx in the Swarm cluster we just created:

[root@k8s-master ~]# docker service create --replicas 3 -p 80:80 --name nginx nginx:1.13.7-alpine
hpdegwsvi05bscs97wjrmv2dw
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 

View services

Use docker service ls to view the services currently running in the Swarm cluster:

[root@k8s-master ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                 PORTS
hpdegwsvi05b        nginx               replicated          3/3                 nginx:1.13.7-alpine   *:80->80/tcp

Use docker service ps to view the details of a service:

[root@k8s-master ~]# docker service ps nginx
ID                  NAME                IMAGE                 NODE                    DESIRED STATE       CURRENT STATE                 ERROR                              PORTS
hhwngdta2dje        nginx.1             nginx:1.13.7-alpine   k8s-master              Running             Running about a minute ago                                       
kk3viynpynr2        nginx.2             nginx:1.13.7-alpine   k8s-master              Running             Running about a minute ago                                       
u6ddmugvrbpx         \_ nginx.2         nginx:1.13.7-alpine   localhost.localdomain   Shutdown            Rejected about a minute ago   "error creating external conne…"   
ju8twz34e4pk         \_ nginx.2         nginx:1.13.7-alpine   localhost.localdomain   Shutdown            Rejected about a minute ago   "error creating external conne…"   
qcnqsj1oh0ap         \_ nginx.2         nginx:1.13.7-alpine   localhost.localdomain   Shutdown            Rejected 2 minutes ago        "error creating external conne…"   
7csr1tjsiefl         \_ nginx.2         nginx:1.13.7-alpine   localhost.localdomain   Shutdown            Failed 2 minutes ago          "error creating external conne…"   
w6py7p1i8vfp        nginx.3             nginx:1.13.7-alpine   k8s-master              Running             Running about a minute ago                                       
u9o91wc94z64         \_ nginx.3         nginx:1.13.7-alpine   localhost.localdomain   Shutdown            Rejected 3 minutes ago        "error creating external conne…"   
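Besides docker service ls and docker service ps, docker service inspect shows the full service definition. A quick sketch against the nginx service above:

```shell
# Human-readable summary of the service definition
docker service inspect --pretty nginx

# Query a single field with a Go template, e.g. the desired replica count
docker service inspect -f '{{.Spec.Mode.Replicated.Replicas}}' nginx
```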

Use docker service logs to view the logs of a service:

[root@k8s-master ~]# docker service logs nginx
nginx.1.hhwngdta2dje@k8s-master    | 10.255.0.2 - - [14/Aug/2023:06:01:37 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36" "-"
nginx.1.hhwngdta2dje@k8s-master    | 2023/08/14 06:01:37 [error] 6#6: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.255.0.2, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.192.133", referrer: "http://192.168.192.133/"
nginx.1.hhwngdta2dje@k8s-master    | 10.255.0.2 - - [14/Aug/2023:06:01:37 +0000] "GET /favicon.ico HTTP/1.1" 404 571 "http://192.168.192.133/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36" "-"
[root@k8s-master ~]# 
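docker service logs aggregates output from all replicas of the service; a sketch of its commonly used flags:

```shell
# Follow the aggregated logs of all nginx replicas
docker service logs -f nginx

# Show only the last 10 lines per task, with timestamps
docker service logs --tail 10 -t nginx
```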

Service scaling

We can use docker service scale to adjust the number of containers a service runs.

At peak load, we scale up the number of containers the service runs:

$ docker service scale nginx=5

When load returns to normal, we scale the service back down:

$ docker service scale nginx=2
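docker service scale is shorthand for setting the replica count with docker service update; it can also scale several services at once. A sketch (the redis service here is illustrative):

```shell
# Equivalent to "docker service scale nginx=5"
docker service update --replicas 5 nginx

# Scale several services in one command
docker service scale nginx=5 redis=2
```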

Delete service

Use docker service rm to remove a service from the Swarm cluster:

$ docker service rm nginx

Use compose files

Just as you can use a docker-compose.yml file to configure and start multiple containers at once, you can also use a compose file (docker-compose.yml) to configure and start multiple services in a Swarm cluster.

Here we take deploying WordPress in a Swarm cluster as an example.

version: "3"

services:
  wordpress:
    image: wordpress
    ports:
      - 80:80
    networks:
      - overlay
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    deploy:
      mode: replicated
      replicas: 3

  db:
    image: mysql
    networks:
       - overlay
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    deploy:
      placement:
        constraints: [node.role == manager]

  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]

volumes:
  db-data:
networks:
  overlay:

Create this file on a management node of the Swarm cluster. The visualizer service in it provides a visualization page, from which we can intuitively see in a browser which node each service in the cluster is running on.

Deploy the stack

Deploy the stack with docker stack deploy, where the -c parameter specifies the compose file:

[root@k8s-master ~]# docker stack deploy -c docker-compose.yml wordpress
Creating network wordpress_overlay
Creating network wordpress_default
Creating service wordpress_wordpress
Creating service wordpress_db
Creating service wordpress_visualizer
[root@k8s-master ~]# docker service ls
ID                  NAME                   MODE                REPLICAS            IMAGE                             PORTS
i3pmffagn5kj        wordpress_db           replicated          0/1                 mysql:latest                      
arf56or4vgye        wordpress_visualizer   replicated          0/1                 dockersamples/visualizer:stable   *:8080->8080/tcp
ybl36pyvr14n        wordpress_wordpress    replicated          0/3                 wordpress:latest                  *:80->80/tcp

Delete the stack

[root@k8s-master ~]# docker stack rm wordpress
Removing service wordpress_wordpress
Removing service wordpress_db
Removing service wordpress_visualizer
Removing network wordpress_overlay
Removing network wordpress_default

Manage secrets

On dynamic, large-scale distributed clusters, managing and distributing sensitive information such as passwords and certificates is an important but difficult task. Traditional distribution methods (baking secrets into images, setting environment variables, dynamically mounting volumes, and so on) all carry potentially serious security risks.

Docker provides a secrets management feature: users can securely manage sensitive data such as passwords and certificates in a Swarm cluster, and share specified secrets among multiple Docker container instances.

Note: secret can also be used in Docker Compose.
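For example, a version 3.1+ compose file can declare secrets at the top level and grant individual services access to them. A minimal sketch, assuming the password is stored in a local file mysql_root_password.txt:

```yaml
version: "3.1"

services:
  db:
    image: mysql
    environment:
      # The official mysql image reads the root password from this file
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password
    secrets:
      - mysql_root_password

secrets:
  mysql_root_password:
    file: ./mysql_root_password.txt
```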

We can use the docker secret command to manage sensitive information. Next, we show how to use this command in the Swarm cluster created above.

Here we take deploying mysql and wordpress services in the Swarm cluster as an example.

Create secret

We use the docker secret create command to create a secret from a pipe:

[root@k8s-master ~]# openssl rand -base64 20 | docker secret create mysql_password -
pq1yxd5ztpcpu5ygucnintlsw
[root@k8s-master ~]# openssl rand -base64 20 | docker secret create mysql_root_password -
o1cazb12n97hpeyv5yfeg40hr

View secret

Use the docker secret ls command to view secrets:

[root@k8s-master ~]# docker secret ls
ID                          NAME                  DRIVER              CREATED             UPDATED
pq1yxd5ztpcpu5ygucnintlsw   mysql_password                            35 seconds ago      35 seconds ago
o1cazb12n97hpeyv5yfeg40hr   mysql_root_password                       28 seconds ago      28 seconds ago
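Note that once stored, the secret value is never returned; inspecting a secret only shows its metadata. A sketch:

```shell
# Shows ID, name, labels and timestamps -- the secret data itself
# is never returned by the API once it has been created
docker secret inspect mysql_password
```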

Create MySQL service

$ docker network create -d overlay mysql_private

$ docker service create \
     --name mysql \
     --replicas 1 \
     --network mysql_private \
     --mount type=volume,source=mydata,destination=/var/lib/mysql \
     --secret source=mysql_root_password,target=mysql_root_password \
     --secret source=mysql_password,target=mysql_password \
     -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
     -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" \
     -e MYSQL_USER="wordpress" \
     -e MYSQL_DATABASE="wordpress" \
     mysql:latest

If you do not explicitly specify a path with target, the secret is mounted by default via a tmpfs filesystem into the container's /run/secrets directory.

Create WordPress service

$ docker service create \
     --name wordpress \
     --replicas 1 \
     --network mysql_private \
     --publish published=30000,target=80 \
     --mount type=volume,source=wpdata,destination=/var/www/html \
     --secret source=mysql_password,target=wp_db_password,mode=0444 \
     -e WORDPRESS_DB_USER="wordpress" \
     -e WORDPRESS_DB_PASSWORD_FILE="/run/secrets/wp_db_password" \
     -e WORDPRESS_DB_HOST="mysql:3306" \
     -e WORDPRESS_DB_NAME="wordpress" \
     wordpress:latest
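Secrets are immutable; to rotate one, create a new secret and swap it on the running service with the update flags. A hedged sketch (the name mysql_password_v2 is illustrative):

```shell
# Create the replacement secret
openssl rand -base64 20 | docker secret create mysql_password_v2 -

# Swap the secret on the running service; tasks are restarted with
# the new secret mounted at the same target path
docker service update \
    --secret-rm mysql_password \
    --secret-add source=mysql_password_v2,target=wp_db_password \
    wordpress
```

Note that rotating a database password also requires changing it inside MySQL itself; this sketch shows only the Docker-side mechanics.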

Manage configuration information

On dynamic, large-scale distributed clusters, managing and distributing configuration files is also an important task. Traditional configuration file distribution methods (such as placing configuration files into images, setting environment variables, dynamic mounting of volumes, etc.) all reduce the versatility of images.

Docker 17.06 and above add the docker config subcommand to manage configuration information in the cluster. With it, you can configure services without baking the configuration file into the image or bind-mounting it from the host.

Note: config can only be used in Swarm clusters.

Here we take deploying a redis service in the Swarm cluster as an example.

Create config

Create a new redis.conf file:

port 6380

This line configures Redis to listen on port 6380.

We use the docker config create command to create a config:

[root@k8s-master ~]# docker config create redis.conf redis.conf
wjchg6thkm76p7rknrskkd751

View config

Use the docker config ls command to view configs:

[root@k8s-master ~]# docker config ls
ID                          NAME                CREATED             UPDATED
wjchg6thkm76p7rknrskkd751   redis.conf          27 seconds ago      27 seconds ago

Create redis service

$ docker service create \
     --name redis \
     --config redis.conf \
     -p 6379:6380 \
     redis:latest \
     redis-server /redis.conf

If you do not explicitly specify a path with target (for example --config source=redis.conf,target=/etc/redis.conf), the config is mounted by default at the root of the container's filesystem under the config name, which here is /redis.conf.

After testing, redis can be used normally.

Previously, when configuring Redis via a bind-mounted host file, we had to place that file on every node in the cluster. If we instead manage the service's configuration with docker config, we only need to create the config on a management node; when the service is deployed, the cluster automatically distributes the configuration file to every node running the service, which greatly reduces the difficulty of managing and distributing configuration information.
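In a compose file (version 3.3+), the same redis example can be written with a top-level configs key, so that docker stack deploy creates and distributes the config automatically. A sketch:

```yaml
version: "3.3"

services:
  redis:
    image: redis:latest
    command: redis-server /redis.conf
    ports:
      - 6379:6380
    configs:
      - source: redis_conf
        target: /redis.conf

configs:
  redis_conf:
    file: ./redis.conf
```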

Rolling upgrade

Now we want to upgrade the nginx service to version 1.13.12. How do we upgrade a service in Swarm mode?

You might think that stopping the original service and then deploying a new service with the new image would complete the "upgrade".

The disadvantages of this approach are obvious: if there is a problem with the newly deployed service, it is difficult to restore the original one after it has been deleted. So how do we perform a rolling upgrade of a service in Swarm mode?

The answer is to use the docker service update command:

[root@k8s-master ~]# docker service update --image nginx:1.13.12-alpine nginx

[root@k8s-master ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                  PORTS
068en74dtckx        mysql               replicated          0/1                 mysql:5.7              
w352biyrmaw6        nginx               replicated          3/3                 nginx:1.13.12-alpine   *:80->80/tcp
v2d11uspjz68        redis               replicated          1/1                 redis:latest           *:6379->6380/tcp
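The pace of a rolling upgrade can be tuned with update flags. A sketch with illustrative values:

```shell
# Upgrade one task at a time, wait 10s between tasks, and
# automatically roll back if an upgraded task fails to start
docker service update \
    --update-parallelism 1 \
    --update-delay 10s \
    --update-failure-action rollback \
    --image nginx:1.13.12-alpine \
    nginx
```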

Service rollback

Now suppose we find problems after upgrading the nginx service image to nginx:1.13.12-alpine. We can use the docker service rollback command to roll back with one click:

[root@k8s-master ~]# docker service rollback nginx
nginx

[root@k8s-master ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                 PORTS
068en74dtckx        mysql               replicated          0/1                 mysql:5.7             
w352biyrmaw6        nginx               replicated          2/3                 nginx:1.13.7-alpine   *:80->80/tcp
v2d11uspjz68        redis               replicated          1/1                 redis:latest          *:6379->6380/tcp

Origin blog.csdn.net/qq_51495235/article/details/132276489