Containerization technology: Docker from beginner to practice

Table of contents

Understanding of docker

what can be done

traditional virtual machine

container virtualization technology

How Docker differs from traditional virtualization

Docker installation

Pre-environment

Vagrant virtual machine environment

Install a virtual machine with Vagrant

Virtual machine network configuration 

 Docker architecture design

image

container  

repository

Summary

Alibaba Cloud mirror service configuration

Common commands for Docker

image commands

 docker images

 docker search

Docker pull

Docker rmi 

 Common Commands for Containers

download centos7

Create and start the container

List running containers 

exit container command

 Start the container

Restart the container 

stop container

delete container

other commands

View container details

Enter the running container

file copy

Docker image file 

what is an image

UnionFS

Image loading principle

layered images

Features of the layered structure

Image features

image operations

Create container and start

 modify container

create an image

Start a new image 

Data volumes 

concept

What is a data volume

what problem was solved

data volume usage

Add data volume through dockerfile 

data volume container

DockerFile

build process 

Implementation process

Custom image 

Build

 run

image history

 Customize Tomcat

MySql installation

Redis installation 

Docker network

Custom network

Docker in practice 

Build a MySql database cluster

HaProxy load balancing

SpringBoot project deployment 

DockerCompose 

Introduction

Compose installation

Compose one-click deployment in practice 

Compose deploys Springboot project 

Compose common operations 

Harbor private server

Introduction

Features

Install 

Login and image pull 

 swarm

management node 

work node

Swarm cluster construction

Build a cluster environment

Tomcat Service Orchestration 

 WordPress in action


Understanding of docker

        Docker is a cloud open source project based on the Go language. Docker's main goal is "Build, Ship and Run Any App, Anywhere": the user's app (a web application, a database application, etc.) and its operating environment are packaged once and can then run anywhere.

        The emergence of Linux container technology solved exactly this problem, and Docker was developed on top of it. An application runs inside a Docker container, and a Docker container behaves the same on any operating system, which provides cross-platform, cross-server portability. You only need to configure the environment once, and then you can deploy to another machine with one click, which greatly simplifies operations.

what can be done

traditional virtual machine

        A virtual machine is a solution that ships an application together with a complete, pre-installed environment.

        It can run one operating system inside another, such as running a Linux system inside a Windows system. The application is not aware of this, because the virtual machine looks exactly like a real system, while to the underlying host the virtual machine is just an ordinary file that can be deleted when no longer needed without affecting anything else. This kind of virtual machine runs another complete system, keeping the logic between the application, the operating system and the hardware unchanged.

       Disadvantages of traditional virtual machines:

        1. High resource usage

        2. Many redundant steps

        3. Slow start

container virtualization technology

        Because of these shortcomings of virtual machines, Linux developed another virtualization technology: Linux Containers (LXC).

        Instead of simulating a complete operating system, Linux containers isolate processes. With containers, all the resources needed for software to run can be packaged into an isolated container. Unlike a virtual machine, a container does not need to bundle a complete operating system; it only needs the library resources and settings required for the software to work. The system thus becomes efficient and lightweight, and the software is guaranteed to run consistently in whatever environment it is deployed to.

How Docker differs from traditional virtualization

        1. The traditional virtual machine technology is to virtualize a set of hardware, run a complete operating system on it, and then run the required application process on the system;

        2. The application process in the container runs directly on the host's kernel, the container does not have its own kernel, and there is no hardware virtualization. Therefore, containers are more portable than traditional virtual machines.

       3. Each container is isolated from each other, and each container has its own file system. The processes between containers will not affect each other, and computing resources can be distinguished.

        Advantages of DOCKER:

                1. Build once, run anywhere

                2. Faster application delivery and deployment

                3. More convenient upgrade and expansion

                4. Easier system operation and maintenance

                5. More efficient utilization of computing resources

Docker installation

        Official website: http://www.docker.com

        Warehouse: Docker

Pre-environment

Docker supports the following CentOS versions:

        CentOS 7 / CentOS 8 (64-bit)

        CentOS 6.5 (64-bit) or higher

        Prerequisite: currently only the kernel shipped with the CentOS distribution supports Docker.

        Docker on CentOS 7 requires a 64-bit system and a kernel version of 3.10 or higher.

        Docker on CentOS 6.5 or higher requires a 64-bit system and a kernel version of 2.6.32-431 or higher.

        To view your own kernel, use the uname command, which prints information about the current system (kernel version, hardware architecture, host name, operating system type, etc.).
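A quick check (the exact output varies from machine to machine):

uname -r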

Vagrant virtual machine environment

        Here we install the virtual machine with VirtualBox combined with Vagrant.

        VirtualBox official website: Oracle VM VirtualBox

        Vagrant official website: https://www.vagrantup.com/

        Vagrant Registry: Discover Vagrant Boxes - Vagrant Cloud

        Install VirtualBox and Vagrant; both are straightforward installers. After the installation is complete, restart the computer. Enter the vagrant command in a cmd window; if the help output shown below appears, Vagrant was installed successfully.

VirtualBox download:

Vagrant download:

VirtualBox:

       Straightforward installation: just click Next. PS: choose the installation location yourself.

Vagrant:

       Straightforward installation: just click Next. PS: choose the installation location yourself.

Enter vagrant in the command window; output like the following indicates that the installation succeeded:

Install a virtual machine with Vagrant 

1. Enter the command (the Vagrantfile file will be generated in the directory after the input is complete):

       vagrant init centos/7

2. Enter the command (automatically download and install the image file in the current directory after execution):

       vagrant up

 3. If an error occurs at this time


4. You need to modify the Vagrantfile and add: config.vm.box_download_insecure=true

5. Configure the default virtual machine location under a path that contains no Chinese characters

6. The installation is successful when VirtualBox shows the machine as running

7. Connect to the virtual machine with the command vagrant ssh; at this point you are logged in as the vagrant account

8. If you need to switch accounts, use the command: sudo -i to switch to the root user

Virtual machine network configuration 

1. View the network segment of the local IP.

2. Open the Vagrantfile and modify the IP address inside to match the network segment your computer allocates to the virtual machine (see the sketch below). 
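A sketch of the relevant Vagrantfile lines (the IP address here is only an example and must sit in the same host-only segment as step 1; the insecure-download flag is the one added in step 4 of the previous section):

config.vm.box_download_insecure = true
config.vm.network "private_network", ip: "192.168.56.10"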

 

3. Right-click on Oracle VM VirtualBox and click Normal Shutdown to shut down the virtual machine.

4. Use Xshell to connect to the virtual machine, connecting with the key.

1. Enter the official website: Install Docker Engine on CentOS | Docker Documentation

 

2. Paste the command into xshell:

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

3. The following prompt appears, proving that docker has not been installed before

4. Execute the following command to install dependencies

sudo yum install -y yum-utils

 5. Configure the yum repository address:

sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

6. Execute the command to install docker

sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin

 7. Start docker

sudo systemctl start docker

8. View docker version

docker version

9. Set docker to start automatically at boot

systemctl enable docker
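To confirm the engine works end to end, a simple sanity check is to run the small test image (it is pulled automatically on first run):

sudo docker run hello-world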

 Docker architecture design

image  

        A Docker image (Image) is a read-only template. Images can be used to create Docker containers, and one image can create many containers.

container  

        Docker uses a container (Container) to run an application or a group of applications independently. A container is a running instance created from an image. It can be started, stopped, and deleted. Each container is an isolated, secure platform. A container can be regarded as a simplified Linux environment (including root user authority, process space, user space, network space, etc.) plus the applications running in it. The definition of a container is almost the same as that of an image: it is also a unified view of a stack of layers; the only difference is that the top layer of the container is readable and writable.

repository

        A repository (Repository) is a place where image files are stored centrally. There is a difference between a repository (Repository) and a registry (Registry): a registry often hosts multiple repositories, each repository contains multiple images, and each image has a different tag. Repositories are divided into public (Public) and private (Private) ones. The largest public registry is Docker Hub ( https://hub.docker.com/ ), which stores a huge number of images for users to download. Domestic public registries include Alibaba Cloud, NetEase Cloud, etc.

Repository access address: Docker Hub

Summary  

        An image file is a template, and a container is a running instance generated from it; a container typically runs one service. When we need it, we create the corresponding running instance through the Docker client. A repository is the place where a pile of images is kept: we can publish images to the repository and pull them from it when needed.

Alibaba Cloud mirror service configuration

        The registry accessed by default is located abroad, so access speed cannot be guaranteed. For a better experience, we can configure Alibaba Cloud's image accelerator.

Open the Alibaba Cloud Container Registry console (image accelerator page):

        https://cr.console.aliyun.com/cn-zhangjiakou/instances/mirrors

Execute the commands in sequence:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://cdy3fxsh.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Common commands for Docker

image commands

Image command                    Description
docker images                    List images on the local host
docker search image_name         Search Docker Hub for an image
docker pull image_name           Download an image from Docker Hub
docker rmi image_name            Delete a local image

 docker images

docker images -a           # list all images, including intermediate layers
docker images -q           # show image IDs only
docker images --digests    # show image digests
docker images --no-trunc   # show full (untruncated) image IDs

 docker search

        Docker Hub is Docker's online registry; we can search for the image we need with docker search.

docker search --no-trunc tomcat
docker search --limit 5 tomcat
docker search -f stars=5 tomcat

Docker pull

Download image files from Docker Hub by executing the commands:
docker pull tomcat
docker pull redis

Docker rmi 

# delete one image: rmi followed by the IMAGE ID
docker rmi -f 7614ae9453d1
# delete multiple images: rmi followed by name1:tag1 name2:tag2
docker rmi -f redis:latest tomcat:latest
# delete all images: docker images -qa returns all image IDs
docker rmi -f $(docker images -qa)

 Common Commands for Containers

download centos7

Execute the command:

docker pull centos:centos7

Create and start the container

docker run [OPTIONS] IMAGE [COMMAND]

interactive container

	docker run -it centos:centos7 /bin/bash

 1. Create a container:

	docker run -it centos:centos7 /bin/bash

2. Open a new window, connect to the virtual machine again, and execute the command to view the running Docker containers

	docker ps

3. Create a file in the running docker container

4. On the host, check whether you can see the created file

5. You cannot, which proves there are two separate environments (see the sketch below)

6. View all containers and status through commands

docker ps -a
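A minimal sketch of steps 3-5 (the file name is arbitrary): create a file inside the container, then check from the host that it is not there, which shows the container has its own filesystem:

# inside the centos:centos7 container
touch /tmp/created-in-container.txt
ls /tmp

# on the host, in the other window
ls /tmp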

 

List running containers 

We want to see which containers are currently running, we can use the ps command to view

docker ps [OPTIONS]

# show all containers, both running and stopped
docker ps -a
# show the most recently created container
docker ps -l
# show the n most recently created containers (n is the number given)
docker ps -n 2
# show container IDs only
docker ps -q
# show all container information without truncating the output
docker ps --no-trunc

exit container command

# stop the container and exit
exit
# exit without stopping the container
ctrl+p+q

 Start the container

docker start container ID or container name

# show information about all containers
docker ps -a
# start a container by the container ID shown
docker start e225ba310ead

Restart the container 

docker restart container id or name

# show information about all containers
docker ps -a
# restart the container
docker restart e225ba310ead

stop container

docker stop container ID or name

It can also be handled by force stop

docker kill container ID or name

# show information about all containers
docker ps -a
# stop the container
docker stop e225ba310ead
# check whether the container is still running
docker ps
# force-stop the container
docker kill e225ba310ead
# check whether the container is still running
docker ps

delete container

Sometimes the container is useless after it is used. We want to delete the container. At this time, we can use the rm command:

docker rm container ID

docker rm -f $(docker ps -qa)

docker ps -a -q | xargs docker rm

# show information about all containers
docker ps -a
# delete a container by container ID
docker rm e225ba310ead
# show information about all containers
docker ps -a
# delete all containers
docker rm $(docker ps -qa)
# show information about all containers
docker ps -a
# run the hello-world image (downloads it if missing)
docker run hello-world
# delete all containers
docker ps -aq | xargs docker rm

other commands

docker run -d image name

We can see that the container just started has exited through docker ps -a

In order to keep the daemon container running all the time, we can run a looping script in the background after starting the container

docker run -d centos /bin/bash -c 'while true;do echo hello bobo;sleep 2;done'

View the log of our operation

docker logs -t -f --tail 3 container ID

docker logs -t -f --tail 3 04f46687e241

View the processes running in the container

docker top container ID

View container details

We want to view the details of the container through the inspect command

docker inspect container ID

docker inspect 04f46687e241

 Enter the running container

(Note: in a Dockerfile, COPY only copies files, while ADD copies and also decompresses archives.)

Execute the command inside the container to view the files inside the container:

docker exec -it 04f46687e241 ls

 Attach to the container and see the output of its main process:

docker attach 04f46687e241

into the container

docker exec -it b73b /bin/bash

file copy

We sometimes need to copy content from the container to the host

       docker cp containerID:path_in_container destination_path_on_host

docker cp d798b92d6cae:/root/hello.txt /root
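Copying in the other direction (host into container) uses the same command with the arguments swapped; the paths here are just examples:

docker cp /root/hello.txt d798b92d6cae:/root/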

Docker image file 

what is an image

        First of all, what exactly is an image? Although images and containers were introduced before, not in much depth.

        An image is a lightweight, executable, self-contained software package used to bundle a software runtime environment and the software developed for it. It contains everything needed to run a given piece of software, including the code, runtime, libraries, environment variables and configuration files.

UnionFS

        UnionFS (Union File System): a layered, lightweight, high-performance file system in which changes to the file system are committed layer by layer and stacked, and different directories can be mounted under the same virtual file system (unite several directories into a single virtual filesystem). The Union file system is the basis of Docker images. Images can be inherited through layers: based on a base image (one without a parent image), various specific application images can be built.

        Features: multiple file systems are loaded at the same time, but from the outside only one file system is visible. Union mounting superimposes the layers so that the final file system contains all the underlying files and directories.

Image loading principle

        Docker image loading principle: Docker images are actually composed of layer-by-layer file systems, such as UnionFS.

        Bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader boots and loads the kernel. When Linux starts, it loads the bootfs file system. The bottom layer of a Docker image is bootfs, the same layer as in a typical Linux/Unix system, including the boot loader and kernel. When boot loading completes, the whole kernel is in memory; ownership of memory passes from bootfs to the kernel, and the system then unmounts the bootfs.

        Rootfs (root file system) sits on top of bootfs. It contains the standard directories and files of a typical Linux system, such as /dev, /proc, /bin, /etc. Rootfs is what differs between operating system distributions, such as Ubuntu, CentOS and so on.

layered images

        In fact, when we pull an image such as Tomcat, we can see on the pull output that the download happens layer by layer.

Features of the layered structure

        We might wonder why Docker uses this layered structure and what its benefits are. One of the biggest benefits is sharing resources. For example, if multiple images are built from the same base image, the host only needs to keep one copy of the base image on disk and load one copy into memory to serve all containers, and every layer of an image can be shared.

Image features

        Everyone should note that Docker images are read-only. When the container starts, a new writable layer is loaded on top of the image. This layer is usually called the container layer, and everything below the container layer is called the image layer.

image operations

Download tomcat image

docker pull tomcat

Create container and start

Map port 8080 in the container to port 8888 of the local machine to start.

docker run -it -p 8888:8080 tomcat

 

The service can be accessed, but a 404 error is returned; the Tomcat process itself started fine (newer official tomcat images ship with an empty webapps directory, the default apps live in webapps.dist).

 

 modify container

If there is no resource accessed in the container, create one yourself.

Enter the tomcat directory:

docker exec -it container_ID /bin/bash

Create a file:

access:
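A minimal sketch of the two steps above, assuming the official tomcat image layout (webapps is empty and the default apps live in webapps.dist):

# inside the tomcat container
cd /usr/local/tomcat/webapps
mkdir ROOT
echo "hello from my docker tomcat" > ROOT/index.html

# then access it from the host through the mapped port
curl http://localhost:8888/index.html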

create an image

Our current container is now different from the downloaded image, and we can create a new image based on it

docker commit -a='bobo' -m='add index.html' container_ID bobo/tomcat:1.666

Start a new image 

Now we can create and start the container through our newly created image file

docker run -it -p 8888:8080 bobo/tomcat:1.666
# or by image ID
docker run -it -p 8888:8080 1b5ba6ab2677

 data volume 

concept

        Earlier we introduced images and containers. Through an image we can start multiple containers, but we find that when a container is stopped or deleted, the application data inside the container is lost as well. To persist data in containers, we use container data volumes.

What is a data volume

        If the data generated by a Docker container is not saved into a new image via docker commit, then when the container is deleted the data naturally disappears. To save data in Docker we use volumes. In simple terms, container volumes are analogous to the RDB and AOF persistence mechanisms in Redis.

what problem was solved

        A volume is a directory or file that exists in one or more containers and is mounted into the container by Docker, but it does not belong to the union file system, so it can bypass the Union File System to provide features for persistent storage or data sharing. Volumes are designed for data persistence and are completely independent of the container's life cycle, so Docker does not delete a mounted data volume when the container is deleted.

Features:

        1. Data volumes can share or reuse data between containers

        2. Changes in the volume can take effect directly

        3. Changes in the data volume will not be included in the update of the mirror

        4. The life cycle of the data volume lasts until no container uses it

Persistence, inheritance and sharing data between containers

data volume usage

Run a centos container and mount a host path to a path inside the container.

docker run -it -v /host_absolute_path:/directory_in_container image_name

docker run -it -v /root/dockerfile1:/root/docker1 centos

 View container details 

docker inspect container_ID

access control

 Only writable in the host, read-only in the container

docker run -it -v /root/dockerfA:/root/dockerfB:ro centos

Read-only in the container:

 Readable and writable in the host machine:

Add data volume through dockerfile 

Create a mydocker directory on the host, and create a file in that directory with the following content:

# volume test
FROM centos
VOLUME ["/dataVolumeContainer1","/dataVolumeContainer2"]
CMD echo "finished,--------success1"
CMD /bin/bash

Build a docker image from the created file

docker build -f docke1 -t lc/centos .

The path mounted on the host can be viewed through the command:

       docker inspect container id
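For example, to print only the mount information (the --format flag filters docker inspect's JSON output; use the ID of the container started from this image):

docker inspect --format '{{ json .Mounts }}' container_id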

data volume container

Share data between containers

        The named container mounts the data volume, and other containers realize data sharing by mounting this container. The container that mounts the data is called a data volume container.

1. Start the parent container

docker run -it --name dc01 lc/centos

2. Start child container 1 and mount the volumes from the parent container

docker run -it --name dc02 --volumes-from dc01 lc/centos

3. Start child container 2 and mount the volumes from the parent container

docker run -it --name dc03 --volumes-from dc01 lc/centos

4. View the effect: create the file a.txt in dataVolumeContainer1 of the parent container

5. In dataVolumeContainer1 of the child containers, you can see the a.txt file

No matter whether the file is modified in container 1, container 2, the parent container, or the host machine, the change can be seen everywhere.

DockerFile

        DockerFile is a build file used to build a Docker image, which is a script composed of a series of commands and parameters

build process 

The instructions in a Dockerfile need to follow a few basic rules: each instruction keyword is written in uppercase and followed by at least one argument, instructions are executed in order from top to bottom, # marks a comment, and each instruction creates a new image layer that is committed.

Implementation process

The process of docker executing a Dockerfile script is roughly as follows:

        1. docker runs a container from a base image

        2. Execute an instruction and make changes to the container

        3. Perform an operation similar to docker commit to submit a new image layer

        4. Docker runs a new container based on the image just submitted

        5. Execute the next instruction in the dockerfile until all instructions are executed

        From the perspective of application software, the Dockerfile, the Docker image and the Docker container represent three different stages of the software:

        Dockerfile is the raw material of software

        Docker images are software deliverables

        The Docker container can be considered as the running state of the software.

        The Dockerfile is development-oriented, the Docker image becomes the delivery standard, and the Docker container involves deployment and operation and maintenance. The three are indispensable, and they work together to serve as the cornerstone of the Docker system.

        1. Dockerfile: you need to define a Dockerfile, which defines everything the process needs. The content involved includes the code or files to execute, environment variables, dependency packages, the runtime environment, dynamic link libraries, the operating system distribution, service processes and kernel processes (when the application process needs to deal with system services and kernel processes, you need to consider how to design namespace permission control), and so on;

        2. Docker image, after defining a file with Dockerfile, a Docker image will be generated during docker build, and when the Docker image is run, it will actually start to provide services;

        3. Docker container, the container directly provides services.

 Custom image 

Write a dockerfile

# base image to build from
FROM centos:centos7

# author's email
MAINTAINER luocong<[email protected]>

# set an environment variable
ENV MYPATH /usr/local

# default directory after attaching a terminal
WORKDIR $MYPATH

# install vim so the command is available
RUN yum -y install vim

# expose port 80
EXPOSE 80

# output information on start
CMD echo $MYPATH
CMD echo "success --->"
CMD /bin/bash

Build

Then build the script into a corresponding image file.

docker build -f <dockerfile name> -t <new image name>:<TAG> .

docker build -f docker1 -t mycentos .

 run

docker run -it image_name
docker run -it mycentos

 image history

docker history image_name
docker history mycentos

 Customize Tomcat

Create a tomcat directory

add a file

Create a hello.txt file in the current directory, the function is to COPY into the container

copy related software

Prepare the corresponding jdk and tomcat compressed files.

Create Dockerfile

Create the corresponding Dockerfile as follows:

FROM centos
MAINTAINER luocong<[email protected]>
# copy hello.txt from the host build context into /usr/local/ in the container
COPY hello.txt /usr/local/helloincontainer.txt
# add java and tomcat to the container (ADD also extracts the archives)
ADD jdk-8u351-linux-x64_(1).tar.gz /usr/local/
ADD apache-tomcat-9.0.70.tar.gz /usr/local/
# install the vim editor
RUN yum -y install vim
# set the WORKDIR, i.e. the landing directory after login
ENV MYPATH /usr/local
WORKDIR $MYPATH
# configure java and tomcat environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_351
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.70
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.70
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
# port the container listens on at runtime
EXPOSE 8080
# run tomcat on startup
# ENTRYPOINT ["/usr/local/apache-tomcat-9.0.70/bin/startup.sh" ]
# CMD ["/usr/local/apache-tomcat-9.0.70/bin/catalina.sh","run"]
CMD /usr/local/apache-tomcat-9.0.70/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.70/logs/catalina.out

Build

docker build -f docker3 -t mytomcat .

 run

docker run -it -p 9080:8080 --name mytomcat -v /root/dockerfile/tomcat/test:/usr/local/apache-tomcat-9.0.70/webapps/test -v /root/dockerfile/tomcat/tomcatlogs/:/usr/local/apache-tomcat-9.0.70/logs --privileged=true mytomcat

 access test

Create a web.xml file in the corresponding directory mounted on the host

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
    <display-name>test</display-name>
</web-app>

 create jsp file

<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>Insert title here</title>
</head>
<body>
    -----------welcome------------
    <%="i am in docker tomcat self "%>
    <br>
    <br>
    <% System.out.println("=============docker tomcat self");%>
</body>
</html>

MySql installation

Pull the mysql image

docker pull mysql:5.6

Run the container:

docker run -p 12345:3306 --name mysql -v /root/mysql/conf:/etc/mysql/conf.d -v /root/mysql/logs:/logs -v /root/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.6

 Connect to the MySQL database with a client tool
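For example, from a machine with a MySQL client installed (the VM address is an assumption; use your virtual machine's IP and the password set above):

mysql -h 192.168.56.10 -P 12345 -u root -p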

Redis installation 

Pull the image:

docker pull redis:4.0

Start redis:

docker run -p 6379:6379 -v /root/myredis/data:/data -v /root/myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis:4.0 redis-server /usr/local/etc/redis/redis.conf --appendonly yes

 Enter the running redis container:

docker exec -it f2b70fe249d0 redis-cli
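Once inside redis-cli, a quick read/write check (key and value are arbitrary):

set k1 hello
get k1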

Docker network

        Docker is a custom container format built on top of Linux kernel technologies such as namespaces, CGroups and union file systems, which together provide a virtualized operating environment.

        namespace: used for isolation, such as pid [process], net [network], mnt [mount point]

        CGroups: Control Groups, used to limit resources such as memory and CPU

        UnionFileSystem: used for image and container layering

Custom network

Create a network of type Bridge

docker network create tomcat-net
# or specify the subnet explicitly
docker network create --subnet=172.18.0.0/24 tomcat-net

View existing NetWork:

docker network ls

View tomcat-net details:

docker network inspect tomcat-net

 delete network:

docker network rm tomcat-net

Create a tomcat container and specify tomcat-net

docker run -d --name custom-net-tomcat --network tomcat-net tomcat-ip:1.0

Check the network information of custom-net-tomcat (key information excerpted):

 

View network card interface information

brctl show

 

At this point, ping tomcat01 (a container on the default bridge network) from the custom-net-tomcat container; the ping fails

docker exec -it custom-net-tomcat ping 172.17.0.2

 At this point, connect the tomcat01 container to tomcat-net so the two containers can reach each other:

docker network connect tomcat-net tomcat01
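After connecting, tomcat01 is also attached to tomcat-net; on a user-defined network container names resolve through Docker's embedded DNS, so the ping by name should now succeed:

docker exec -it custom-net-tomcat ping tomcat01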

Docker in practice 

Build a MySql database cluster

1. Pull the mysql image

docker pull percona/percona-xtradb-cluster:5.7.21

2. Copy the pxc image [rename] 

docker tag percona/percona-xtradb-cluster:5.7.21 mysqlcluster

3. Delete the original image

docker rmi percona/percona-xtradb-cluster:5.7.21

 4. Create a separate network segment for the MySQL database cluster

docker network create --subnet=172.30.0.0/24 mysqlcluster

5.  Volume create/delete commands for reference (no need to execute them here)

docker volume create --name v1 # 创建 volume
docker volume rm v1 # 删除volume
docker volume inspect v1 # 查看详情

 6. Build a pxc cluster and prepare three data volumes

docker volume create --name mysql1
docker volume create --name mysql2
docker volume create --name mysql3

7.  Run 3 PXC containers

[CLUSTER_NAME PXC cluster name]

[XTRABACKUP_PASSWORD The password needed for database synchronization]

8.  Create the first node

docker run -d -p 3301:3306 -v mysql1:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=MSQLCLUSTER -e XTRABACKUP_PASSWORD=123456 --privileged --name=nodeI --net=mysqlcluster --ip=172.30.0.2 mysqlcluster

 9. Create the second and third nodes; note the extra -e CLUSTER_JOIN=nodeI

docker run -d -p 3302:3306 -v mysql2:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=MSQLCLUSTER -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=nodeI --privileged --name=nodeII --net=mysqlcluster --ip=172.30.0.3 mysqlcluster
docker run -d -p 3303:3306 -v mysql3:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=MSQLCLUSTER -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=nodeI --privileged --name=nodeIII --net=mysqlcluster --ip=172.30.0.4 mysqlcluster

10.  The test connection is successful
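A quick way to verify replication (the database name is arbitrary): connect to node I via port 3301 and create a database, then connect to port 3302 or 3303 and confirm it appears there.

-- on node I (port 3301)
CREATE DATABASE cluster_test;
-- on node II or III (port 3302 / 3303)
SHOW DATABASES;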

HaProxy load balancing

 Pull the image:

docker pull haproxy

 

Enter the tmp directory, create the haproxy folder, and create the haproxy.cfg file inside it:

cd /tmp
mkdir haproxy
cd haproxy
touch haproxy.cfg
vi haproxy.cfg

 Add the following:

global
# working directory; must match the directory specified when creating the container
# chroot /usr/local/etc/haproxy
# log file
log 127.0.0.1 local5 info
# run as a daemon
daemon

defaults
log global
mode http
# log format
option httplog
# do not log load-balancing heartbeat checks
option dontlognull
# connect timeout (ms)
timeout connect 5000
# client timeout (ms)
timeout client 50000
# server timeout (ms)
timeout server 50000

# monitoring page
listen admin_stats
# IP and port for accessing the monitoring page
bind 0.0.0.0:8888
# access protocol
mode http
# relative URI
stats uri /dbs_monitor
# statistics report format
stats realm Global\ statistics
# login account information
stats auth admin:admin

# database load balancing
listen proxy-mysql
# IP and port to listen on; haproxy exposes port 3306
# requests to haproxy's port 3306 are forwarded to the database instances below
bind 0.0.0.0:3306
# network protocol
mode tcp
# load balancing algorithm (round robin)
# round robin: roundrobin
# weighted: static-rr
# least connections: leastconn
# source IP: source
balance roundrobin
# log format
option tcplog
# create a haproxy user with no privileges and an empty password in MySQL;
# HAProxy uses this account for heartbeat checks against the databases
option mysql-check user haproxy
server MySQL_1 172.30.0.2:3306 check weight 1 maxconn 2000
server MySQL_2 172.30.0.3:3306 check weight 1 maxconn 2000
server MySQL_3 172.30.0.4:3306 check weight 1 maxconn 2000
# use keepalive to detect dead connections
option tcpka

Create a haproxy container

docker run -d -p 8888:8888 -p 3306:3306 -v /tmp/haproxy:/usr/local/etc/haproxy --name haproxy01 --privileged --net=mysqlcluster haproxy

Create a user on the MySQL database for heartbeat detection

CREATE USER 'haproxy'@'%' IDENTIFIED BY '';

Browser access:

       http://192.168.56.10:8888/dbs_monitor 
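Applications can then connect through HAProxy's port 3306 instead of a single node, for example (the VM address is the same one used for the monitor page):

mysql -h 192.168.56.10 -P 3306 -u root -p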

SpringBoot project deployment 

Project Architecture:

Create a network for the springboot project:

docker network create --subnet=172.24.0.0/24 sbm-net

 Create a mysql database table:

create database test;
use test;
CREATE TABLE
    t_user
    (
        uid INT NOT NULL AUTO_INCREMENT,
        uname VARCHAR(20),
        PRIMARY KEY (uid)
    )
    ENGINE=InnoDB DEFAULT CHARSET=utf8 DEFAULT COLLATE=utf8_general_ci;

Create ssm project:

        Refer to: Vue+Axios+SSM framework implementing CRUD and file upload (CSDN blog)

Package the created project into a jar package and upload it to the server

        cd /tmp

        mkdir springboot

        cd springboot/

        yum install -y lrzsz

Run rz to upload the jar package        

Create DockerFile file

       vi Dockerfile

Add the following:

FROM openjdk:8
MAINTAINER bobo
LABEL name="springboot-mybatis" version="1.0" author="bobo"
COPY ssm-0.0.1-SNAPSHOT.jar ssm-SNAPSHOT.jar
CMD ["java","-jar","ssm-SNAPSHOT.jar"]

 build dockerfile

docker build -t sbm-image .

 Start the image of the jar package:

docker run -d --name sb01 -p 8081:8080 --net=sbm-net --ip=172.24.0.11 sbm-image

Browser access:

       http://192.168.56.20:8081/getList
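The nginx upstream configured below expects three backend instances at 172.24.0.11-13, so presumably two more containers are started from the same image along these lines (container names and host ports are assumptions):

docker run -d --name sb02 -p 8082:8080 --net=sbm-net --ip=172.24.0.12 sbm-image
docker run -d --name sb03 -p 8083:8080 --net=sbm-net --ip=172.24.0.13 sbm-image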

Download the nginx image

docker pull nginx

 Enter the tmp directory and create the configuration file:

	cd /tmp
	mkdir nginx
	cd nginx
	vi nginx.conf

Add the following:

user nginx;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        location / {
            proxy_pass http://balance;
        }
    }

    upstream balance {
        server 172.24.0.11:8080;
        server 172.24.0.12:8080;
        server 172.24.0.13:8080;
    }

    include /etc/nginx/conf.d/*.conf;
}

Start nginx:

docker run -d --name my-nginx -p 80:80 -v /tmp/nginx/nginx.conf:/etc/nginx/nginx.conf --network=sbm-net --ip 172.24.0.20 nginx

Browser access:

       http://192.168.56.20/getList

DockerCompose 

Introduction

        Compose is a tool for defining and running multi-container Docker applications. With Compose, you can use YML files to configure all the services your application needs. Then, with a single command, all services can be created and started from the YML file configuration.

Start all services with one click

        Steps to use Docker Compose

        Create the corresponding DockerFile file

        Create a yml file and arrange our services in the yml file

        Run our container with one click through the docker-compose up command

Compose installation

Official website address: Overview | Docker Documentation

Execute the command:

curl -L "https://github.com/docker/compose/releases/download/1.29.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

 Make the binary executable:

chmod +x /usr/local/bin/docker-compose

Create a soft link:

ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Verify that the installation was successful:

docker-compose --version

Compose one-click deployment in practice 

Deploy the blog project

create folder

mkdir my-wordpress
cd my-wordpress/
vi docker-compose.yml

Add the following:

services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    #image: mariadb:10.6.4-focal
    # If you really want to use MySQL, uncomment the following line
    image: mysql:8.0.27
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    volumes:
      - wp_data:/var/www/html
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
volumes:
  db_data:
  wp_data:

Start with the up command

docker-compose up -d

Browser access:

http://192.168.56.30/wp-admin/install.php

Compose deploys Springboot project 

1. Create a springboot project, rely on web and redis

2. Add the following content to the application.properties file:

server.port=8080
spring.redis.host=redis
spring.redis.port=6379

 3. Write the controller code:

@RestController
public class MyController {

    @Autowired
    StringRedisTemplate stringRedisTemplate;

    @RequestMapping("hello")
    public String hello(){
        Long counter = stringRedisTemplate.opsForValue().increment("counter");
        return "页面访问次数"+counter;
    }
}

 4. Write dockerfile

FROM java:8
COPY springcount-0.0.1-SNAPSHOT.jar app.jar
CMD ["--server.port=8080"]

EXPOSE 8080
ENTRYPOINT ["java","-jar","app.jar"]

5. Write compose file

version: '3.9'
services:
  myapp:
    build: .
    image: myapp
    depends_on:
      - redis
    ports:
    - "8080:8080"
  redis:
    image: "library/redis:alpine"


6. Package the project as a jar

7. Create a directory to upload jar packages, dockerfile files and yml files

8. Run compose and execute the command:

docker-compose up

 9.  Access address: http://192.168.56.30:8080/hello

 

Compose common operations 

(1) Check the version
	docker-compose version
(2) Create services from the yml file
	docker-compose up
Specify a yaml file:
	docker-compose -f xxx.yaml up
Run in the background:
	docker-compose up -d
(3) View the services that started successfully
	docker-compose ps
	(docker ps also works)
(4) View images
	docker-compose images
(5) Stop / start services
	docker-compose stop/start
(6) Remove the services (this also removes the network and volumes)
	docker-compose down
(7) Enter a service container
	docker-compose exec redis sh

Harbor private server

Introduction

        The development and operation of Docker container applications cannot do without reliable image management. Although Docker officially provides a public image registry, for security and efficiency it is often necessary to deploy a Registry in our own private environment. Harbor is an enterprise-level Docker Registry management project open-sourced by VMware; it includes features such as permission management (RBAC), LDAP, log auditing, a management UI, self-registration, image replication and Chinese language support.

Features

Install 

        Official website installation tutorial: https://goharbor.io/docs/2.3.0/install-config/

        First you need to download the installation file for: Releases · goharbor/harbor · GitHub

        Because the file is relatively large and downloading from the official site is slow, the installation file (version 2.3.3) is used here directly.

1. Enter the directory, create a file, upload the file
    

cd /usr/local/


2. Unzip the file
    

tar -zxvf harbor-offline-installer-v2.3.3.tgz


3. Rename the yml

cd harbor
mv harbor.yml.tmpl harbor.yml


4. Configure HTTPS access. Generate the CA certificate private key:

openssl genrsa -out ca.key 4096


    Generate the CA certificate (set the domain / CN):

openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=192.168.56.30" \
-key ca.key \
-out ca.crt

5.  Generate a private key

openssl genrsa -out 192.168.56.30.key 4096

6. Generate a Certificate Signing Request (CSR)

openssl req -sha512 -new \
-subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=192.168.56.30" \
-key 192.168.56.30.key \
-out 192.168.56.30.csr

7. Generate an x509 v3 extension file

        Regardless of whether you connect to the Harbor host using an FQDN or an IP address, this file must be created so that a certificate complying with the Subject Alternative Name (SAN) and x509 v3 extension requirements can be generated for your Harbor host. Replace the DNS/IP entries to reflect your own domain or address.

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment,
dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = IP:192.168.56.30
EOF

 8. Use the v3.ext file to generate a certificate for your Harbor host

vi v3.ext

Replace the content (the SAN IP must match your Harbor host):

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment,dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = IP:192.168.56.30

Then generate the certificate:

openssl x509 -req -sha512 -days 3650 \
-extfile v3.ext \
-CA ca.crt -CAkey ca.key -CAcreateserial \
-in 192.168.56.30.csr \
-out 192.168.56.30.crt

9. Provide the certificates to Harbor and Docker. After generating the ca.crt, server certificate and key files, they must be provided to both Harbor and Docker. Copy the server certificate and key to the /data/cert/ folder on the Harbor host:

mkdir -p /data/cert
cp 192.168.56.30.crt /data/cert/
cp 192.168.56.30.key /data/cert/

10. Convert 192.168.56.30.crt to 192.168.56.30.cert for use by Docker

openssl x509 -inform PEM -in 192.168.56.30.crt -out 192.168.56.30.cert

11. Copy the server certificate, key, and CA files to the Docker certificates folder on the Harbor host. You must first create the appropriate folder, execute the command:

mkdir -p /etc/docker/certs.d/192.168.56.30/
cp 192.168.56.30.cert /etc/docker/certs.d/192.168.56.30/
cp 192.168.56.30.key /etc/docker/certs.d/192.168.56.30/
cp ca.crt /etc/docker/certs.d/192.168.56.30/

12.  Restart the Docker engine

systemctl restart docker

13.  Configure the harbor service and modify the harbor.yml file

cd /usr/local/harbor
vim harbor.yml

14.  In harbor.yml, set the hostname and point the https certificate and private_key entries at the files under /data/cert (see the sketch below)
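A sketch of the relevant harbor.yml entries (values assumed for this setup):

hostname: 192.168.56.30

https:
  port: 443
  certificate: /data/cert/192.168.56.30.crt
  private_key: /data/cert/192.168.56.30.key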

 15.  Initialize the harbor service

sh install.sh

 16.  Browser access:

https://192.168.56.30/
账号:admin
密码:Harbor12345

Login and image pull 

1. Create a project

2. Click on the project name

3. Click Members, Users

4. Select user and maintain user role

5.  Login private server

docker login <private-registry-address>
	enter the username and password

 6.  Rename the image

docker tag redis:latest 192.168.56.40/dpb/redis-dpb:1.0

 push image

docker push 192.168.56.40/dpb/redis-dpb:1.0
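Any machine that trusts the certificate can then log in and pull the image back, for example:

docker pull 192.168.56.40/dpb/redis-dpb:1.0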

 swarm

        Swarm is the cluster management tool officially provided by Docker. Its main function is to abstract several Docker hosts into a single whole and manage the Docker resources on these hosts through one entry point. Swarm is similar to Kubernetes, but lighter and with fewer features.

management node 

 

The management node handles cluster management tasks:

        Maintain cluster state

        Scheduling service

        Serving swarm mode HTTP API endpoints

        Using a Raft implementation, the managers maintain a consistent internal state of the entire swarm and of all the services running on it. For testing purposes it is possible to run a swarm with a single manager. If the manager in a single-manager swarm fails, your services continue to run, but you will need to create a new cluster to recover.

        To take advantage of the fault-tolerant nature of swarm mode, Docker recommends that you implement an odd number of nodes based on your organization's high availability requirements. When you have multiple managers, you can recover from the failure of a manager node without downtime.

        A group of three managers can tolerate the loss of at most one manager.

        A five-manager swarm can tolerate the simultaneous loss of at most two manager nodes.

        An N-manager cluster can tolerate at most (N-1)/2 loss of managers.

        Docker recommends a maximum of seven manager nodes for a swarm.

work node

        A worker node is also an instance of the Docker engine whose sole purpose is to execute containers. Worker nodes do not participate in the Raft distributed state, do not make scheduling decisions, and do not serve the swarm-mode HTTP API. You can create a swarm consisting of a manager node, but you cannot have a worker node without at least one manager node. By default, all managers are also workers. In a single manager node cluster, you can run a command like docker service create and the scheduler will put all tasks on the local engine.

        To prevent the scheduler from placing tasks on manager nodes in a multi-node swarm, set the manager nodes' availability to Drain. The scheduler gracefully stops tasks on nodes in Drain mode and schedules the tasks on active nodes. The scheduler does not assign new tasks to nodes with Drain availability.

Swarm cluster construction

Environmental preparation

Prepare 3 nodes and add two new nodes through vagrant

You need to specify the hostname separately to modify the Vagrantfile file

        config.vm.hostname="work01-node"

In addition, each node needs to have a Docker environment

Build a cluster environment

1. You need to specify the hostname separately to modify the Vagrantfile file

config.vm.hostname="work01-node"

 2. Initialize the swarm. The three nodes' addresses are 192.168.56.50, 192.168.56.60 and 192.168.56.70; docker swarm init is only run on the manager node (step 3), while the other nodes join with docker swarm join (step 4).

3. Execute the command on the manager node:

docker swarm init --advertise-addr 192.168.56.50

4. Execute the command on the other two nodes:

docker swarm join --token SWMTKN-1-435rvyvj7py1xiwsycyuawh1tejhtxxr3u9ovldykoijohxqcb-co2m3e2kbo6igvesxj5w5bxva 192.168.56.50:2377
docker swarm join --token SWMTKN-1-435rvyvj7py1xiwsycyuawh1tejhtxxr3u9ovldykoijohxqcb-co2m3e2kbo6igvesxj5w5bxva 192.168.56.50:2377

5.  Execute the command on the manager node to view the nodes:
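The command for listing the swarm nodes is:

docker node ls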

Tomcat Service Orchestration 

Pull the tomcat image

docker pull tomcat

Create a tomcat service

docker service create --name my-tomcat tomcat

View the current swarm service

docker service ls

View service startup log

docker service logs my-tomcat

View service details

docker service inspect my-tomcat

Check which node my-tomcat is running on

docker service ps my-tomcat

Horizontally expand service, expand tomcat to three

docker service scale my-tomcat=3
docker service ls
docker service ps my-tomcat

Log: It can be found that a my-tomcat service is running on other nodes

Go to worker01-node at this point and run docker ps; you will find that the container name differs from the service name, which is worth knowing

If my-tomcat on one node goes down, swarm automatically starts a replacement task on another node to keep the replica count

delete service

docker service rm my-tomcat

 WordPress in action

1. Create a network

docker network create -d overlay my-overlay-net

 2. create mysql

docker service create --name mysql --mount type=volume,source=v1,destination=/var/lib/mysql --env MYSQL_ROOT_PASSWORD=examplepass --env MYSQL_DATABASE=db_wordpress --network my-overlay-net mysql:5.6

 3.  Create WordPress Service

docker service create --name wordpress --env WORDPRESS_DB_USER=root --env WORDPRESS_DB_PASSWORD=examplepass --env WORDPRESS_DB_HOST=mysql:3306 --env WORDPRESS_DB_NAME=db_wordpress -p 8080:80 --network my-overlay-net wordpress

4. Access test

http://192.168.56.50:8080/wp-admin/

 

Origin blog.csdn.net/weixin_43195884/article/details/128845276