Dark Horse Microservices--Docker Container Technology Study Notes

Foreword

Getting to know Docker

What is Docker

  • Microservices have many advantages, but splitting a system into many services usually makes deployment much harder
    • A distributed system has many interdependent components, and conflicts often appear when different components are deployed together
    • Deploying hundreds or thousands of services repeatedly, the environments are not guaranteed to be consistent and all kinds of problems come up

Environmental Issues for Application Deployment

  • There are many components in large-scale projects, and the operating environment is also relatively complex. Some problems will be encountered during deployment.
    • Dependencies are complex and prone to compatibility issues
    • Development, testing, and production environments are different

  • For example, deploying a project may depend on node.js, Redis, RabbitMQ, MySQL, and so on. The function libraries and dependencies these services need are different, and conflicts between them bring great difficulty to deployment

Docker solves dependency compatibility issues

  • Docker solves these problems ingeniously. How does it do so?

  • In order to solve the compatibility problem of dependencies, Docker adopts two methods

    1. Package the application's function libraries (libs), dependencies (deps), and configuration together with the application
    2. Run each application in an isolated container to avoid mutual interference
  • An application packaged this way contains not only the application itself but also the function libraries and dependencies it needs. Since these are carried inside the package rather than installed on the operating system, there are naturally no compatibility issues between different applications.

  • Although compatibility between different applications is solved, environments still differ between development, testing, and other stages, and operating system versions differ as well. How are these problems solved?

Docker resolves operating system environment differences

  • To solve the problem of differences in the environment of different operating systems, you must first understand the structure of the operating system. Taking an Ubuntu operating system as an example, the structure is as follows

    • System applications: the applications and function libraries provided by the operating system itself. These function libraries are encapsulations of kernel instructions and are more convenient to use
    • System kernel: The kernel of all Linux distributions is Linux, such as CentOS, Ubuntu, Fedora, etc. The kernel can interact with computer hardware and provide kernel instructions to operate computer hardware.
    • Computer hardware: such as CPU, memory, disk, etc.
  • The process by which an application interacts with the computer is as follows

    1. The application calls the operating system's function libraries to implement various functions
    2. The system function libraries are encapsulations of the kernel instruction set and in turn call kernel instructions
    3. Kernel instructions operate computer hardware
  • Both Ubuntu and CentOS are based on the Linux kernel, but the system applications are different, and the function libraries provided are different.

  • At this point, if you install the Ubuntu build of a MySQL application on a CentOS system, MySQL will find the Ubuntu function libraries missing or mismatched when it calls them, and it will report errors

  • How does Docker solve the problems of different system environments?

    • Docker packages the function libraries of the system the user program needs (such as Ubuntu's) together with the program
    • When the image runs on a different operating system, it runs directly on the packaged function libraries, relying only on the Linux kernel of the host operating system

summary

  • How does Docker solve the compatibility problems caused by the complex dependencies of different components in large projects?
    • During development, Docker packages the application, its dependencies, function libraries, and configuration together into a portable image
    • Docker applications run inside containers using a sandbox mechanism and are isolated from each other
  • How does Docker solve the problem of differences between development, testing, and production environments?
    • A Docker image contains a complete runtime environment, including the system function libraries, and depends only on the Linux kernel of the host, so it can run on any Linux operating system
  • Docker is a technology for quickly delivering and running applications, with the following advantages
    1. A program, its dependencies, and its runtime environment can be packaged into an image that can be migrated to any Linux operating system
    2. The sandbox mechanism is used to form an isolated container during runtime, and each application does not interfere with each other
    3. Both startup and removal can be completed with one line of commands, which is convenient and quick

Difference between Docker and virtual machine

  • Docker lets an application run on any operating system very conveniently. The virtual machines we have used before can also run one operating system inside another and run any application of that system

  • What is the difference between the two?

    • A virtual machine simulates hardware devices inside an operating system and then runs another operating system on top of them. For example, running a CentOS system inside Windows lets you run any CentOS application
    • Docker only encapsulates the function library and does not simulate a complete operating system
    • The comparison is as follows

| Characteristic | Docker          | Virtual machine |
| -------------- | --------------- | --------------- |
| Performance    | Close to native | Poor            |
| Disk usage     | Usually MB      | Usually GB      |
| Startup time   | Seconds         | Minutes         |
  • Summary: Differences between Docker and virtual machines
    • Docker is a system process; the virtual machine is the operating system in the operating system
    • Docker is small in size, fast in startup speed, and good in performance; the virtual machine is large in size, slow in startup speed, and has average performance

Docker Architecture

Images and Containers

  • There are several important concepts in Docker
    • Image: Docker packages an application together with the dependencies, function libraries, environment, and configuration it needs; this package is called an image
    • Container: the process formed when the application in an image runs. Docker isolates container processes so they are invisible to the outside world
  • Every application is ultimately just code, i.e. files made of bytes on disk; only when it runs is it loaded into memory to form a process
  • An image is a package of files on disk: the application's files, the runtime environment, and some system function library files packaged together. This package is read-only (to keep you from modifying or polluting the image and making it unusable; when a container needs to write data, it first copies the file from the image into its own space)
  • A container loads the programs and functions written in these files into memory and lets them form processes, which are kept isolated from each other. One image can therefore be started multiple times to form multiple container processes.

DockerHub

  • There are many open-source applications, and packaging them is often repetitive work. To avoid this, people share their packaged application images, such as Redis and MySQL images, on the Internet for everyone to use, just like sharing code on GitHub

  • On the one hand, we can share our own image to DockerHub, on the other hand, we can also pull the image from DockerHub

Docker Architecture

  • If we want to use Docker to operate images and containers, we must install Docker
  • Docker is a client-server (CS) program consisting of two parts
    • Server: the Docker daemon process, responsible for processing Docker commands and managing images, containers, and so on
    • Client: sends commands to the Docker server through the command line or the REST API; commands can be sent locally or remotely

summary

  • Image:
    • Packages an application together with its dependencies, environment, and configuration
  • Container:
    • A running image forms a container, and one image can be run as multiple containers
  • Docker architecture:
    • Server: accepts commands or remote requests and operates on images or containers
    • Client: sends commands or requests to the Docker server
  • DockerHub:
    • An image hosting service, similar to Alibaba Cloud's image service; such services are collectively called a DockerRegistry

Install Docker

  • Docker is divided into two major versions, CE and EE. CE stands for Community Edition, free of charge, with a support period of 7 months; EE stands for Enterprise Edition, which emphasizes security and is paid for, with a support period of 24 months.

  • Docker CE has three update channels: stable, test, and nightly.

  • The official documentation provides installation guides for different platforms. Here we mainly cover installing Docker CE on CentOS.

  • Docker CE supports the 64-bit version of CentOS 7, and requires a kernel version not lower than 3.10
    {% note warning no-icon %}
    CentOS 7 meets the minimum kernel requirements. This article also installs Docker on CentOS 7
    {% endnote %}

uninstall (optional)

  • If you have installed an old version of Docker before, you can use the following command to uninstall it
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine \
                  docker-ce

Install Docker

  • First install the yum tool
yum install -y yum-utils \
           device-mapper-persistent-data \
           lvm2 --skip-broken
  • Then update the local mirror source
# Configure the Docker package mirror source
yum-config-manager \
    --add-repo \
    https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
sed -i 's/download.docker.com/mirrors.aliyun.com\/docker-ce/g' /etc/yum.repos.d/docker-ce.repo

yum makecache fast
  • Then install the community version of Docker
yum install -y docker-ce

start docker

  • Docker applications need to use many ports, and modifying firewall rules one by one is troublesome, so it is recommended to simply disable the firewall
# Stop the firewall
systemctl stop firewalld
# Prevent the firewall from starting on boot
systemctl disable firewalld
  • Start Docker with the following command
# Start the docker service
systemctl start docker 

# Stop the docker service
systemctl stop docker

# Restart the docker service
systemctl restart docker
  • Then enter the command to view the docker version
docker -v

result

[root@localhost ~]# docker -v
Docker version 20.10.21, build baeda1f

Configuring Mirroring Acceleration

  • The download speed from Docker's official registry is poor, so it is recommended to configure a domestic registry mirror. See Alibaba Cloud's image accelerator documentation: https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors (a sketch of the resulting configuration is shown below)
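  • As a reference, the accelerator configuration usually ends up in /etc/docker/daemon.json roughly like the sketch below; the accelerator URL is a placeholder that you get from your own Alibaba Cloud console
# Write the accelerator address into the Docker daemon configuration (placeholder URL)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
# Reload the configuration and restart Docker
systemctl daemon-reload
systemctl restart docker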

Basic operation of Docker

Images

Image names

  • First look at the name composition of the image:
    • The image name is generally divided into two parts: [repository]:[tag]
      {% note info no-icon %}
      For example mysql:5.7, mysql here is the repository, 5.7 is the tag, and together they are the image name, representing the 5.7 version of the MySQL image
      {% endnote %}
    • When no tag is specified, the default is latest, which represents the latest version of the image, for example mysql:latest
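  • For instance, these standard pull commands illustrate the naming, with and without an explicit tag:
# Pull a specific version: repository mysql, tag 5.7
docker pull mysql:5.7
# Pull without a tag: equivalent to mysql:latest
docker pull mysql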

Image commands

  • The common image commands are shown in the figure below; since the figure is not reproduced here, they are summarized in the sketch that follows
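  • A quick reference of the standard Docker CLI image commands (the image names here are just examples):
docker pull nginx:latest                 # pull an image from a registry (e.g. DockerHub)
docker push myrepo/nginx:1.0             # push an image to a registry (the name must include the repository prefix)
docker images                            # list local images
docker rmi nginx:latest                  # remove a local image
docker save -o nginx.tar nginx:latest    # export an image to a tar archive
docker load -i nginx.tar                 # load an image from a tar archive
docker build -t myimage:1.0 .            # build an image from the Dockerfile in the current directory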

Case number one

  • In this case we get some hands-on practice by pulling and viewing an image
    {% note info no-icon %}
    Requirements: Pull an Nginx image from DockerHub and view
    {% endnote %}
  1. First, search for the Nginx image in an image registry (such as DockerHub)

  2. Based on the image name you found, pull the image you need. The command docker pull nginx pulls the latest nginx image

[root@localhost ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
025c56f98b67: Pull complete 
ec0f5d052824: Pull complete 
cc9fb8360807: Pull complete 
defc9ba04d7c: Pull complete 
885556963dad: Pull complete 
f12443e5c9f7: Pull complete 
Digest: sha256:75263be7e5846fc69cb6c42553ff9c93d653d769b94917dbda71d42d3f3c00d3
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

From the log, we can also see that if no tag is added, the default latest is used, that is, the latest docker image is pulled

  3. Use the docker images command to view the pulled image
[root@localhost ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED      SIZE
nginx        latest    3964ce7b8458   4 days ago   142MB

case two

  • In this case, we need to save and import the image
    {% note info no-icon %}
    Requirement: use docker save to export the nginx image to disk, then load it back with docker load
    {% endnote %}
  1. Use docker xx --help to view the syntax of docker save and docker load
    {% note info no-icon %}
  • Input docker save --help, the result is as follows
[root@localhost ~]# docker save --help

Usage:  docker save [OPTIONS] IMAGE [IMAGE...]

Save one or more images to a tar archive (streamed to STDOUT by default)

Options:
  -o, --output string   Write to a file, instead of STDOUT

Command format:

docker save -o [target file name] [image name]
  • Input docker load --help, the result is as follows
[root@localhost ~]# docker load --help

Usage:  docker load [OPTIONS]

Load an image from a tar archive or STDIN

Options:
  -i, --input string   Read from tar archive file, instead of STDIN
  -q, --quiet          Suppress the load output

Command format:

docker load -i [image archive file name]

{% endnote %}

  2. Use docker save to export the image to disk, then use the ls command to confirm the nginx.tar file exists
docker save -o nginx.tar nginx:latest
  3. Use docker load to load the image. Before that, delete the local nginx image with the following command
docker rmi nginx:latest # rmi is short for "remove image"

Then run the command to load the local file

docker load -i nginx.tar

Practice

{% note info no-icon %}
Requirement: Go to DockerHub to search and pull a Redis image

  1. Go to DockerHub to search for the Redis image
  2. View the name and version of the Redis image
  3. Use the docker pull command to pull the image
  4. Use the docker save command to package redis:latest into redis.tar
  5. Use docker rmi to delete the local redis:latest
  6. Use docker load to reload the redis.tar file
    {% endnote %}
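  • One possible set of commands for this practice, using only the standard Docker CLI covered above:
docker pull redis                        # pull the latest Redis image
docker images                            # check the name and tag of the pulled image
docker save -o redis.tar redis:latest    # package it into redis.tar
docker rmi redis:latest                  # delete the local image
docker load -i redis.tar                 # load it back from the tar file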

container operations

Container related commands

  • The container operation commands are shown in the figure (and listed individually below)

  • The three states of a container

    • running: the process is running normally
    • Paused: the process is suspended; the CPU no longer schedules it, but its memory is not released
    • Stopped: the process is terminated, and the memory, CPU, and other resources it occupied are reclaimed
      {% note info no-icon %}
  • The operating system handles paused and stopped containers differently. Pausing suspends the processes in the container: the memory associated with the container is kept, the CPU simply stops executing those processes, and docker unpause restores them so the program keeps running from where it left off.

  • Stopping kills the processes directly and reclaims the memory the container occupied; only the container's file system, i.e. its static resources, is kept

  • docker rm deletes the container's file system as well, i.e. removes the container completely
    {% endnote %}

  • docker run: Create and run a container, in running state

  • docker pause: Pauses a running container

  • docker unpause: Resume a container from a paused state

  • docker stop: Stop a running container

  • docker start: make a stopped container run again

  • docker rm: delete a container
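  • As a quick sanity check, the whole lifecycle above can be walked through with an existing image; the container name lifecycleDemo is just a made-up example:
docker run --name lifecycleDemo -d nginx               # create and run a container (running state)
docker pause lifecycleDemo                             # pause it (memory kept, CPU no longer scheduled)
docker unpause lifecycleDemo                           # resume it
docker stop lifecycleDemo                              # stop it (processes killed, file system kept)
docker start lifecycleDemo                             # start the stopped container again
docker stop lifecycleDemo && docker rm lifecycleDemo   # stop it, then remove it completely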

Case number one

  • Commands to create and run nginx containers
docker run --name containerName -p 80:80 -d nginx
  • command interpretation

    • docker run: create and run a container
    • --name: Give the container a name, for example called myNginx
    • -p: Map the host port to the container port, the left side of the colon is the host port, and the right side is the container port
    • -d: run the container in the background
    • nginx: mirror name, such as nginx
  • The -p parameter here maps the container port to a host port

  • By default, the container is an isolated environment; if we access port 80 of the host directly, we cannot reach the nginx inside the container

  • Now, port 80 of the container is associated with port 80 of the host. When we access port 80 of the host, it will be mapped to port 80 of the container, so that we can access nginx

  • Then we can enter the virtual machine's ip:80 in the browser to see the nginx default page; a quick check from the host is shown below
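  • A minimal check run on the docker host itself, assuming the container was started as above (name myNginx, port 80 mapped):
docker ps                   # the myNginx container should show STATUS Up and PORTS 0.0.0.0:80->80/tcp
curl http://localhost:80    # should return the nginx welcome page HTML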

case two

{% note info no-icon %}
Requirement: enter the Nginx container, modify the content of the HTML file, and add "Welcome To My Blog!"
Hint: you need to use the docker exec command to enter the container

[root@localhost ~]# docker exec --help

Usage:  docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Run a command in a running container

Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a container
  -e, --env list             Set environment variables
      --env-file list        Read in a file of environment variables
  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command
  -t, --tty                  Allocate a pseudo-TTY
  -u, --user string          Username or UID (format: <name|uid>[:<group|gid>])
  -w, --workdir string       Working directory inside the container

{% endnote %}

  1. Enter the container, i.e. the nginx container we just created
docker exec -it myNginx bash
  • command interpretation
    • docker exec: Enter the container and execute a command
    • -it: Create a standard input and output terminal for the currently entered container, allowing us to interact with the container
    • myNginx: the name of the container to enter
    • bash: The command executed after entering the container, bash is a linux terminal interactive command
  2. Enter the directory where nginx's HTML files are located

    • Inside the container, an independent Linux file system is simulated, so it looks like a small Linux server. nginx's environment, configuration, and runtime files are all in this file system, including the html file we want to modify
    • Looking at the nginx page on DockerHub, you can find that nginx's html directory is located at /usr/share/nginx/html
    • We execute the command to enter the directory
    cd /usr/share/nginx/html
    

    View the files in the directory

    root@310016c9b413:/usr/share/nginx/html# ls
    50x.html  index.html
    
  3. Modify the content of index.html

    • There is no vi command in the container, so the file cannot be edited directly; we modify it with the following command instead
    sed -i -e 's#Welcome to nginx#Welcome To My Blog#g' index.html
    
  4. Visit your virtual machine's ip:80 in the browser to see the result (the :80 can be omitted)
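  • Since docker exec can run any command, the same change can also be made without opening an interactive shell; this one-liner assumes the same container name and file path as above:
docker exec myNginx sed -i 's#Welcome to nginx#Welcome To My Blog#g' /usr/share/nginx/html/index.html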

summary

  • docker runWhat are the common parameters of the command?

    • --name: specify the container name
    • -p: Specify port mapping
    • -d: Let the container run in the background
  • Commands to view container logs

    • docker logs
    • Add the -f parameter to follow the log output continuously
  • View container status:

    • docker ps
    • docker ps -a views all containers, including stopped ones

{% note info no-icon %}
Do you feel that modifying the file this way is troublesome? Because no vi command is provided, you cannot edit it directly; this is where the data volumes discussed below come in.
{% endnote %}

data volume

  • In the previous nginx case, modifying nginx's html page required entering the nginx container, and because there is no editor inside, modifying files is very troublesome. This is the consequence of coupling the container with its data (the files inside it): if we run a new nginx container, it cannot directly reuse the html file we already fixed. This coupling has several drawbacks
    1. Inconvenient to modify: to change nginx's html content we have to enter the container, which is inconvenient
    2. Data cannot be reused: modifications inside a container are invisible to the outside, so none of them can be reused by a newly created container
    3. Hard to upgrade and maintain: the data lives inside the container, so upgrading the container means deleting the old one, and all the data in it (including the modified html page) is deleted too
  • To solve this problem, the data and the container must be decoupled, which requires the use of data volumes

What is a data volume

  • A data volume (volume) is a virtual directory that points to a directory in the host file system

  • Once a data volume is mounted, operations on that directory in the container are applied to the corresponding host directory. In this way, operating on the host's /var/lib/docker/volumes/html directory is equivalent to operating on the /usr/share/nginx/html directory inside the container

Data volume commands

  • The basic syntax for data volume operations is as follows
docker volume [COMMAND]
  • docker volume is the data volume command; what it does is determined by the subcommand that follows it
    • create: create a volume
    • inspect: Display information about one or more volumes
    • ls: list all volumes
    • prune: delete unused volume
    • rm: Delete one or more specified volumes

Create and view data volumes

{% note info no-icon %}
Requirement: Create a data volume and view the directory location of the data volume on the host machine
{% endnote %}

  1. Create data volume
docker volume create html
  2. View all data volumes
docker volume ls

result

[root@localhost ~]# docker volume ls
DRIVER    VOLUME NAME
local     html
  3. View the details of a data volume
docker volume inspect html

result

[root@localhost ~]# docker volume inspect html
[
    {
        "CreatedAt": "2022-12-19T12:51:54+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/html/_data",
        "Name": "html",
        "Options": {},
        "Scope": "local"
    }
]

You can see that the host directory associated with the html data volume we created is /var/lib/docker/volumes/html/_data

  • summary:
    • The role of data volumes
      • Separate and decouple the container from the data, facilitate the operation of the data in the container, and ensure data security
    • Data volume operations:
      • docker volume create: create data volume
      • docker volume ls: View all data volumes
      • docker volume inspect: View the details of the data volume, including the location of the associated host directory
      • docker volume rm: delete the specified data volume
      • docker volume prune: delete all unused data volumes (a cleanup sketch follows)
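  • A small cleanup sketch using the last two commands; only run it if you no longer need the html volume:
docker volume rm html     # remove the html volume created above (fails if a container is still using it)
docker volume prune -f    # remove all volumes not used by any container, skipping the confirmation prompt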

mount data volume

  • When we create a container, we can use the -v parameter to mount a data volume to a directory in a container. The command format is as follows
docker run \
    --name myNginx \
    -v html:/root/html \
    -p 8080:80 \
    nginx
  • The -v here is the parameter that mounts the data volume
    • -v html:/root/html: Mount the html data volume to the /root/html directory in the container

Case number one

{% note info no-icon %}
Requirement: Create an nginx container and modify the index.html content in the html directory inside the container
Analysis: In the previous case we entered the nginx container and learned that nginx's html directory is /usr/share/nginx/html. We mount that directory onto the html data volume so its contents are easy to manipulate
Tip: When running the container, use the -v parameter to mount the data volume
{% endnote %}

  1. Create a container and mount the data volume to the HTML directory in the container
docker run --name myNginx -v html:/usr/share/nginx/html -p 80:80 -d nginx
  2. Find the location of the html data volume on the host and modify the HTML content there
# View the data volume's location
docker volume inspect html
# Enter that directory
cd /var/lib/docker/volumes/html/_data
# Modify the file
vi index.html
# You can also edit the file from FinalShell with an external editor (e.g. VSCode)
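  • A quick way to confirm the change without entering the container, assuming the container from step 1 is still running with port 80 mapped to the host:
curl http://localhost:80    # the returned page should contain whatever changes you made on the host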

case two

  • Containers can mount not only data volumes but also host directories directly; the relationship is as follows

    • With data volume mode: host directory --> data volume --> container directory
    • Direct mount mode: host directory --> container directory
  • The syntax of directory mount and data volume mount is similar

    • -v [host directory]:[container directory]
    • -v [host file]:[file in container]

{% note info no-icon %}
Requirement: Create and run a MySQL container, mount the host directory directly to the container
{% endnote %}

  1. Pull a MySQL image from DockerHub
docker pull mysql
  2. Create the directory /tmp/mysql/data
mkdir -p /tmp/mysql/data
  3. Create the directory /tmp/mysql/conf and upload the myCnf.cnf file to /tmp/mysql/conf
    {% tabs upload cfg file %}
mkdir -p /tmp/mysql/conf
[mysqld]
skip-name-resolve
character_set_server=utf8
datadir=/var/lib/mysql
server-id=1000

{% endtabs %}
  4. Go to DockerHub and check the documentation to find the locations of the conf directory and the data directory inside the mysql container:
     the conf directory in the container is /etc/mysql/conf.d
     the data directory in the container is /var/lib/mysql
  5. Create and run the MySQL container, which requires you to
     - mount /tmp/mysql/data to the data storage directory of the mysql container
     - mount /tmp/mysql/conf/myCnf.cnf into the configuration directory of the mysql container
     - set the MySQL root password
docker run \
    --name mysql \
    -e MYSQL_ROOT_PASSWORD=root \
    -v /tmp/mysql/conf:/etc/mysql/conf.d \
    -v /tmp/mysql/data:/var/lib/mysql \
    -p 3306:3306 \
    -d \
    mysql
  6. Try connecting to the database with Navicat; remember to use the password you set yourself

summary

  • docker runIn the command, the file or directory is mounted into the container through the -v parameter

    • -v [volume name]:[directory in container]
    • -v [host file]:[file in container]
    • -v [host directory]:[directory in container]
  • The difference between data volume mounting and direct directory mounting

    • Data volume mounting has lower coupling, and the directory is managed by docker, but the directory is buried deep and hard to find
    • Directory mounting has higher coupling and we manage the directory ourselves, but the directory is easy to find and view

Dockerfile custom image

  • Common images can be found on DockerHub, but for projects we write ourselves we must build the image ourselves. To customize an image, you must first understand the image structure.

Image structure

  • An image is a package of an application together with the system function libraries, environment, configuration, and dependencies it needs

  • Take MySQL as an example to see its mirror composition structure

  • Simply put, an image is built by starting from the system function libraries and runtime environment, adding the application's files, configuration files, dependencies, and so on, writing a startup script, and packaging everything together

  • Building an image is really just carrying out this packaging process ourselves

Dockerfile syntax

  • When building a custom image, there is no need to copy and package each file.
  • We only need to tell Docker the composition of our image, which BaseImages are needed, what files need to be copied, what dependencies need to be installed, and what the startup script is. In the future, Docker will help us build images
  • The file that describes all of this information is the Dockerfile.
  • A Dockerfile is a text file containing instructions (Instructions) that describe what operations to perform to build the image; each instruction forms a layer (Layer).
| Instruction | Description | Example |
| ----------- | ----------- | ------- |
| FROM | Specify the base image | FROM centos:6 |
| ENV | Set an environment variable that later instructions can use | ENV key value |
| COPY | Copy a local file into the specified directory of the image | COPY ./mysql-5.7.rpm /tmp |
| RUN | Execute a Linux shell command, usually an installation command | RUN yum install gcc |
| EXPOSE | Declare the port the container listens on at runtime, as documentation for image users | EXPOSE 8080 |
| ENTRYPOINT | The startup command of the application in the image, invoked when the container runs | ENTRYPOINT java -jar xx.jar |

build java project

Build a Java project based on Ubuntu

{% note info no-icon %}
Requirement: Build a new image based on the Ubuntu image and run a Java project
{% endnote %}

  1. Create an empty folder docker-demo
mkdir /tmp/docker-demo
  2. Copy the docker-demo.jar file to the docker-demo directory
  3. Copy the jdk8.tar.gz file to the docker-demo directory
  4. Create a new Dockerfile in the docker-demo directory and write the following content
# Specify the base image
FROM ubuntu:16.04

# Configure an environment variable: the JDK install directory
ENV JAVA_DIR=/usr/local

# Copy the JDK into the JAVA_DIR directory
COPY ./jdk8.tar.gz $JAVA_DIR/

# Install the JDK
RUN cd $JAVA_DIR && tar -xf ./jdk8.tar.gz && mv ./jdk1.8.0_44 ./java8

# Configure environment variables
ENV JAVA_HOME=$JAVA_DIR/java8
ENV PATH=$PATH:$JAVA_HOME/bin

# Copy the Java project jar to the target path, here /tmp/app.jar
COPY ./docker-demo.jar /tmp/app.jar

# Expose the port. Note it is 8090 here; if you have not disabled the firewall, disable it or open this port (the same applies to cloud servers)
EXPOSE 8090

# Entry point: the startup command of the Java project
ENTRYPOINT java -jar /tmp/app.jar
  5. Use the docker build command to build the image in the docker-demo directory
docker build -t docker_demo:1.0 .
  6. Use the docker images command to view the image
[root@localhost docker-demo]# docker images
REPOSITORY    TAG       IMAGE ID       CREATED              SIZE
docker_demo   1.0       c8acd2dd02cf   About a minute ago   722MB
redis         latest    29ab4501eac3   2 days ago           117MB
nginx         latest    3964ce7b8458   5 days ago           142MB
ubuntu        16.04     b6f507652425   15 months ago        135MB
mysql         5.7.25    98455b9624a9   3 years ago          372MB
  7. Create and run a docker_demo container
docker run --name testDemo -p 8090:8090 -d docker_demo:1.0
  8. Visit http://192.168.128.130:8090/hello/count in the browser and you can see the page (replace the IP with your own virtual machine's)
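  • If the page does not come up, two standard checks (assuming the container name testDemo from the step above) are the container log and a curl from the docker host:
docker logs -f testDemo                    # follow the container's startup log
curl http://localhost:8090/hello/count     # call the same endpoint from the docker host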

Build Java projects based on Java8

  • Although we can build an image from a bare Ubuntu base image and add whatever packages we need, it is cumbersome. In most cases we can instead start from a base image that already has the required software installed.
  • The Java project image we just built has a fixed, repetitive step: installing the JDK and configuring its environment variables. We would have to repeat it every time we build a Java project image, so instead we can find a base image that already has the JDK installed and build our Java project image on top of it

{% note info no-icon %}
Requirement: Based on the java:8-alpine image, build a Java project as an image
{% endnote %}

  1. Create a new empty directory (or keep using the /tmp/docker-demo directory)
  2. Copy docker-demo.jar into this directory (not needed if you keep using the directory from before)
  3. Create a new file named Dockerfile in the directory and write the following content (modified as follows)
# Use openjdk:8 as the base image
FROM openjdk:8
# Copy the Java project jar to the target path, here /tmp/app.jar
COPY ./docker-demo.jar /tmp/app.jar
# Expose the port
EXPOSE 8090
# Entry point
ENTRYPOINT java -jar /tmp/app.jar
  4. Build the image
docker build -t docker_demo:2.0 .
  5. Create and run a docker_demo container (stop the previous docker_demo container first)
docker run --name testDemo02 -p 8090:8090 -d docker_demo:2.0
  6. Visit http://192.168.128.130:8090/hello/count in the browser and you can see the page

summary

  1. Dockerfile is essentially a file that describes the construction process of the image through instructions
  2. The first line of the Dockerfile must be FROM to build from a base image
  3. The base image can be a basic operating system, such as Ubuntu, or an image someone else has already built, such as openjdk:8

Docker-Compose

  • Docker Compose can help us quickly deploy a distributed application based on a Compose file, without creating and running containers one by one by hand
  • In real enterprise projects there may be dozens or even hundreds of services, so deploying them manually one by one would be very troublesome

Getting to know Docker Compose for the first time

  • The Compose file is a text file that defines how each container in the cluster runs through instructions. The format is as follows
version: "3.8"
  services:
    # docker run --name mysql -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -v /tmp/mysql/data:/var/lib/mysql -v /tmp/mysql/conf/myCnf.cf:/etc/mysql/conf.d/myCnf.cnf -d mysql:5.7.25
    mysql:  # 对应docker run中的 --name
      image: mysql:5.7.25 # 对应docker run中最后声明的镜像
      enviroment:   # 对应docker run中的 -e MYSQL_ROOT_PASSWIRD=root
        MYSQL_ROOT_PASSWORD: root
      volumes: # 对应docker run中的 -v /tmp/mysql/data:/var/lib/mysql
        - "/tmp/mysql/data:/var/lib/mysql"
        - "/tmp/mysql/conf/myCnf.cf:/etc/mysql/conf.d/myCnf.cnf"
    # 这里并不需要-d参数来后台运行,因为此种方法默认就是后台运行
    # 同时也不需要暴露端口,在微服务集群部署中,MySQL仅仅是供给给集群内的服务使用的,所以不需要对外暴露端口

    # 临时构建镜像并运行,下面的配置文件包含了docker build和docker run两个步骤
    # docker build -t web:1.0 .
    # docker run --name web -p 8090:8090 -d web:1.0
    web:
      build: .
      ports:
        - "8090:8090"
  • The Compose file above describes a project that contains two containers:
    • mysql: a container built from the mysql:5.7.25 image, with two directories mounted into it
    • web: a container whose image is built on the fly with docker build, with its port mapped to 8090
  • For the detailed syntax of DockerCompose, please refer to the official website: https://docs.docker.com/compose/compose-file/
  • In fact, the DockerCompose file can be regarded as writing multiple docker run commands to a file, but the syntax is slightly different

Install Docker Compose

  • Use the command to download under Linux
# Install
curl -L https://github.com/docker/compose/releases/download/1.23.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
  • Modify file permissions
chmod +x /usr/local/bin/docker-compose
  • Add Bash auto-completion for docker-compose
curl -L https://raw.githubusercontent.com/docker/compose/1.29.1/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose

If an error occurs Failed connect to raw.githubusercontent.com:443; Connection refused, you need to modify your own hosts file

echo "199.232.68.133 raw.githubusercontent.com" >> /etc/hosts

If a new error occurs TCP connection reset by peer, repeat the command and try several times
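  • Once installed, a quick check plus the standard docker-compose subcommands used later in this section:
docker-compose --version    # verify the installation
docker-compose up -d        # create and start all services defined in docker-compose.yml in the background
docker-compose ps           # list the services and their state
docker-compose logs -f      # follow the logs of all services (or of one, e.g. docker-compose logs -f userservice)
docker-compose restart      # restart services (optionally naming specific ones)
docker-compose down         # stop and remove the services and their network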

Deploy a microservice cluster

{% note info no-icon %}
Requirement: Deploy the previously learned cloud-demo microservice cluster using DockerCompose
{% endnote %}

  • Implementation ideas
    1. Write docker-compose file
    2. Modify the cloud-demo project so that the database and nacos addresses use the service names defined in docker-compose
    3. Use the Maven packaging tool to package each microservice in the project as app.jar (the jar name must match the one used in the Dockerfile)
    4. Copy the packaged app.jar to each corresponding subdirectory in cloud-demo, and write the Dockerfile
    5. Upload cloud-demo to the virtual machine and deploy it with docker-compose up -d

compose file

  • For the cloud-demo we wrote before, write the corresponding docker-compose file
version: "3.2"

services:
  nacos:
    image: nacos/nacos-server
    environment:
      MODE: standalone
    ports:
      - "8848:8848"
  mysql:
    image: mysql:5.7.25
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - "$PWD/mysql/data:/var/lib/mysql"  # $PWD expands to the current directory on the Linux host
      - "$PWD/mysql/conf:/etc/mysql/conf.d"
  userservice:
    build: ./user-service
  orderservice:
    build: ./order-service
  gateway:
    build: ./gateway
    ports:
      - "10010:10010"
  • It contains 5 services:
    1. nacos: as a registration center and configuration center
      • image: nacos/nacos-server: build based on nacos/nacos-server image
      • environment: environment variable
        • MODE: standalone: start in single-node (standalone) mode
      • ports: port mapping, where port 8848 is exposed
    2. mysql: database
      • image: mysql:5.7.25: built from the 5.7.25 version of the MySQL image
      • environment: environment variable
        • MYSQL_ROOT_PASSWORD: root: set the database root account password to root
      • volumes: mount the data volume, where the data and conf directories of mysql are mounted
    3. userservice: built on the fly from its Dockerfile. It does not need to expose a port: the gateway is the single entry point to the microservices, and if userservice's port were exposed directly, the gateway's authentication and authorization checks could be bypassed
    4. orderservice: built on the fly from its Dockerfile; no port needs to be exposed, for the same reason as above
    5. gateway: built on the fly from its Dockerfile; the gateway does need to expose its port, since it is the entry point to the other microservices

Modify microservice configuration

  • When deploying with Docker Compose, services can reach each other by service name, so we need to modify the yml configuration files in our cloud-demo project, as follows {% tabs modify the yml configuration files in cloud-demo %}
spring:
  cloud:
    nacos:
      # server-addr: localhost:80 # Nacos address
      server-addr: nacos:8848 # services reach each other by compose service name: replace localhost with nacos
      config:
        file-extension: yaml # file extension
server:
  port: 8081
spring:
  datasource:
    # url: jdbc:mysql://localhost:3306/cloud_user?useSSL=false
    url: jdbc:mysql://mysql:3306/cloud_user?useSSL=false # same idea here: replace localhost with mysql
    username: root
    password: root
    driver-class-name: com.mysql.jdbc.Driver

{% endtabs %}

Packaging

  • Package the modified code; note that you must modify the pom file and set the final package name to app
<build>
    <!-- Final name of the packaged service -->
    <finalName>app</finalName>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
  • Then use the maven tool to package
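  • A typical packaging command, run from the cloud-demo root (skipping tests is optional; this assumes a standard multi-module Maven project):
mvn clean package -DskipTests    # each module produces target/app.jar thanks to <finalName>app</finalName>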

Copy the jar package to the deployment directory and write the Dockerfile

{% tabs write three dockerfile %}

FROM openjdk:8
COPY ./app.jar /tmp/app.jar
ENTRYPOINT java -jar /tmp/app.jar
FROM openjdk:8
COPY ./app.jar /tmp/app.jar
ENTRYPOINT java -jar /tmp/app.jar
FROM openjdk:8
COPY ./app.jar /tmp/app.jar
ENTRYPOINT java -jar /tmp/app.jar

{% endtabs %}

  • The final directory structure is as follows
    • cloud-demo
      • gateway
        • app.jar
        • Dockerfile
      • order-service
        • app.jar
        • Dockerfile
      • user-service
        • app.jar
        • Dockerfile
      • mysql
        • data
        • conf
      • docker-compose.yml

deploy

  • Upload cloud-demo to the virtual machine, enter the directory, and execute the following command
docker-compose up -d
  • Check the logs after startup and you will find an error like com.alibaba.nacos.api.exception.NacosException: failed to req API:/nacos/v1/ns/instance/list after all servers([nacos:8848]) tried: java.net.ConnectException: Connection refused (Connection refused)
docker-compose logs -f

The nacos connection failed because userservice started before nacos was ready: nacos starts slowly, the registration attempt fails, and there is no retry mechanism (retrying registration after nacos finishes starting would avoid this problem)

  • Therefore, it is recommended to start nacos alone first, and start other services later. My solution here is to restart the other three services
  • Restart the gateway userservice orderservice service
docker-compose restart gateway userservice orderservice
  • Check the userservice startup log, this time no error will be reported
docker-compose logs -f userservice
  • Open the browser to visit http://192.168.128.130:10010/user/1?authorization=admin, you can also see the data

Docker image registry

  • A private image registry can be built with the DockerRegistry officially provided by Docker
  • Official website address: https://hub.docker.com/_/registry

Build a private image registry

  • The projects we write ourselves are obviously not suitable for pushing to Docker's public registry, so we need to build a private registry

Configure Docker trust address

  • Our private registry uses the http protocol, which Docker does not trust by default, so we need to add some configuration:
# Open the file to modify
vi /etc/docker/daemon.json
# Add the following content:
"insecure-registries":["http://192.168.128.101:8080"]
# Reload the daemon configuration
systemctl daemon-reload
# Restart docker
systemctl restart docker
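  • Note that the insecure-registries entry must sit inside the JSON object in /etc/docker/daemon.json. If the accelerator from earlier is also configured, the whole file might look roughly like the sketch below; the accelerator URL is a placeholder, and the registry address should be whatever address and port your registry actually listens on:
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"],
  "insecure-registries": ["http://192.168.128.101:8080"]
}
EOF
systemctl daemon-reload
systemctl restart docker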

Version with GUI

  • Use DockerCompose to deploy a DockerRegistry with a graphical interface; the compose file is as follows:
version: '3.0'
services:
  registry:
    image: registry
    volumes:
      - ./registry-data:/var/lib/registry
  ui:
    image: joxit/docker-registry-ui:static
    ports:
      - 8080:80
    environment:
      - REGISTRY_TITLE=Kyle's Blog Private Registry
      - REGISTRY_URL=http://registry:5000
    depends_on:
      - registry
  • Then open the browser and visit http://192.168.128.130:8080/ to see the image registry's graphical interface

Push and pull images

  • To push an image to the private registry, you must re-tag it first. The steps are as follows
    1. Re-tag the local image; the name prefix is the address of the private registry: 192.168.128.130:8080/
    docker tag nginx:latest 192.168.128.130:8080/nginx:1.0

    2. Push the image
    docker push 192.168.128.130:8080/nginx:1.0

    3. Pull the image
    docker pull 192.168.128.130:8080/nginx:1.0
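  • If the push succeeds, the image should be visible both in the web UI and through the registry's HTTP API. Assuming the /v2 API is reachable on port 8080 (which is how pushes through that port work in this setup), a quick check is:
curl http://192.168.128.130:8080/v2/_catalog    # should list the pushed repositories, e.g. {"repositories":["nginx"]}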
    

Summarize

{% link What is Docker – Programmer Xiaohui, https://zhuanlan.zhihu.com/p/187505981, https://static.zhihu.com/heifetz/favicon.ico %}

  • As programmers, how should we understand Docker?

    • The origin of container technology

      • Suppose the company is secretly developing the next "Today's Toutiao" app; let's call it Tomorrow's Toutiao. A programmer builds an environment from scratch and starts writing code. Once the code is written, it goes to the testers, and the testers then build the same environment from scratch all over again. If problems crop up during testing, the programmer does not have to worry much.

      • After testing is done, it is finally time to go live, and the ops team has to build the same environment from scratch yet again. After a lot of effort the launch goes ahead and, unfortunately, the system crashes. A programmer with strong nerves can then put on an acting performance: "but it clearly runs in my environment."

      • Looking at the whole process, not only were three copies of the same environment built, programmers were also forced to moonlight as actors and waste their acting talent. This is a typical waste of time and efficiency. Smart programmers are never satisfied with the status quo, so once again it was time for programmers to change the world, and container technology was born.

      • Some people may say: "Wait, don't change the world just yet. We already have virtual machines, and VMware is easy to use. Can't we just set up the environment once in a virtual machine and then clone it for the testers and the ops team?"

    • Before container technology existed, this was indeed a workable approach, but it is still not that good.

      • A bit of background first: the underlying foundation of today's cloud computing is virtual machine technology. Cloud vendors buy piles of hardware, build data centers, and then use virtualization to carve up the hardware resources, for example into 100 virtual machines that can be sold to many different users.
    • So why isn't this approach good enough?

      • Because the operating system is too heavyweight: just running the OS consumes a lot of resources. Everyone knows this from experience: a freshly installed system with nothing deployed on it already takes several GB.

      • All we need is to run a simple application; we do not want to waste memory on an operating system that is useless to our application.

      • There is also the startup-time problem. We know that rebooting an operating system is very slow, because it has to detect everything that needs detecting and load everything that needs loading; this can easily take minutes, so the operating system is simply too clunky.

  • So is there a technology that gives us the benefits of virtual machines while overcoming these shortcomings, letting us have the best of both worlds?

    • The answer is yes: container technology.
  • What is a container

    • In software, "container" is the same English word as the shipping container. The shipping container is a remarkable invention in the history of commerce that greatly reduced the cost of ocean trade and transport. Look at the benefits of shipping containers:
      • Containers are isolated from each other
      • long-term repeated use
      • Fast loading and unloading
      • Standard specifications, can be placed in the port and on the ship
    • Back to containers in software: the concept is actually very similar to the shipping container.
    • A major goal of modern software development is isolation: applications run independently without interfering with each other. Such isolation is not easy to achieve; one solution is the virtual machine technology mentioned above, deploying applications in different virtual machines to isolate them.
    • But virtual machine technology has the various shortcomings mentioned above, so what about container technology?
    • Different from the isolation of virtual machines through the operating system, container technology only isolates the runtime environment of the application, but the same operating system can be shared between containers. The runtime environment here refers to the various libraries and configurations that the program depends on.
    • Compared with an operating system's memory footprint of several GB, container technology needs only a few MB of space, so we can deploy far more containers than virtual machines on hardware of the same spec. And unlike an operating system that takes minutes to boot, containers start almost instantly. Container technology offers a more efficient way to package a service stack.
  • So how do we use containers? This is where Docker comes in.

    • Note that containers are a general technology, and docker is just one implementation.
  • What is Docker

    • Docker is an open-source project implemented in Go that lets us create and use containers conveniently. Docker packages a program and all of its dependencies into a docker container, so the program behaves consistently in any environment. The program's runtime dependencies are the goods, the container is the shipping container, and the operating system environment is the cargo ship or the port: how the program behaves depends only on the container and has nothing to do with which ship or port (operating system) the container is placed on.
    • So docker can mask environment differences: as long as your program is packaged into docker, its behavior is consistent no matter what environment it runs in. Programmers can no longer perform "it runs in my environment", and "build once, run everywhere" truly becomes possible.
    • Another benefit of docker is rapid deployment, the most common scenario in Internet companies. One reason is that containers start very quickly; the other is that once the program in one container runs correctly, you can be sure it will run correctly no matter how many copies are deployed in production.
  • How to use Docker

    • There are several concepts in docker:

      • dockerfile
      • image
      • container
    • In fact, you can simply understand the image as an executable program, and the container is the running process.

    • Writing a program requires source code; likewise, "writing" an image requires a dockerfile. The dockerfile is the source code of the image, and docker is the "compiler".

    • So we only need to specify in the dockerfile which programs are needed and which configurations they depend on, then hand the dockerfile to the "compiler" docker to "compile" it; that is the docker build command. The generated "executable program" is the image. We can then run the image; that is the docker run command, and once the image is running it becomes a docker container.

    • The specific method of use will not be repeated here. You can refer to the official documentation of docker, where there are more detailed explanations.

  • How does Docker work

    • Docker uses a common client-server (CS) architecture. The docker client handles the various commands the user enters, such as docker build and docker run, while the real work is done by the server, the docker daemon. It is worth noting that the docker client and the docker daemon can run on the same machine.
  • Next, let's walk through Docker's workflow with a few commands:

    1. docker build
      • When we finish writing the dockerfile and give it to docker to "compile", we use this command, then the client forwards it to the docker daemon after receiving the request, and then the docker daemon creates an "executable program" image based on the dockerfile.
    2. docker run
      • After you have the "executable program" image, you can run the program. Next, use the command docker run. After receiving the command, the docker daemon finds the specific image, and then loads it into the memory to start execution. When the image is executed, it is called a container.
    3. docker pull
      • In fact, docker build and docker run are the two core commands. If you can use these two commands, you can basically use docker, and the rest are some supplements.
      • So what does docker pull mean?
      • As we said before, the concept of image in docker is similar to "executable program". Where can we download applications written by others? Very simple, that is the APP Store, the application store. Similarly, since image is also an "executable program", is there a "Docker Image Store"? The answer is yes, this is Docker Hub, the official "app store" of docker, where you can download images written by others, so that you don't have to write dockerfile yourself.
      • The docker registry can be used to store various images, and the public warehouse for anyone to download images is the docker hub. So how to download the image from Docker Hub, that is the docker pull command here.
      • Therefore, the implementation of this command is also very simple, that is, the user sends the command through the docker client, and the docker daemon sends an image download request to the docker registry after receiving the command, and stores it locally after downloading, so that we can use the image.
  • Finally, let's look at Docker's underlying implementation

    • Docker provides several functions based on the Linux kernel:
      • NameSpace
        We know that PID, IPC, network and other resources in Linux are global, and the NameSpace mechanism is a resource isolation scheme. Under this mechanism, these resources are no longer global, but belong to a specific NameSpace. The resources under each NameSpace do not interfere with each other, which makes each NameSpace look like an independent operating system, but only the NameSpace is not enough.
      • Control groups
        NameSpace technology achieves resource isolation, but processes can still consume system resources such as CPU, memory, disk, and network without restriction. To control a containerized process's access to resources, Docker uses control groups (cgroups). With cgroups you can limit the system resources consumed by the processes in a container, for example capping the memory a container may use or restricting which CPUs it can run on.
      • With these two technologies, containers really look like standalone operating systems.
  • Summary

    • Docker is a very popular technology at present, and many companies use it in production environments, but the underlying technology that docker relies on has actually appeared a long time ago, and now it is rejuvenated in the form of docker, and can solve the problems it faces very well
