Learn about Docker installation and getting started in one article

Detailed explanation of Docker in one article

What is Docker?

Docker is an open source containerization platform designed to simplify the process of deploying, running, and managing applications. It is based on containerization technology, which can package applications and their dependencies into an independent, portable container, thereby enabling applications to run consistently in different environments.

A container is a lightweight, self-contained unit of execution that contains an application and all the software, libraries, environment variables, and configuration files it needs to run. By using Docker, you can package an application and its dependencies into a container image, and then deploy and run this container image in any environment that supports Docker without worrying about differences in the underlying environment.

Docker provides a set of command-line tools and APIs for creating, managing, and running containers. It is built on operating-system-level virtualization: early versions used Linux Containers (LXC), while more recent releases use the containerd container engine together with a low-level container runtime (runc) to achieve efficient containerization.

Using Docker, you can quickly deploy and expand applications, improving the efficiency of development and operation and maintenance. It provides isolation, portability, and reusability, making applications easier to package, deliver, and manage. In addition, Docker also supports network communication and data volume mounting between containers, which facilitates interconnection and data sharing between containers.

In summary, Docker is a containerization platform that provides tools and environments to simplify application deployment and management, allowing applications to run consistently across different environments.

What are the application scenarios of Docker?

Docker has a wide range of application scenarios. Here are some common application scenarios:

  1. Application packaging and delivery: Docker can package an application and all its dependencies into a container image, ensuring that the application runs consistently in different environments. This makes application deployment and delivery easier and more reliable.
  2. Application deployment and scaling: With Docker, applications can be deployed and scaled quickly. By packaging an application into a container image, you can deploy and run it in any Docker-enabled environment. Applications can also be scaled horizontally on demand to meet growing load.
  3. Microservices architecture: Docker is well suited to building and managing microservice architectures. Each microservice can be packaged into an independent container, achieving decoupling and independent deployment between services. Docker makes microservices easier to manage and scale, and simplifies communication and collaboration between them.
  4. Continuous integration and continuous deployment (CI/CD): Docker can be combined with CI/CD pipelines to automate building, testing, and deployment. Docker containers provide a consistent build and test environment, ensuring the application behaves the same at every stage.
  5. Isolated, consistent development environments: Docker helps developers create lightweight, isolated development environments locally. Developers can use containers to configure and manage an application's runtime environment without affecting the host system, making development environment setup simpler and more repeatable.
  6. Cross-platform and hybrid cloud deployment: Because Docker is cross-platform, container images can be deployed on different operating systems and cloud platforms. This makes applications easier to migrate and scale across environments, and facilitates hybrid cloud deployment and cross-cloud-platform management.

In addition to the above application scenarios, Docker can also be used for the creation of test environments, rapid environment construction and sharing, resource isolation and security, etc. Its flexibility and portability make Docker widely used in modern application development and deployment.

Docker basic concepts

Before understanding the basic concepts of Docker, there are several core terms that need to be understood:

  1. Image: A Docker image is a read-only template that contains the file system, software environment, application, and dependencies required to run a container. Images are the basis for creating containers. You can obtain existing images from Docker Hub or a private registry, or build custom images with a Dockerfile.
  2. Container: A Docker container is a runnable instance of an image, an independent, isolated execution environment. A container contains all the files, environment variables, libraries, and configuration required to run an application. You run an application by starting a container, and you can perform operations such as start, stop, pause, and delete on a container.
    1. The relationship between an image (Image) and a container (Container) is like that between classes and instances in object-oriented programming: the image is the static definition, and the container is the running entity created from the image.
  3. Registry (Docker Registry): A Docker Registry provides a centralized, accessible service for storing and distributing Docker images, allowing users to easily share, download, and manage images.
    1. After the image is built, it can be easily run on the current host. However, if we need to use this image on other servers, we need a centralized service for storing and distributing images. Docker Registry is such a service.
    2. A Docker Registry can contain multiple repositories (Repositories); each repository can contain multiple tags; each tag corresponds to an image.
    3. Usually, a repository contains images of different versions of the same software, and tags are used to identify each version. We can specify which version we want with the format <repository name>:<tag>.
    4. Docker officially provides a public registry called Docker Hub. On Docker Hub, users can find a large number of public images that can be used directly or used as base images to build their own images.
    5. In addition to Docker Hub, users can also build a private Docker Registry to store and manage their own images to achieve higher security and controllability.
  4. Dockerfile: A Dockerfile is a text file that defines how to build a Docker image. Dockerfile contains a series of instructions and configurations to guide the operations of the Docker engine during the build process, such as base image selection, software installation, file copying, environment variable configuration, etc. Through Dockerfile, the automated image building process can be realized.
  5. Docker Compose: Docker Compose is a tool for defining and running multiple Docker containers. It uses a YAML file to configure the relationships and dependencies between multiple containers, and can start, stop and manage the entire container application with a single command.
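As a sketch of what such a YAML file might look like (the service names and images here are illustrative, not taken from this article):

```yaml
# docker-compose.yml (illustrative sketch)
version: "3"
services:
  web:
    image: nginx          # web server container
    ports:
      - "80:80"           # host:container port mapping
  cache:
    image: redis          # a second container the web service can reach by name
```

With this file in place, `docker compose up -d` would start both containers, and `docker compose down` would stop and remove them.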

These basic concepts form the core of Docker. By understanding them, you can better use and manage Docker containerized environments.

Docker installation

Here we reference external resources covering Docker installation in a CentOS 7 environment:

  1. VMware detailed installation tutorial: https://blog.csdn.net/SoulNone/article/details/126681722
  2. VMware virtual machine activation key: https://www.bilibili.com/read/cv21151094
  3. CentOS 7 image download: https://blog.csdn.net/a350904150/article/details/129833998
  4. Installing CentOS 7 in VMware: https://blog.csdn.net/StupidlyGrass/article/details/128646335
  5. Installing Docker in a CentOS 7 environment: https://blog.csdn.net/Siebert_Angers/article/details/127315542

Docker image

Docker needs the corresponding image to exist locally before it can run a container. If the image does not exist locally, Docker will download it from an image registry.

Get image

The command to pull an image from a Docker registry is docker pull. Its format is:

$ docker pull [options] [Docker Registry address[:port]] <repository name>[:tag]
  • Docker registry address: the format is generally <domain name/IP>[:port]. The default address is Docker Hub.
  • Repository name: as mentioned before, this is a two-part name of the form <user name>/<software name>. For Docker Hub, if no user name is given, the default is library, i.e. the official images.
[root@localhost snow]# docker pull ubuntu:16.04
16.04: Pulling from library/ubuntu
58690f9b18fc: Pull complete 
b51569e7c507: Pull complete 
da8ef40b9eca: Pull complete 
fb15d46c38dc: Pull complete 
Digest: sha256:1f1a2d56de1d604801a9671f301190704c25d604a416f59e03c04f5c6ffee0d6
Status: Downloaded newer image for ubuntu:16.04
docker.io/library/ubuntu:16.04
[root@localhost snow]# 

In the example above, no registry address is given, so the image is pulled from Docker Hub. The image name is ubuntu:16.04, so the image tagged 16.04 in the official library/ubuntu repository is downloaded.

From the download process you can see the layered storage mentioned earlier: an image is composed of multiple storage layers, and it is downloaded layer by layer rather than as a single file.
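The layers that make up a downloaded image can be inspected afterwards; for example (a sketch, requiring a local Docker daemon; output omitted since layer IDs vary):

```shell
# Show the layers that make up the image, one line per layer
docker history ubuntu:16.04
```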

Run the container

Once we have the image, we can start and run a container based on this image. Taking the above ubuntu:16.04 as an example, if we plan to start bash inside and perform interactive operations:

[root@localhost snow]# docker run -it --rm ubuntu:16.04 bash
root@3f44186a6166:/# ls -la
total 4
drwxr-xr-x.   1 root root    6 May 27 18:29 .
drwxr-xr-x.   1 root root    6 May 27 18:29 ..
-rwxr-xr-x.   1 root root    0 May 27 18:29 .dockerenv
drwxr-xr-x.   2 root root 4096 Aug  4  2021 bin
drwxr-xr-x.   2 root root    6 Apr 12  2016 boot

docker run is the command to run the container. The detailed command format will be explained in the container section.

List images

To list downloaded images, use the docker image ls command:

root@3f44186a6166:/# exit
exit
[root@localhost snow]# docker image ls
REPOSITORY    TAG       IMAGE ID       CREATED         SIZE
hello-world   latest    9c7a54a9a43c   3 weeks ago     13.3kB
ubuntu        16.04     b6f507652425   21 months ago   135MB

Delete local image

If you want to delete the local image, you can use the docker image rm command, whose format is:

$ docker image rm [options] <image1> [<image2> ...]

Here, <image> can be a short image ID, a full image ID, an image name, or an image digest.

docker image ls lists short IDs by default. When deleting, the first 3 characters or more are usually enough, as long as they are sufficient to distinguish the image from all others.

[root@localhost snow]# docker image ls
REPOSITORY    TAG       IMAGE ID       CREATED         SIZE
redis         latest    0ec8ab59a35f   4 days ago      117MB
hello-world   latest    9c7a54a9a43c   3 weeks ago     13.3kB
ubuntu        16.04     b6f507652425   21 months ago   135MB
[root@localhost snow]# docker image rm 0ec
Untagged: redis:latest
Untagged: redis@sha256:f9724694a0b97288d2255ff2b69642dfba7f34c8e41aaf0a59d33d10d8a42687
Deleted: sha256:0ec8ab59a35faa3aaee416630128e11949d44ac82d15d43053f8af5d61182a5d
Deleted: sha256:f89720eafb298773509177d4b8b76a32adda4da1015ca28f52daa03fc6090499
Deleted: sha256:c3b5385e13627e14d553f09a098275d1f1ada0b6228cc30c92e095d669df799c
Deleted: sha256:b830b2806be6a052c6d857f927f72ef18a2539e69fdb6d51cf95d76d7e06c8f1
Deleted: sha256:8de1c0863fac10debb4e19de0cc27639ae97c1111eca0920649b21d97bc8dded
Deleted: sha256:80940c1d5550f3f56f5ab008aa79e899e4d3c9b6b41b9f76077f31dcfb2c482c
Deleted: sha256:8cbe4b54fa88d8fc0198ea0cc3a5432aea41573e6a0ee26eca8c79f9fbfa40e3
[root@localhost snow]# 

If you look at the output of the command above, you will notice that the deletion behavior falls into two categories: Untagged and Deleted. As introduced earlier, an image is uniquely identified by its ID and digest, while it may carry multiple tags.

When you use the command above to delete an image, you are really asking to remove a particular tag of the image. So the first step is to remove all matching tags, which is the Untagged output we see. Once an image has no tags left, it usually has no reason to exist, and the Deleted operation is triggered.
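For example, deleting by name rather than by ID makes the two behaviors easy to observe (a sketch, requiring a local Docker daemon; output abbreviated):

```shell
# Delete by repository:tag rather than by ID
docker image rm ubuntu:16.04
# If another tag still points at the same image layers, only "Untagged:" lines
# appear; once no tags remain, "Deleted:" lines follow
```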

Using docker commit to understand image composition

Note: besides being a learning aid, the docker commit command has some special uses, such as preserving the state of a compromised container for forensics. However, do not use docker commit to customize images. Image customization should be done with a Dockerfile; if you want to customize an image, see the next section.

Now let us take customizing a web server as an example to explain how the image is built:

[root@localhost snow]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
f03b40093957: Pull complete 
eed12bbd6494: Pull complete 
fa7eb8c8eee8: Pull complete 
7ff3b2b12318: Pull complete 
0f67c7de5f2c: Pull complete 
831f51541d38: Pull complete 
Digest: sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
[root@localhost snow]# docker run --name web1.0 -d -p 80:80 nginx
054d1cdcf5cf2de43116697f2a96e0c50a5138cd753580f55f1966364df155f1
[root@localhost snow]# 

This command will start a container with the nginx image, named web1.0, and map port 80, so that we can use the browser to access the nginx server;

If you are running Docker on Linux, you can directly access: http://localhost; if you are using Docker installed on a virtual machine or cloud server, you need to replace localhost with the virtual machine address or actual cloud server address.

(Screenshot: the default nginx welcome page)

Suppose we don't like this welcome page and we want to change it to other text. We can use the docker exec command to enter the container and modify its content.

[root@localhost snow]# docker exec -it web1.0 bash
root@054d1cdcf5cf:/# echo '<h1>Hello,World!</h1>' > /usr/share/nginx/html/index.html
root@054d1cdcf5cf:/# exit
exit
[root@localhost snow]# 

We entered the web1.0 container in interactive terminal mode and ran bash, obtaining a usable shell; then we used echo to overwrite the contents of /usr/share/nginx/html/index.html with <h1>Hello,World!</h1>.

Refresh the browser and the page will appear as follows:

(Screenshot: the page now shows Hello,World!)

We modified the container's files, that is, we changed the container's storage layer. The specific changes can be seen with the docker diff command:

[root@localhost snow]# docker diff web1.0
C /run
A /run/nginx.pid
C /usr
C /usr/share
C /usr/share/nginx
C /usr/share/nginx/html
C /usr/share/nginx/html/index.html
C /root
A /root/.bash_history
C /etc
C /etc/nginx
C /etc/nginx/conf.d
C /etc/nginx/conf.d/default.conf
C /var
C /var/cache

Docker provides the docker commit command, which saves the container's storage layer as an image. In other words, the container's storage layer is superimposed on the original image to form a new image. When we run this new image later, it will contain the file changes made in the original container.

The syntax format of docker commit is:

$ docker commit [options] <container ID or name> [<repository name>[:<tag>]]
[root@localhost snow]# docker commit --author "snow" --message "edit index.html" web1.0 nginx:2.0
sha256:c06e3839c9eb09fee7e4409290e2d51ddeb214fe87a73d9806cd9479ddf2c9ca
[root@localhost snow]# docker image ls nginx
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
nginx        2.0       c06e3839c9eb   28 seconds ago   143MB
nginx        latest    f9c14fe76d50   2 days ago       143MB
[root@localhost snow]# 

--author specifies the author of the change, and --message records what the change was, much like git version control. The newly customized image can be seen in docker image ls.

After the new image is customized, we can run it:

Here we name the new service web2.0 and map it to host port 81. Visiting http://<ip>:81 again should show the same content as the previously modified web1.0.
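The run command described in the prose above would look something like this (a sketch reconstructed from the text; nginx:2.0 is the image committed earlier):

```shell
# Start a container from the committed image, mapping host port 81 to container port 80
docker run --name web2.0 -d -p 81:80 nginx:2.0
```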

(Screenshot: http://<ip>:81 shows the same modified page)

So far, we have completed the customized image for the first time. We used the docker commit command to manually add a new layer to the old image to form a new image.

Use docker commit with caution

Although the docker commit command helps to intuitively understand the concept of layered image storage, it is not used this way in real environments.

Using docker commit means every operation on the image is a black-box operation; the resulting image is called a black-box image. In other words, apart from the person who made the image, nobody can know what commands were executed or how the image was generated.

Dockerfile custom image

From studying docker commit, we learned that customizing an image really means customizing the configuration and files added in each layer. If we could write the commands for modifying, installing, building, and operating each layer into a script, and use that script to build and customize the image, then the problems mentioned before (lack of repeatability, opacity of the build process, and image bloat) would all be solved. That script is the Dockerfile.

Taking the previous customization of the nginx image as an example, this time we use Dockerfile to customize it.

In a blank directory, create a text file and name it Dockerfile:

[root@localhost snow]# mkdir nginxtest
[root@localhost snow]# cd nginxtest
[root@localhost nginxtest]# touch Dockerfile

Edit it with vim (or any editor) and add the following content:

FROM nginx
RUN echo '<h1>Hello, EveryDay!</h1>' > /usr/share/nginx/html/index.html
  • FROM specifies the base image

    A custom image must be built on top of an existing image. Just as we previously ran a
    container from the nginx image and then modified it, the base image must be specified. FROM
    does exactly that, so FROM is a required instruction in a Dockerfile and must be the first one.

  • RUN execution command

    The RUN instruction is used to execute command-line commands.
    Because of the power of the command line, RUN is one of the most commonly used instructions when customizing images. It has two formats:

    • shell format: RUN <command>, just like a command entered directly on the command line
    RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html
    
    • exec format: RUN ["executable file", "parameter 1", "parameter 2"], this is more like the format in function calls

Since RUN can execute commands just like a shell script, can we map each shell
command to its own RUN instruction? For example:

FROM debian:jessie
RUN apt-get update
RUN apt-get install -y gcc libc6-dev make
RUN wget -O redis.tar.gz "http://download.redis.io/releases/redis-3.2.5.tar.gz"
RUN mkdir -p /usr/src/redis
RUN tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1
RUN make -C /usr/src/redis
RUN make -C /usr/src/redis install

The writing style above creates a 7-layer image. This is completely unnecessary; a better way to write it is as follows:

FROM debian:jessie
RUN buildDeps='gcc libc6-dev make' \
&& apt-get update \
&& apt-get install -y $buildDeps \
&& wget -O redis.tar.gz "http://download.redis.io/releases/redis-3.2.5.tar.gz" \
&& mkdir -p /usr/src/redis \
&& tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 \
&& make -C /usr/src/redis \
&& make -C /usr/src/redis install \
&& rm -rf /var/lib/apt/lists/* \
&& rm redis.tar.gz \
&& rm -r /usr/src/redis \
&& apt-get purge -y --auto-remove $buildDeps

Instead of a separate RUN instruction for each command, a single RUN is used, with && chaining the commands together, collapsing the previous 7 layers into 1. Note also that the final commands remove the downloaded tarball, the source tree, and the build dependencies, so none of them end up in the layer. When writing a Dockerfile, always remind yourself that you are not writing a shell script, but defining how each layer should be built.

Build image

Now that we understand this Dockerfile, let's build the image. Execute the following in the directory containing the Dockerfile:

[root@localhost nginxtest]# docker build -t nginx:3.0 .
[+] Building 0.9s (6/6) FINISHED                                                             
 => [internal] load build definition from Dockerfile                                         
 => => transferring dockerfile: 179B                                                         
 => [internal] load .dockerignore                                                           
 => => transferring context: 2B                                                             
 => [internal] load metadata for docker.io/library/nginx:latest                             
 => [1/2] FROM docker.io/library/nginx                                                       
 => [2/2] RUN echo '<h1>Hello, EveryDay!</h1>' > /usr/share/nginx/html/index.html           
 => exporting to image                                                                       
 => => exporting layers                                                                     
 => => writing image sha256:e899e0dd12df606efc99e3bb62b57d99935db376c9b1760b05ed7352a004c37a 
 => => naming to docker.io/library/nginx:3.0                                                            

(Note: be sure to include the space and the trailing dot at the end of the build command above, otherwise an error will be reported.)

Image build context (Context)

If you look closely, you will see a . at the end of the docker build command. The . denotes the current directory, where the Dockerfile happens to live, but what the command is actually specifying is the context path. So what is a context?

First, you need to understand how docker build works. At runtime, Docker is split into the Docker engine (the server-side daemon) and client-side tools. The Docker engine exposes a set of REST APIs, called the Docker Remote API, and client tools such as the docker command interact with the engine through these APIs to accomplish the various functions. So although it appears that we are running docker commands locally, everything is actually done on the server side (the Docker engine) via remote calls.

When using the docker build command to build an image, you often need to copy some local files into the image. However, it is not built locally, but on the server side, that is, in the Docker engine. So in this client/server architecture, how can the server obtain local files?

This is where the concept of context comes in. When building, the user specifies the build context path. Once docker build knows this path, it packages everything under it and uploads it to the Docker engine. When the engine receives the context package, it unpacks it and obtains all the files needed to build the image.

Understanding the build context matters for avoiding unnecessary mistakes. For example, some beginners find that COPY /opt/xxxx /app does not work, so they put the Dockerfile in the root directory of the disk and build from there, only to discover that docker build tries to send tens of GB of data, which is extremely slow and often makes the build fail. That is because this asks docker build to package the entire disk, which is clearly a usage error.

Generally, the Dockerfile should be placed in an empty directory or in the project root. If the files it needs are not in that directory, copy them in. If the directory contains things you really don't want passed to the Docker engine at build time, you can write a .dockerignore file, with the same syntax as .gitignore, to exclude them from the context.
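A minimal .dockerignore might look like this (the entries are illustrative; the syntax mirrors .gitignore):

```
.git
node_modules
*.log
tmp/
```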

Detailed explanation of Dockerfile instructions

  1. COPY Copy files: COPY <source path>… <destination path>


    1. The COPY instruction copies files/directories from <source path> in the build context into the image's new layer at <target path>. For example:

      COPY package.json /usr/src/app/
      
    2. <source path> can be multiple, or even a wildcard

    3. <target path> can be an absolute path inside the container, or a path relative to the working directory (which can be
      specified with the WORKDIR instruction)

    4. The COPY instruction preserves the source files' metadata, such as read/write/execute permissions and file modification times. This is useful when customizing images, especially when build-related files are managed with Git.

  2. ADD, a more advanced file copy: its format and behavior are basically the same as COPY, but it adds some functionality on top of COPY

    1. For example, <source path> can be a URL. In this case, the Docker engine will try to download the linked file and put it in the <target path>.

    2. If <source path> is a tar compressed file and the compression format is gzip, bzip2 and xz, the ADD command will automatically decompress the compressed file to <destination path> .

    3. Therefore, when choosing between the COPY and ADD instructions, you can follow this principle. Use the COPY instruction for all file copies, and use ADD only when automatic decompression is required.
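A Dockerfile fragment contrasting the two instructions, following the principle above (the file paths are illustrative):

```dockerfile
FROM nginx
# COPY: a plain file copy; source metadata is preserved
COPY index.html /usr/share/nginx/html/
# ADD: a local gzip tarball is automatically unpacked into the target directory
ADD site-assets.tar.gz /usr/share/nginx/html/assets/
```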

  3. CMD container startup command:

    shell format: CMD <command>;

    exec format: CMD ["executable file", "parameter 1", "parameter 2"...]

    As mentioned when introducing containers, Docker is not a virtual machine; containers are processes. Since a container is a process, the program and arguments to run must be specified when the container starts. The CMD instruction specifies the default startup command for the container's main process.

    At run time, a new command can be specified to replace the default one set in the image. For example, the default CMD of the ubuntu image is /bin/bash, so docker run -it ubuntu drops you straight into bash. We can also specify a different command at runtime, such as docker run -it ubuntu cat /etc/os-release, which replaces the default /bin/bash with cat /etc/os-release and prints the system version information.

  4. ENTRYPOINT entry point

    The purpose of ENTRYPOINT is the same as CMD, which is to specify the container startup program and parameters. ENTRYPOINT can also be replaced at runtime, but it is slightly more cumbersome than CMD and needs to be specified through the parameter --entrypoint of docker run.

    When ENTRYPOINT is specified, the meaning of CMD changes: instead of being run directly, the content of CMD is passed to ENTRYPOINT as arguments. In other words, what actually executes becomes: <ENTRYPOINT> "<CMD>".

    After having CMD, why do we need ENTRYPOINT?

    1. Let the image be used like a command

    2. Preparations before running the application
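A sketch of the first use case, letting the image behave like a command; the curl example is a common illustration rather than something from this article, and the URL is a placeholder:

```dockerfile
FROM ubuntu:16.04
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*
# The fixed part of the "command" goes in ENTRYPOINT ...
ENTRYPOINT ["curl", "-s"]
# ... and CMD supplies a default argument, which docker run can override:
#   docker run mycurl https://example.com/other-page
CMD ["https://example.com"]
```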

  5. ENV sets environment variables. There are two formats:

    ENV <key> <value>
    ENV <key1>=<value1> <key2>=<value2>...

    This instruction simply sets environment variables. Once defined, a variable can be used by all subsequent instructions, such as RUN, as well as by applications at runtime.
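For example (a sketch; the variable names are illustrative):

```dockerfile
FROM ubuntu:16.04
ENV APP_VERSION=1.0 APP_HOME=/opt/app
# Later instructions can reference the variables directly:
RUN mkdir -p $APP_HOME && echo "version $APP_VERSION" > $APP_HOME/VERSION
```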

  6. ARG build parameters: Format: ARG <parameter name>[=<default value>]

    ARG has the same effect as ENV: it sets an environment variable. The difference is that environment variables set by ARG exist only in the build environment and will not be present when the container runs.

    The ARG directive in the Dockerfile defines parameter names and defines their default values. This default value can be overridden with --build-arg <argument name>=<value> in the build command docker build.
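A sketch of ARG with a default value (the parameter name is illustrative):

```dockerfile
FROM ubuntu:16.04
ARG BUILD_ENV=production
# BUILD_ENV is visible during the build only, not in the running container
RUN echo "building for $BUILD_ENV" > /build-info.txt
```

Building with `docker build --build-arg BUILD_ENV=staging -t myapp .` would override the default.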

  7. VOLUME defines an anonymous volume

    The format is:
    VOLUME ["<path1>", "<path2>"...]
    VOLUME <path>

    We said before that the container storage layer should be kept free of write operations as much as possible while the container runs. Database applications that need to save dynamic data should keep their database files in volumes. The concept of Docker volumes is introduced further in later chapters.

    To prevent users from forgetting to mount directories holding dynamic files as volumes at runtime, the Dockerfile can declare certain directories as anonymous volumes in advance. That way, even if the user specifies no mount at runtime, the application can still run normally without writing large amounts of data into the container storage layer.

    VOLUME /data
    

    The /data directory here will be automatically mounted as an anonymous volume at runtime, and any information written to /data will not be recorded in the container storage layer, thus ensuring the statelessness of the container storage layer.
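At runtime the anonymous volume can be replaced with a named mount (a sketch; myimage is a placeholder for an image declaring VOLUME /data):

```shell
# Override the anonymous /data volume with a named volume
docker run -d -v mydata:/data myimage
```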

  8. EXPOSE declare port

    The format is: EXPOSE <port1> [<port2>…]

    The EXPOSE instruction declares a port on which the running container provides a service. It is only a declaration: the application will not open that port at runtime merely because of it.

    Writing such a declaration in the Dockerfile has two benefits:

    1. One is to help users of the image understand which port the image's service listens on, making port mapping easier to configure;

    2. Another use is when using random port mapping at runtime, that is, when docker run -P, the EXPOSE port will be automatically and randomly mapped.
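The second use can be sketched as follows (requires a Docker daemon; the host port is chosen randomly):

```shell
# -P maps every EXPOSEd port to a random high host port
docker run -d -P --name web nginx
# Show which host port was chosen for container port 80
docker port web 80
```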

  9. WORKDIR specifies the working directory

    The format is: WORKDIR <working directory path>

    You can use the WORKDIR command to specify the working directory (or current directory). In the future, the current directory of each layer will be changed to the specified directory. If the directory does not exist, WORKDIR will help you create the directory.

    As mentioned before, a common beginner mistake is to write the Dockerfile as if it were a shell script. That misunderstanding
    can also lead to errors like the following:

    RUN cd /app
    RUN echo "hello" > world.txt
    

    If you build an image from this Dockerfile, you will find that /app/world.txt does not exist, or that its content is not hello. The reason is simple: in a shell, two consecutive lines run in the same process, so memory state changed by the first command directly affects the second; in a Dockerfile, the two RUN instructions execute in fundamentally different environments, in two completely different containers. This is a mistake caused by not understanding that a Dockerfile builds layered storage.

    As mentioned before, each RUN starts a container, executes the command, and then commits the storage layer file changes.
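    There are two straightforward fixes for the example above, both sketched here: either chain the commands inside a single RUN so they share one shell session, or use WORKDIR so that all later layers start from /app:

    RUN cd /app && echo "hello" > world.txt

    WORKDIR /app
    RUN echo "hello" > world.txt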

Operating Docker containers

Simply put, a container is one or more independently running applications together with their runtime environment. By contrast, a virtual machine can be understood as a complete simulated operating system (providing a runtime environment and other system facilities) plus the applications running on it.

Start container

There are two ways to start a container: create a new container from an image and start it, or restart a container that is in the terminated (stopped) state.

Create new and start

The main command required is docker run.

For example, the following command outputs a "Hello Docker" and then terminates the container:

[root@localhost snow]# docker run ubuntu:16.04 /bin/echo 'Hello Docker'
Hello Docker

The following command starts a bash terminal, allowing user interaction:

[root@localhost snow]# docker run -t -i ubuntu:16.04 /bin/bash
root@d80afe7ea29f:/# ps
   PID TTY          TIME CMD
     1 pts/0    00:00:00 bash
    10 pts/0    00:00:00 ps
root@d80afe7ea29f:/# exit
exit

Here, the -t option tells Docker to allocate a pseudo-terminal (pseudo-tty) and bind it to the container's standard input, while -i keeps the container's standard input open.

When using docker run to create a container, the standard operations Docker runs in the background include:

  1. Check whether the specified image exists locally; if not, download it from the public registry.
  2. Create and start a container using an image
  3. Allocate a file system and mount a read-write layer outside the read-only image layer
  4. Bridge a virtual interface to the container from the bridge interface configured on the host
  5. Configure an IP address from the address pool to the container
  6. Execute user-specified application
  7. After execution, the container is terminated

Start a terminated container

You can use the docker container start command to directly start a terminated container.

The core of a container is the application being executed, and the resources it needs are only those necessary for that application to run; beyond that, there are no other resources. You can use ps or top in a pseudo-terminal to view process information.

Background process

More often, you need Docker to run a container in the background rather than print the command's output to the current host. This is achieved by adding the -d parameter.

After starting with the -d parameter, Docker returns the container's unique ID. You can also view container information with the docker container ls command.

[root@localhost snow]# docker container ls
CONTAINER ID   IMAGE       COMMAND                  CREATED      STATUS      PORTS                               NAMES
e015f7987776   nginx:2.0   "/docker-entrypoint.…"   6 days ago   Up 6 days   0.0.0.0:81->80/tcp, :::81->80/tcp   web2.0
054d1cdcf5cf   nginx       "/docker-entrypoint.…"   6 days ago   Up 6 days   0.0.0.0:80->80/tcp, :::80->80/tcp   web1.0

To obtain the output information of the container, you can use the docker container logs command.
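Connecting -d, docker container ls, and docker container logs, a quick sketch (the container name mynginx is made up for this example):

docker run -d --name mynginx nginx
docker container ls
docker container logs mynginx

The last command prints whatever the containerized application wrote to standard output and standard error.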

Terminate container

You can use docker container stop to terminate a running container.

In addition, when the application specified in a Docker container terminates, the container terminates automatically. For example, for the container in the previous section that only started one terminal, the container is terminated immediately once the user exits the terminal with the exit command or Ctrl+d.

The terminated container can be seen using the docker container ls -a command. For example

[root@localhost snow]# docker container ls -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                        PORTS                               NAMES
d80afe7ea29f   ubuntu:16.04   "/bin/bash"              9 minutes ago    Exited (0) 9 minutes ago                                          bold_saha
626c2066188c   ubuntu:16.04   "/bin/bash"              10 minutes ago   Exited (127) 10 minutes ago                                       jolly_shockley
791d1f4d5eca   ubuntu:16.04   "/bin/bash"              10 minutes ago   Exited (127) 10 minutes ago                                       boring_margulis
1280e87e1cae   ubuntu:16.04   "/bin/echo 'Hello Do…"   11 minutes ago   Exited (0) 11 minutes ago                                         wonderful_saha

A container in a terminated state can be restarted through the docker container start command.
In addition, the docker container restart command will terminate a running container and then restart it.
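Putting the lifecycle commands together, assuming a container named mynginx:

docker container stop mynginx
docker container ls -a
docker container start mynginx
docker container restart mynginx

After stop, the container appears as Exited in docker container ls -a; start brings it back up, and restart performs a stop plus a start in one step.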

Enter the container

When using the -d parameter, the container will enter the background after starting.

Sometimes you need to enter a container to perform operations, using either the docker attach command or the docker exec command. docker exec is recommended, for the reasons explained below.

  1. attach command

    docker attach is a command that comes with Docker. Here's an example of how to use this command:

    [root@localhost snow]# docker container ls
    CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                               NAMES
    5f6884f826e4   ubuntu      "/bin/bash"              23 seconds ago   Up 22 seconds                                       practical_payne
    e015f7987776   nginx:2.0   "/docker-entrypoint.…"   6 days ago       Up 6 days       0.0.0.0:81->80/tcp, :::81->80/tcp   web2.0
    054d1cdcf5cf   nginx       "/docker-entrypoint.…"   6 days ago       Up 6 days       0.0.0.0:80->80/tcp, :::80->80/tcp   web1.0
    [root@localhost snow]# docker attach 5f6
    root@5f6884f826e4:/# exit
    exit
    [root@localhost snow]# docker container ls
    CONTAINER ID   IMAGE       COMMAND                  CREATED      STATUS      PORTS                               NAMES
    e015f7987776   nginx:2.0   "/docker-entrypoint.…"   6 days ago   Up 6 days   0.0.0.0:81->80/tcp, :::81->80/tcp   web2.0
    054d1cdcf5cf   nginx       "/docker-entrypoint.…"   6 days ago   Up 6 days   0.0.0.0:80->80/tcp, :::80->80/tcp   web1.0
    

    Note: Exiting from this stdin will cause the container to stop.

  2. exec command

    Docker exec can be followed by multiple parameters. Here we mainly explain the -i -t parameters.

    When only the -i parameter is used, since no pseudo terminal is allocated, the interface does not have the familiar Linux command prompt, but the command execution
    results can still be returned.

    When the -i -t parameters are used together, you can see the familiar Linux command prompt.

    [root@localhost snow]# docker run -dit ubuntu
    3a7731e1a8a290b59263e9c73695f0a52e57e009505b4652be57c3fbe88a147e
    [root@localhost snow]# docker container ls
    CONTAINER ID   IMAGE       COMMAND                  CREATED         STATUS         PORTS                               NAMES
    3a7731e1a8a2   ubuntu      "/bin/bash"              3 seconds ago   Up 2 seconds                                       hardcore_ptolemy
    e015f7987776   nginx:2.0   "/docker-entrypoint.…"   6 days ago      Up 6 days      0.0.0.0:81->80/tcp, :::81->80/tcp   web2.0
    054d1cdcf5cf   nginx       "/docker-entrypoint.…"   6 days ago      Up 6 days      0.0.0.0:80->80/tcp, :::80->80/tcp   web1.0
    [root@localhost snow]# docker exec -it 3a77 bash
    root@3a7731e1a8a2:/# exit
    exit
    [root@localhost snow]# docker container ls
    CONTAINER ID   IMAGE       COMMAND                  CREATED              STATUS              PORTS                               NAMES
    3a7731e1a8a2   ubuntu      "/bin/bash"              About a minute ago   Up About a minute                                       hardcore_ptolemy
    e015f7987776   nginx:2.0   "/docker-entrypoint.…"   6 days ago           Up 6 days           0.0.0.0:81->80/tcp, :::81->80/tcp   web2.0
    054d1cdcf5cf   nginx       "/docker-entrypoint.…"   6 days ago           Up 6 days           0.0.0.0:80->80/tcp, :::80->80/tcp   web1.0
    

    Exiting from this stdin will not cause the container to stop. This is why it is recommended that everyone use docker exec.

    For more parameter descriptions, run docker exec --help.

Export and import containers

Export container:

If you want to export a local container, you can use the docker export command. (This will export the container snapshot to a local file)

[root@localhost snow]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                        PORTS                               NAMES
3a7731e1a8a2   ubuntu         "/bin/bash"              4 minutes ago    Up 4 minutes                                                      hardcore_ptolemy
[root@localhost snow]# docker export 3a77 > ubuntuBash.tar

Import container:

You can use docker import to import from the container snapshot file into an image:

[root@localhost snow]# cat ubuntuBash.tar | docker import - test/ubuntu:test2.0
sha256:1f2b949f67cade289f240a22612630363380d95b43d63bc60e032bbff69bc373
[root@localhost snow]# docker image ls
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
test/ubuntu   test2.0   1f2b949f67ca   12 seconds ago   77.8MB
nginx         3.0       e899e0dd12df   3 hours ago      143MB

In addition, you can also import by specifying a URL or a directory.

Note: users can either use docker load to import an image storage file into the local image library, or use docker import to import a container snapshot into the local image library. The difference between the two is that the container snapshot file discards all history and metadata (that is, it only saves the container's state at that moment), while the image storage file keeps the complete history and is therefore larger.

In addition, metadata information such as tags can be respecified when importing from a container snapshot file.
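For example, the tag can be re-specified like this when importing the snapshot created earlier (the tag test3.0 is illustrative):

docker import ubuntuBash.tar test/ubuntu:test3.0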

Delete container

You can use docker container rm to delete a container that is in a terminated state. For example

[root@localhost snow]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED             STATUS                           PORTS                               NAMES
3a7731e1a8a2   ubuntu         "/bin/bash"              15 minutes ago      Up 15 minutes                                                        hardcore_ptolemy
5f6884f826e4   ubuntu         "/bin/bash"              19 minutes ago      Exited (0) 18 minutes ago                                            practical_payne
d80afe7ea29f   ubuntu:16.04   "/bin/bash"              About an hour ago   Exited (0) About an hour ago                                         bold_saha
[root@localhost snow]# docker container rm 5f68
5f68

If you want to delete a running container, add the -f parameter; Docker will then send a SIGKILL signal to the container.

Clean up all terminated containers

Use the docker container ls -a command to view all created containers, including those in the terminated state. If there are many, deleting them one by one is tedious; the following command cleans up all containers in the terminated state.

$ docker container prune

Accessing repositories

A repository is a place where images are stored centrally.

An easily confused concept is the registry (registration server). The registry is the concrete server that hosts repositories: each registry server can hold multiple repositories, and each repository can contain multiple images. In this sense, a repository can be thought of as a specific project or directory. For example, in the repository address dl.dockerpool.com/ubuntu, dl.dockerpool.com is the registry address and ubuntu is the repository name.

Docker Hub

Docker officially maintains a public repository, Docker Hub, which already contains more than 15,000 images. Most requirements can be achieved by downloading the image directly from Docker Hub.

  1. Registration: Register a free Docker account at https://cloud.docker.com

  2. Login: Execute docker login to log in, execute docker logout to log out.

  3. Pull the image: use the docker search command to find images in the official repository, and use the docker pull
    command to download them locally;

    Image resources can be divided into two categories according to whether they are officially provided.

    1. One kind, such as the centos image, is officially provided and is called a base image or root image
    2. The other kind, such as the tianon/centos image, is created and maintained by Docker users and is usually
      prefixed with the user name.
  4. Push the image: Use the docker push command to push your own image to Docker Hub

  5. Automated builds: very convenient for users who need to upgrade the programs inside an image frequently

    Through Docker Hub, a user can designate a project on a supported site (currently GitHub or Bitbucket) to be tracked. Whenever the project receives a new commit or a new tag is created, Docker Hub automatically builds the image and pushes it to Docker Hub.
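The full round trip described in steps 1 to 4 looks roughly like this; the namespace snow is a hypothetical Docker Hub user name:

docker login
docker search nginx
docker pull nginx
docker tag nginx snow/nginx:demo
docker push snow/nginx:demo
docker logout

Note that docker tag re-tags the local image under your own namespace, so that docker push is allowed to upload it.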

Private repositories



Origin blog.csdn.net/weixin_40709965/article/details/131246908