[Reprinted] Probably the clearest article about the concept of Docker

[Reprinted] Original link: https://mp.weixin.qq.com/s/xSbYTJmLuqsyYEDEIsndZw


This article gives a fairly detailed introduction to the concepts behind Docker; it does not cover installing the Docker environment or Docker's common operations and commands.

Docker is the world's leading software container platform, so to understand the concept of Docker, we must first start with the container.

1. Start by understanding the container

1.1 What is a container?

Let's first look at the more official explanation of a container. To sum it up in one sentence: a container packages software into a standardized unit for development, delivery, and deployment.
  • A container image is a lightweight, standalone, executable software package that contains everything the software needs to run: code, runtime, system tools, system libraries, and settings.
  • Containerized software is available for both Linux- and Windows-based applications, and runs consistently in any environment.
  • Containers give software independence, shielding it from differences in the surrounding environment (for example, between development and staging environments) and thereby helping to reduce conflicts between teams running different software on the same infrastructure.


Now let's look at a more down-to-earth explanation of the container:

If you want an everyday description of a container, I think of it as a place to store things, just as a schoolbag holds all kinds of stationery, a closet holds all kinds of clothes, and a shoe rack holds all kinds of shoes. The difference is that what a container "stores" leans more toward applications: websites, programs, and even entire system environments.
01.png

1.2 Illustrating physical machines, virtual machines, and containers

The comparison between virtual machines and containers is covered in detail later; the pictures below (found online) are simply meant to deepen your understanding of physical machines, virtual machines, and containers.

Physical machine:
02.jpg

Virtual machine:
03.jpg

Container:
04.jpg

From the three abstract diagrams above, we can roughly generalize by analogy: containers virtualize the operating system rather than the hardware, and containers share the same set of operating system resources. Virtual machine technology virtualizes a set of hardware and runs a complete operating system on top of it. As a result, the isolation level of containers is somewhat lower.

I believe the explanation above has given everyone a preliminary understanding of the at once familiar and unfamiliar concept of a container. Now let's talk about some concepts of Docker.


2. Some concepts of Docker

05.png

2.1 What is Docker

To be honest, what Docker is isn't that easy to pin down in a single sentence, so let me explain it in four points.
  • Docker is the world's leading software container platform.
  • Docker is developed and implemented in Go, a language released by Google. It builds on Linux kernel technologies such as cgroups, namespaces, and UnionFS variants like AUFS to encapsulate and isolate processes, making it a form of operating-system-level virtualization (a small sketch follows this list). Because the isolated processes are independent of the host and of each other, they are called containers. Docker was originally implemented on top of LXC.
  • Docker can automate repetitive tasks, such as setting up and configuring development environments, freeing developers to focus on what really matters: building great software.
  • Users can easily create and use containers and put their own applications into them. Containers can also be versioned, copied, shared, and modified, just like ordinary code.

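To make the "operating-system-level virtualization" point above a little more concrete, here is a minimal sketch (assuming Docker is already installed and using the small alpine image from Docker Hub). A container reuses the host's kernel but runs inside its own namespaces, so it only sees its own processes:

```bash
# On the host, ps shows every process on the machine
ps -e | wc -l

# Inside a container, the PID namespace hides the host's processes:
# ps typically shows little more than PID 1 (here, the ps command itself)
docker run --rm alpine ps
```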

06.jpg

2.2 The ideas behind Docker

  • Container
  • Standardization: ① transport method, ② storage method, ③ API interface
  • Isolation


2.3 Features of Docker containers

  • Lightweight: multiple Docker containers running on the same machine share that machine's operating system kernel; they start quickly and use very little compute and memory. Images are built from filesystem layers and share common files, which minimizes disk usage and speeds up image downloads.
  • Standard: Docker containers are based on open standards and run on all mainstream Linux distributions, on Microsoft Windows, and on any infrastructure, including VMs, bare-metal servers, and the cloud.
  • Secure: Docker isolates applications not only from each other but also from the underlying infrastructure. Docker provides strong isolation by default, so a problem in an application stays confined to its single container and does not bring down the entire machine.


2.4 Why use Docker

  • A Docker image provides a complete runtime environment (everything except the kernel), which guarantees a consistent application runtime environment, so there are no more "but it works on my machine" problems - a consistent runtime environment (see the sketch after this list)
  • Containers can start in seconds or even milliseconds, greatly saving time in development, testing, and deployment - faster startup time
  • Avoids the situation on shared servers where your resources are easily affected by other users - isolation
  • Handles sudden bursts of load on a service well - elastic scaling and rapid expansion
  • An application running on one platform can easily be migrated to another without worrying that changes in the runtime environment will break it - easy migration
  • By building custom application images, Docker enables continuous integration, continuous delivery, and continuous deployment - continuous delivery and deployment
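As a tiny illustration of the "consistent runtime environment" point above, a sketch assuming the official python image from Docker Hub: because the tag pins the whole userland, every machine that can run Docker gets exactly the same interpreter.

```bash
# The image tag pins the runtime: this prints the same Python version
# on any machine that runs it, regardless of what is installed on the host
docker run --rm python:3.12 python --version
```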

When talking about containers, we have to compare them with virtual machines.

3. Containers vs. virtual machines

To put it simply: containers and virtual machines offer similar benefits in resource isolation and allocation, but they work differently. Because a container virtualizes the operating system rather than the hardware, it is more portable and more efficient.

3.1 Comparison of the two

Traditional virtual machine technology virtualizes a set of hardware, runs a complete operating system on it, and then runs the required application processes on that system. In contrast, the application processes in a container run directly on the host's kernel: the container has no kernel of its own, and there is no hardware virtualization. Containers are therefore more lightweight and portable than traditional virtual machines.
07.png
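A quick way to see this kernel sharing for yourself, as a sketch (assuming a Linux host with Docker and the alpine image): both commands report the same kernel version, because the container never boots a kernel of its own.

```bash
# Kernel version on the host
uname -r

# Kernel version inside a container: the same, because the container
# runs directly on the host kernel instead of booting its own
docker run --rm alpine uname -r
```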

3.2 Summary of Containers and Virtual Machines (VM)

08.png

  • A container is an application-level abstraction that packages code and its dependencies together. Multiple containers can run on the same machine, sharing the operating system kernel, with each running as an isolated process in user space. Compared with a virtual machine, a container takes up less space (container images are typically only tens of megabytes) and starts almost instantly.
  • A virtual machine (VM) is an abstraction of the physical hardware layer that turns one server into many. A hypervisor allows multiple VMs to run on one machine. Each VM contains a complete operating system, one or more applications, and the necessary binaries and libraries, so it takes up a lot of space and is also slow to start.


The Docker website gives us plenty of Docker's advantages, but there is no need to dismiss virtual machine technology entirely, because the two have different use cases. Virtual machines are better at completely isolating an entire runtime environment; for example, cloud service providers usually use virtual machine technology to isolate different users from each other. Docker, on the other hand, is usually used to isolate different applications, such as the front end, the back end, and the database.

3.3 Containers and virtual machines (VMs) can coexist

As far as I am concerned, it is not a question of one replacing the other; the two can coexist in harmony.
09.png

Docker has three very important basic concepts. Once you understand these three concepts, you can understand Docker's entire life cycle.


4. The basic concepts of Docker

Docker includes three basic concepts:
  • Image
  • Container
  • Repository


10.jpg

4.1 Image: a special file system

An operating system is divided into a kernel and user space. On Linux, after the kernel boots, the root filesystem is mounted to provide user-space support. A Docker image (Image) is, in essence, such a root filesystem.

A Docker image is a special filesystem. Besides providing the programs, libraries, resources, configuration files, and other files the container needs at runtime, it also contains some configuration parameters prepared for runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its content does not change after it is built.
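These baked-in runtime parameters and layers can be seen on any image. A small sketch, using the official nginx image purely as an example:

```bash
docker pull nginx

# Environment variables and default command baked into the image
docker image inspect nginx --format '{{json .Config.Env}}'
docker image inspect nginx --format '{{json .Config.Cmd}}'

# The read-only layers that make up the image's filesystem
docker image inspect nginx --format '{{json .RootFS.Layers}}'
```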

Docker's design makes full use of UnionFS technology, adopting a layered storage architecture: an image is actually composed of multiple filesystem layers combined together.

An image is built layer by layer, with each layer serving as the foundation for the next. Once a layer is built, it never changes; any change in a later layer happens only in that later layer. For example, deleting a file from a previous layer does not actually remove it from that layer; it only marks the file as deleted in the current layer. When the final container runs, the file is no longer visible, but it still travels with the image. Therefore, be extra careful when building an image: each layer should contain only what that layer needs to add, and anything extra should be cleaned up before the layer's build step finishes.
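The "deleting only marks the file as deleted" behavior is easy to demonstrate. A minimal sketch (the image name layer-demo and the 50 MB dummy file are made up for illustration):

```bash
cat > Dockerfile <<'EOF'
FROM alpine
# Layer 1: adds a 50 MB file to this layer
RUN dd if=/dev/zero of=/big.bin bs=1M count=50
# Layer 2: only marks the file as deleted; layer 1 still ships with the image
RUN rm /big.bin
EOF

docker build -t layer-demo .

# The file is invisible at runtime, yet the image is still roughly 50 MB
# larger than plain alpine, because the first layer is still there
docker history layer-demo
docker images layer-demo
```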

Layered storage also makes it easier to reuse and customize images. You can even use a previously built image as a base layer and add new layers on top to build a new, customized image.

4.2 Container: the runtime entity of an image

The relationship between an image (Image) and a container (Container) is like that between a class and an instance in object-oriented programming: the image is the static definition, and the container is the runtime entity of the image. Containers can be created, started, stopped, deleted, paused, and so on.
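The class/instance analogy in command form, as a sketch (assuming the official nginx image; the names web1 and web2 are arbitrary):

```bash
# One image (the "class"), two containers (the "instances")
docker run -d --name web1 nginx
docker run -d --name web2 nginx

docker ps               # both containers run independently of each other

# Stop and delete the instances; the image itself is untouched
docker stop web1 web2
docker rm web1 web2
```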

A container is, in essence, a process, but unlike a process executing directly on the host, a container process runs inside its own isolated namespaces. As mentioned earlier, images use layered storage, and so do containers.

The container's storage layer has the same life cycle as the container itself: when the container dies, its storage layer dies with it, so any information stored in the container's storage layer is lost when the container is deleted.

Docker best practice requires that a container not write any data into its own storage layer; the container storage layer should remain stateless. All file writes should go to data volumes (Volumes) or bind-mounted host directories. Reads and writes at these locations bypass the container storage layer and go directly to the host (or to network storage), which is also more stable. A data volume's life cycle is independent of the container: the container dies, but the data volume does not. So once data volumes are in use, a container can be deleted and re-run at will without losing data.
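A minimal sketch of that best practice (the volume name appdata and the file path are made up for illustration): data written to a volume survives the containers that wrote it.

```bash
docker volume create appdata

# A throwaway container writes into the volume...
docker run --rm -v appdata:/data alpine sh -c 'echo hello > /data/msg.txt'

# ...and a brand-new container can still read the data afterwards
docker run --rm -v appdata:/data alpine cat /data/msg.txt
```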

4.3 Repository: a place where image files are stored centrally

Once an image is built, it can easily run on the current host. However, if we need to use the image on other servers, we need a centralized service for storing and distributing images, and Docker Registry is exactly that service.

A Docker Registry can contain multiple repositories; each repository can contain multiple tags; and each tag corresponds to one image. In other words, an image repository is where Docker stores image files centrally, much like the code repositories we already use.

Usually a repository contains images for different versions of the same software, and tags are used to identify each version. Using the format <repository name>:<tag>, we can specify exactly which version of the software an image refers to. If no tag is given, latest is used as the default.
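For example, a sketch using the official ubuntu repository:

```bash
# <repository>:<tag> selects a specific version of the software
docker pull ubuntu:22.04

# With no tag given, :latest is implied
docker pull ubuntu        # same as: docker pull ubuntu:latest

docker images ubuntu      # both tags belong to the same repository
```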

As a supplement, here are the concepts of public Docker Registry services and private Docker Registries:

A public Docker Registry service is a Registry that is open to users and lets them manage images. Such public services generally allow users to upload and download public images for free, and may offer paid plans for managing private images.

The most commonly used public Registry is the official Docker Hub, which is also the default Registry and hosts a large number of high-quality official images. The website is hub.docker.com. Access to Docker Hub from within China can be slow, and several Chinese cloud providers offer public services similar to Docker Hub.

Besides using public services, users can also run a private Docker Registry locally. Docker officially provides a Docker Registry image that can be used directly as a private Registry service. The open-source Docker Registry image implements only the server side of the Docker Registry API, which is enough to support the Docker commands without affecting normal use, but it includes no graphical interface and no advanced features such as image maintenance, user management, or access control.
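A minimal private-Registry sketch using that official image (the port, container name, and image tags below are just examples):

```bash
# Start a private Registry on port 5000 of the local machine
docker run -d -p 5000:5000 --name my-registry registry:2

# Re-tag a local image so it points at the private Registry, then push it
docker tag ubuntu:22.04 localhost:5000/ubuntu:22.04
docker push localhost:5000/ubuntu:22.04

# The image can now be pulled back from the private Registry
docker pull localhost:5000/ubuntu:22.04
```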

That basically covers the concepts of Docker. Finally, let's talk about Build, Ship, and Run.

5. Finally: Build, Ship, and Run

If you visit the official Docker website, you will find the slogan "Docker - Build, Ship, and Run Any App, Anywhere". So what exactly do Build, Ship, and Run do?
11.png

  • Build (build the image): the image is like a shipping container, packing up the files, the runtime environment, and other resources.
  • Ship (ship the image): images are transported between hosts and repositories; the repository here is like a giant container terminal.
  • Run (run the image): a running image is a container, and the container is where the program runs.


The Docker workflow, then, is to pull an image from a repository to the local machine and then turn that image into a running container with a single command. This is also why Docker is often described as a dock worker or porter, which is exactly what its name means.
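Put together, the three steps look roughly like this as a sketch (the image name myuser/myapp:1.0 is hypothetical, and a Dockerfile is assumed to exist in the current directory):

```bash
# Build: package the application into an image
docker build -t myuser/myapp:1.0 .

# Ship: push the image to a registry (Docker Hub by default)
docker push myuser/myapp:1.0

# Run: on any machine, pull the image and run it as a container
docker pull myuser/myapp:1.0
docker run -d myuser/myapp:1.0
```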

6. Summary

This article mainly explains some common concepts in Docker; it does not cover installing Docker, using images, or operating containers. I hope readers will master those parts by reading books and the official documentation.

