[Docker Simple Notes] Sharing Docker Basics

Docker is being used more and more frequently, so I will share notes on Docker installation and usage in a series of follow-up posts.

 

1. Introduction to Docker

"Docker was originally an internal project initiated by Solomon Hykes, founder of dotCloud, during his time in France. It is an innovation based on dotCloud's cloud service technology for many years, and was open sourced under the Apache 2.0 license in March 2013. The main project code is in Maintained on GitHub. The Docker project later joined the Linux Foundation and formed the Consortium for Open Containers (OCI).”

"Docker uses the Go language launched by Google for development and implementation. Based on the Linux kernel's cgroup, namespace, and AUFS-like Union FS and other technologies, it encapsulates and isolates processes, which is a virtualization technology at the operating system level."

"Docker further encapsulates the container, from file system, network interconnection to process isolation, etc., which greatly simplifies the creation and maintenance of containers. It makes Docker technology lighter and faster than virtual machine technology. "

 

2. Why use Docker?

  1. More efficient use of system resources 

  2. Faster startup time

  3. Consistent operating environment

  4. Continuous Delivery and Deployment

  5. Easier Migration

  6. Easier maintenance and extension

 

2.1 "Because containers do not require additional overhead such as hardware virtualization and running a complete operating system, Docker utilizes system resources more efficiently. Whether it is application execution speed, memory consumption or file storage speed, it is more efficient than traditional virtual machine technology. Therefore, compared to virtual machine technology, a host with the same configuration can often run a larger number of applications."

2.2 "Traditional virtual machine technology often takes several minutes to start application services, while Docker container applications, because they run directly on the host kernel and do not need to start a complete operating system, can achieve seconds or even millisecond startup time. Save time in development, testing, and deployment.”

2.3 "A common problem in the development process is the environment consistency problem. Due to the inconsistency between the development environment, the test environment, and the production environment, some bugs are not found during the development process. The Docker image provides a complete operation except the kernel. The time environment ensures the consistency of the application running environment, so that there will be no problems such as "this code is fine on my machine". "

2.4 "What DevOps practitioners want most is to create or configure once and run anywhere.

Using Docker, continuous integration, continuous delivery, and deployment can be achieved by customizing application images. Developers can build images with a Dockerfile and use a Continuous Integration (CI) system for integration testing, while operations staff can deploy the image directly and quickly in production, even combining it with a Continuous Delivery/Deployment (CD) system for automated deployment.

Moreover, using a Dockerfile makes image construction transparent: not only can the development team understand the application's runtime environment, but the operations team can also see the conditions the application needs to run, which helps deploy the image in a better production environment."
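
As a minimal sketch of this workflow (the image name myapp, the file app.py, and the registry address are all hypothetical), a Dockerfile describes the build transparently:

```dockerfile
# A minimal, hypothetical Python web service image
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

CMD ["python", "app.py"]
```

A CI job could then build and publish it with commands like:

```bash
docker build -t myapp:1.0 .                           # build from the Dockerfile
docker tag myapp:1.0 registry.example.com/myapp:1.0   # tag for a (hypothetical) registry
docker push registry.example.com/myapp:1.0            # push so production can pull it
```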

2.5 "Because Docker ensures the consistency of the execution environment, it makes the migration of applications easier. Docker can run on many platforms, whether it is a physical machine, a virtual machine, a public cloud, a private cloud, or even a laptop, and its running results are consistent Therefore, users can easily migrate applications running on one platform to another platform without worrying about the situation that the application cannot run normally due to changes in the operating environment.”

2.6 "The layered storage and image technology used by Docker makes it easier to reuse the repeated parts of the application, and it also makes the maintenance and update of the application easier, and it is also very simple to further expand the image based on the basic image. In addition, the Docker team also Each open source project team maintains a large number of high-quality official images, which can be used directly in the production environment or further customized as a basis, which greatly reduces the cost of image production for application services.

 

3. Basic Concepts of Docker  

There are three main concepts in Docker: "Image", "Container", and "Repository".

 

3.1 Image

"We all know that the operating system is divided into kernel and user space. For Linux, after the kernel is started, the root file system will be mounted to provide user space support for it. The Docker image (Image) is equivalent to a root file system. For example, the official image ubuntu:16.04 contains a complete set of root file systems of the Ubuntu 16.04 minimal system.

A Docker image is a special file system. Besides providing the programs, libraries, resources, and configuration files a container needs at runtime, it also contains some configuration parameters prepared for runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its contents are not changed after it is built."
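
For example (assuming network access to Docker Hub), you can pull the image mentioned above and see what is stored locally:

```bash
docker pull ubuntu:16.04   # download the image: a read-only root file system plus metadata
docker image ls            # list local images with repository, tag, and size
```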

Layered storage

Because an image contains a complete root file system of an operating system, it is often large. Docker therefore makes full use of Union FS technology and designs images as a layered storage architecture. Strictly speaking, an image is not a packaged file like an ISO; an image is a virtual concept whose actual embodiment is not a single file but a group of file systems, or rather, a combination of multiple layers of file systems.

An image is built layer by layer, with each layer building on the one before it. Once built, a layer never changes; any change in a later layer happens only in that layer. For example, deleting a file from a previous layer does not actually remove it from that layer; it only marks the file as deleted in the current layer. The file will be invisible in the running container, but it still travels with the image. Extra care is therefore needed when building images: each layer should contain only what needs to be added at that layer, and anything extra should be cleaned up before the layer's construction ends.
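
A practical consequence, sketched below, is that cleanup must happen in the same layer as the operation that created the files; a later RUN that deletes them only hides them:

```dockerfile
FROM ubuntu:16.04

# BAD: the apt cache created by a first RUN would be baked into that layer;
# a second RUN deleting it would only mark the files as deleted in a new
# layer, so the image would stay just as large:
#   RUN apt-get update && apt-get install -y curl
#   RUN rm -rf /var/lib/apt/lists/*

# GOOD: install and clean up within a single layer, so the cache
# never persists in any layer of the image.
RUN apt-get update && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*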

Layered storage also makes reusing and customizing images easier. You can even use a previously built image as a base layer and add new layers on top to customize what you need and build a new image.

 

3.2 Container 

The relationship between an image and a container is like that between a class and an instance in object-oriented programming: an image is a static definition, and a container is a running instance of that image. Containers can be created, started, stopped, deleted, paused, and so on.
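
In terms of commands (assuming the official nginx image is available), these lifecycle operations map directly onto the CLI:

```bash
docker create --name web nginx   # create a container from the image (not yet running)
docker start web                 # start it
docker pause web                 # suspend its processes
docker unpause web               # resume them
docker stop web                  # stop it
docker rm web                    # delete it, along with its container storage layer
```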

The essence of a container is a process, but unlike a process executed directly on the host, a container process runs in its own independent namespaces. A container can therefore have its own root file system, its own network configuration, its own process space, and even its own user ID space. The processes in a container run in an isolated environment, as if they were operating in a system separate from the host. This property makes applications packaged in containers safer than those running directly on the host, and it is also why newcomers to Docker often confuse containers with virtual machines.
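
A quick way to see this isolation (assuming the alpine image) is to look at the process table inside a container:

```bash
# The container's command sees itself as PID 1 in its own process namespace,
# even though the host is running many other processes.
docker run --rm alpine ps aux
```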

As mentioned earlier, images use layered storage, and so do containers. When a container runs, it uses its image as the base layers and creates a storage layer of its own on top. This read-write layer, prepared for the container's runtime, can be called the container storage layer.

The container storage layer's life cycle is the same as the container's: when the container is deleted, the container storage layer goes with it, and any information stored in it is lost.

Docker best practices require that containers not write any data to their storage layer; the container storage layer should remain stateless. All file writes should use a data volume (Volume) or a bind-mounted host directory. Reads and writes to these locations skip the container storage layer and go directly to the host (or network storage), giving higher performance and stability. A data volume's life cycle is independent of the container: the volume does not disappear when the container is deleted, so once data volumes are used, data is not lost when the container is removed or re-run.
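
A minimal sketch (the volume name mydata and the path /data are arbitrary):

```bash
# Create a named volume and mount it at /data; writes there go to the
# volume, not to the container storage layer
docker volume create mydata
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/greeting.txt'

# The data outlives the container: a brand-new container sees the same file
docker run --rm -v mydata:/data alpine cat /data/greeting.txt
```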

3.3 Repository

After an image is built, it can easily be run on the current host. But to use the image on other servers, we need a centralized service for storing and distributing images, and Docker Registry is such a service.

A Docker Registry can contain multiple repositories (Repository); each repository can contain multiple tags (Tag); each tag corresponds to an image.

Usually, a repository contains images of different versions of the same software, with a tag for each version. We specify which version of the software we want using the format <repository name>:<tag>. If no tag is given, latest is used as the default.

Take the Ubuntu image as an example: ubuntu is the repository name, and it contains tags for different versions, e.g., 14.04 and 16.04. We can specify which version we want with ubuntu:14.04 or ubuntu:16.04. If the tag is omitted, as in ubuntu, it is treated as ubuntu:latest.
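
In command form:

```bash
docker pull ubuntu:16.04   # explicitly request the image tagged 16.04
docker pull ubuntu         # no tag given, so this is treated as ubuntu:latest
```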

Repository names often appear as two-segment paths, such as jwilder/nginx-proxy, where the first segment is usually a user name in a multi-user Docker Registry and the second is usually the software name. But this is not absolute; it depends on the particular Docker Registry software or service being used.

The images used for testing during the learning phase are stored in public repositories, where users can freely upload and download images. Besides public repositories, users can also build private repositories for storage.
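
As a sketch of a minimal private repository (using the official registry image; the port and image names are just examples):

```bash
# Start a local private registry using the official registry image
docker run -d -p 5000:5000 --name registry registry:2

# Tag an existing image for the local registry, then push and pull it
docker tag ubuntu:16.04 localhost:5000/my-ubuntu
docker push localhost:5000/my-ubuntu
docker pull localhost:5000/my-ubuntu
```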

 

** The concepts above are quoted from "Docker — From Beginner to Practice", which is highly recommended for getting started with Docker.
