Introduction to the basic concepts of Docker

Table of contents

1. Introduction

2. Introduction to virtualization

1. The concept of virtualization

2. The concept of coupling and decoupling

2.1. Coupling

2.2. Coupling in software engineering

2.3. Decoupling

3. Virtualization

4. Working principle of virtualization

5. Two core components

5.1. QEMU

5.2. KVM

6. Virtualization types

3. Docker container

1. The birth of Docker

2. What is Docker?

3. Why iterate from traditional servers to virtualization

4. Docker Features

5. Docker container VS KVM

5.1. KVM

5.2. Docker container

5.3. Differences

6. The three major components of Docker

7. The running logic of the Docker container

8. Docker namespace

9. Control groups (cgroups)

10. The underlying principle of Docker


1. Introduction

A Linux system has one main process that spawns the other processes which control its different services.

2. Introduction to virtualization

       Virtualization enables multiple virtual machines to run on one physical server. The virtual machines share the CPU, memory, and IO hardware resources of the physical machine, but the virtual machines are logically isolated from each other. Virtualization runs multiple logical computers on one computer at the same time. Each logical computer can run a different operating system, and applications can run in independent spaces without affecting each other, thereby significantly improving the work efficiency of the computer.

       Mainstream virtualization technologies currently include the open-source Xen and KVM, VMware's ESXi, and Microsoft's Hyper-V.

1. The concept of virtualization

Virtualization decouples applications from system kernel resources and isolates them at the operating-system level to improve resource utilization.

2. The concept of coupling and decoupling

2.1. Coupling

       Coupling, also called inter-module linkage, is a measure of how strongly the modules in a software system depend on one another. The more closely the modules are connected, the stronger the coupling; the more independent they are, the weaker it is. The degree of coupling depends on the complexity of the interfaces between modules, the way they call each other, and the information they exchange.

2.2. Coupling in software engineering

       In software engineering, the degree of coupling between objects is the degree of dependency between them. The higher the coupling, the higher the maintenance cost, so objects should be designed to minimize the coupling between classes and components.

2.3. Decoupling

Decoupling literally means undoing coupling.

In software engineering, reducing the degree of coupling is called decoupling. Wherever modules depend on one another, there is coupling; absolute zero coupling cannot be achieved in theory, but existing techniques can be used to reduce coupling to a minimum.

3. Virtualization

Virtualization mitigates and resolves resource-utilization problems. By comparison, a physical instance's performance is more stable than a virtualized one's, and its capabilities are stronger.

4. Working principle of virtualization

A software called Hypervisor (virtual machine monitoring program) can effectively separate physical resources and allocate these resources to different virtual environments (that is, tasks that need these resources). A hypervisor may sit on top of an operating system (such as on a laptop), or it may be installed directly on the hardware (such as a server), which is how most businesses use virtualization. The hypervisor takes over the physical resources and divides them up so that the virtual environment can use them.

Resources from the physical environment are partitioned as needed and distributed among many virtual environments. Inside a virtual environment (often referred to as a client or virtual machine), users are able to interact with computing tasks and run computations. A virtual machine runs as a single data file. Like any digital file, a virtual machine can be moved from one computer to another, and it will work the same way when opened on either computer.

While the virtual environment is running, if a user or program issues an instruction requesting more resources from the physical environment, the hypervisor passes the request to the physical system and caches the changes, all at near-native speed (especially if the request goes through KVM, the open-source Kernel-based Virtual Machine hypervisor).
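Whether a given Linux host can play the hypervisor role described above can be checked directly. The sketch below assumes a Linux machine with /proc mounted; the vmx (Intel) and svm (AMD) CPU flags signal hardware virtualization support, and /dev/kvm appears once the kvm kernel module is loaded.

```shell
# Check whether the CPU advertises hardware virtualization support.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    virt_support="yes"
else
    virt_support="no"
fi
echo "hardware virtualization flag present: $virt_support"

# /dev/kvm is the device through which the hypervisor layer hands
# physical resources to virtual machines.
if [ -e /dev/kvm ]; then
    echo "/dev/kvm present: KVM can expose resources to VMs"
else
    echo "/dev/kvm absent (kvm module not loaded or no hardware support)"
fi
```

On most cloud instances the flag is present but /dev/kvm may be absent unless nested virtualization is enabled.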

5. Two core components

5.1、QEMU

QEMU is the I/O control module; it can be thought of as a queue whose core purpose is to call resources from the kernel. Resources that KVM has logically partitioned are delivered to QEMU, and from QEMU to the virtual machine.

5.2、KVM

KVM logically partitions physical resources and abstracts them into virtualized resources, dividing them according to the configuration in the VMM and providing virtualization for applications.

KVM accepts request commands only from QEMU. Sensitive commands issued directly by an application are intercepted and forwarded through an interface to QEMU, which decides whether they need to be executed.

6. Virtualization types

  1. Full virtualization : all physical hardware resources are abstracted through software before being called
  2. Paravirtualization : requires modifying the guest operating system
  3. Passthrough : uses physical hardware resources directly (requires hardware support; not yet mature)

3. Docker container

1. The birth of Docker

①. Docker is an open-source product of dotCloud, a company founded in 2010 that mainly provided services to developers on a PaaS (Platform as a Service) platform

②. Linux Container (LXC) is a kernel virtualization technology that provides lightweight virtualization to isolate processes and resources.

③. Docker is an advanced, LXC-based container engine open-sourced by the PaaS provider dotCloud. Its source code is hosted on GitHub; it is written in Go and released under the Apache 2.0 license.

2. What is Docker?

①. A lightweight "virtual machine"

②. An open-source tool for running applications in Linux containers
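As a minimal illustration of "running an application in a Linux container", the sketch below runs a single command inside a throwaway alpine container. It assumes the docker CLI is installed and degrades to a message otherwise.

```shell
# Run one command in a disposable container; --rm deletes it afterwards.
if command -v docker >/dev/null 2>&1; then
    msg=$(docker run --rm alpine echo "hello from a container" 2>/dev/null) \
        || msg="docker present but daemon not reachable"
else
    msg="docker CLI not installed"
fi
echo "$msg"
```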

3. Why iterate from traditional servers to virtualization

Advantages:

①. Compared with traditional servers, resource utilization can be improved

②. Provides a suitable runtime environment for microservices

③. High isolation (the virtual machines are completely isolated at the operating-system level)

④. High security; not prone to cascading failures ("avalanches")

⑤. More convenient management (controversial)

⑥. Easier elastic scaling of resources

⑦. High initial cost, but "cheaper" than traditional servers over time

Disadvantages:

①. High upfront cost

②. High maintenance difficulty

③. High security requirements for the host

④. Not suitable for applications with extremely high resource requirements

⑤. Sometimes higher running costs

4. Docker Features

  1. Flexible : Even the most complex applications can be containerized.
  2. Lightweight : Containers utilize and share the host kernel.
  3. Interchangeable : Updates and upgrades can be deployed on the fly.
  4. Portable : Can be built locally, deployed to the cloud, and run anywhere.
  5. Scalable : Container replicas can be added and automatically distributed.
  6. Stackable : Services can be stacked vertically and on the fly.

5. Docker container VS KVM

5.1. KVM:

Use the Hypervisor to provide a running platform for virtual machines and manage the running of the operating system in each VM. Each VM must have its own operating system, applications, and necessary dependent files.

5.2. Docker container:

Using the Docker engine for scheduling and isolation improves resource utilization and allows more container instances to run on the same hardware; each container has its own isolated user space.

5.3 Differences

- Startup speed: seconds (container) vs. minutes (VM)
- Running performance: near-native, runs directly on the host kernel (container) vs. roughly 50% loss (VM)
- Disk usage: MB (container) vs. GB (VM)
- Quantity per host: hundreds to thousands (containers) vs. generally dozens (VMs)
- Isolation: process-level (container) vs. system-level, more thorough (VM)
- Operating system: any host that supports Linux (container) vs. almost any (VM)
- Packaging: only the project code and dependencies, sharing the host kernel (container) vs. a complete operating system, isolated from the host (VM)

6. The three major components of Docker

1. Image: a Docker image is a special file system. In addition to the programs, libraries, resources, and configuration files required at container runtime, it also contains configuration parameters prepared for runtime. An image contains no dynamic data, and its contents do not change after it is built. When an image is run (docker run), it becomes a container.

2. Container: a running instance created from an image; Docker uses containers to run applications. Each container is an isolated, secure platform; a container can be thought of as a lightweight Linux runtime environment.

3. Image repository: a place where image files are stored centrally. After building an image, a user can upload it to a public or private repository; to use the image on another host, it only needs to be downloaded from that repository.
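The three components fit together in one workflow: pull an image from a repository, run it as a container, and tag it for upload to a repository of your own. A sketch, assuming the docker CLI; registry.example.com is a hypothetical private registry, not a real endpoint, so the push step is left commented out.

```shell
if command -v docker >/dev/null 2>&1; then
    docker pull alpine:latest                      # image: fetched from a public repository
    docker run -d --name demo alpine sleep 30      # container: a running instance of the image
    docker tag alpine:latest registry.example.com/alpine:latest
    # docker push registry.example.com/alpine:latest   # repository: upload for other hosts
    docker rm -f demo                              # containers are disposable
    workflow="ran"
else
    workflow="skipped (docker not installed)"
fi
echo "workflow: $workflow"
```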

7. The running logic of the Docker container

Docker uses a client/server (C/S) architecture. The Docker daemon acts as the server: it receives requests from Docker clients and is responsible for creating, running, and distributing Docker containers. The daemon generally runs in the background on the Docker host, and users interact with it directly through the Docker client.

①. As shown in the orange flow, executing the docker build command builds an image from a Dockerfile and stores it on the local Docker host.

②. As shown in the blue flow, when the required image is not available locally, the docker pull command pulls it from a cloud image repository to the local Docker host.

③. As shown in the black flow, executing the docker start command loads an existing local image into a container and starts the container.
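The orange build flow can be sketched concretely: a minimal Dockerfile becomes a local image, which docker run then turns into a container. The image name demo-image and the /tmp path are arbitrary choices for this sketch, and the build and run steps assume a reachable Docker daemon.

```shell
# Write a minimal two-line Dockerfile into a build context directory.
mkdir -p /tmp/docker-build-demo
cat > /tmp/docker-build-demo/Dockerfile <<'EOF'
FROM alpine:latest
CMD ["echo", "built from a Dockerfile"]
EOF

# docker build stores the resulting image on the local Docker host;
# docker run creates and starts a container from it.
if command -v docker >/dev/null 2>&1; then
    docker build -t demo-image /tmp/docker-build-demo \
        && docker run --rm demo-image \
        || echo "docker present but daemon not reachable"
fi
echo "Dockerfile written to /tmp/docker-build-demo"
```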

Respective functions:

Docker client: the client used to communicate with the Docker daemon

Docker host: a physical or virtual machine that runs the Docker daemon and containers

Docker daemon: receives and processes requests from Docker clients, listens for Docker API requests, and manages Docker objects such as images, containers, networks, and data volumes
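The client/daemon split is directly visible from the CLI: docker version reports "Client" and "Server" as two separate components with their own versions. A sketch that assumes nothing beyond an optionally installed docker CLI:

```shell
# docker version prints the Client section even when the daemon is down,
# which is itself a demonstration of the C/S split.
if command -v docker >/dev/null 2>&1; then
    client_server=$(docker version 2>/dev/null || echo "daemon not reachable")
else
    client_server="docker CLI not installed"
fi
echo "$client_server"
```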


8. Docker namespace

Docker isolates containers from the host through six kernel namespaces.
The six namespaces and what each one isolates:

- mnt (mount points): file systems
- user: the users and user groups of the process
- pid: process IDs
- uts: hostname and domain name
- ipc: semaphores, message queues, shared memory
- net: network devices, network protocol stacks, ports, etc.
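These memberships can be inspected on any Linux host: every process exposes one handle per namespace under /proc/&lt;pid&gt;/ns, and these are the kernel objects Docker's isolation is built on.

```shell
# List the namespace handles of the current shell. Two processes in the same
# namespace show the same inode number for the corresponding entry.
ls -l /proc/self/ns/

# The unshare tool (util-linux) starts a process in new namespaces, much as
# Docker does. Sketch, commented out because it needs root: a new UTS namespace
# lets the child change its hostname without affecting the host.
#   sudo unshare --uts sh -c 'hostname container-demo; hostname'
```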

9. Control groups (cgroups)

The six namespaces are managed together with cgroups. On CentOS, cgroups management requires kernel version 3.8; kernels 3.6 and 3.5 cannot be used.

Docker uses cgroups to restrict a container's use of host resources.

Four functions of cgroups:

①. Resource limiting: cgroups can cap the total resources used by a process group

②. Priority allocation: by controlling the number of CPU time slices and the amount of disk I/O bandwidth allocated, cgroups effectively control the scheduling priority of processes

③. Resource accounting: cgroups can record system resource usage, such as CPU time and memory consumption, which can be used for pay-per-use billing. cgroups also support suspending a process group: all of its resources are restricted so that it cannot use them. Note that this does not kill the program; it simply cannot use resources and sits in a waiting state

④. Process control: process groups can be suspended, resumed, and so on
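The resource-limiting function is what docker run's resource flags use: Docker translates flags like --memory and --cpus into cgroup settings on the host. A sketch that assumes the docker CLI may or may not be present; the cgroup hierarchy itself is an ordinary filesystem the limits land in.

```shell
# Ask Docker to confine a container via cgroups (skipped when docker is absent).
if command -v docker >/dev/null 2>&1; then
    docker run --rm --memory 256m --cpus 0.5 alpine echo "limited container ran" \
        || echo "docker present but daemon not reachable"
fi

# The cgroup hierarchy is visible on the host as a mounted filesystem.
ls /sys/fs/cgroup 2>/dev/null | head -n 5 \
    || echo "no cgroup filesystem mounted"
```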

10. The underlying principle of Docker

cgroups and namespaces


Origin blog.csdn.net/m0_62948770/article/details/127286723