Docker introduction and background knowledge

Articles and code are archived in the [GitHub repository: https://github.com/timerring/backend-tutorial ]; they can also be obtained from the public account [AIShareLab] by replying docker .

Overview

Why Docker emerged

Traditionally, before an application could run on a server, you had to install and configure all kinds of software: Java, RabbitMQ, MySQL, JDBC driver packages, and so on. Setting all of this up is troublesome enough, and none of it is cross-platform: an environment built on Windows has to be reinstalled from scratch on Linux. Even without crossing operating systems, migrating an application to another server running the same OS is tedious.

Traditionally, the output of software development and testing is a program, or a binary that can be compiled and executed (bytecode, in the case of Java). For these programs to run smoothly, the development team must also prepare complete deployment documentation so that the operations team can deploy the application: every configuration file and every component of the software environment has to be communicated explicitly. Even so, deployment failures are common. Docker breaks with the old notion that "the program is the application": through images, it packages the entire system environment the application needs, from the bottom up, everything except the operating system kernel, so that the application runs seamlessly across platforms.

Docker Philosophy

Docker is an open-source cloud project implemented in the Go language.

The main goal of Docker is "Build, Ship and Run Any App, Anywhere": to let a user's app (a web application, a database application, etc.) together with its runtime environment achieve "image once, run everywhere".

Simply put, Docker is a software container that solves the problems of runtime environment and configuration; it is a container virtualization technology that facilitates continuous integration and simplifies releases as a whole.
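The "image once, run everywhere" idea corresponds to the basic Docker workflow. A minimal sketch, assuming Docker is installed and the daemon is running; `nginx` is just an example image:

```shell
# Pull an image once from a registry.
docker pull nginx

# Run it anywhere a Docker engine exists: the image carries
# its entire runtime environment (libraries, configuration)
# with it, so no host-side setup is needed.
docker run -d --name web -p 8080:80 nginx

# Tear the container down; the image remains cached locally.
docker stop web && docker rm web
```

The same image produces an identical environment on any host, which is exactly the portability the slogan describes.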

Containers vs. Virtual Machines

A brief history of container development


Traditional virtual machine technology

A virtual machine (VM) is a solution that ships the entire environment along with the installation.

It runs one operating system inside another, for example CentOS 7 inside Windows 10. The application is unaware of this, because the virtual machine looks exactly like a real system; to the underlying host, however, the VM is just an ordinary file that can be deleted when no longer needed without affecting anything else. This kind of virtual machine runs a complete second system, keeping the relationships between application, operating system, and hardware unchanged.

Disadvantages of virtual machines:

  1. High resource usage
  2. Many redundant steps
  3. Slow startup

Container virtualization technology

Because of these shortcomings of virtual machines, Linux developed another virtualization technology:

Linux Containers (LXC)

A Linux container is a set of processes isolated from the rest of the system, running from an image that provides all the files those processes need. Because the image contains all of the application's dependencies, it is portable and consistent from development through testing to production. Rather than emulating a complete operating system, Linux containers isolate processes: everything the software needs to run is packaged into an isolated container. Unlike a virtual machine, a container is not bundled with a full operating system, only the libraries and settings the software requires. The system is therefore efficient and lightweight, and software deployed this way runs consistently in any environment.

Docker containers implement virtualization at the operating-system level and directly reuse the local host's operating system, whereas traditional virtual machines virtualize at the hardware level. Compared with traditional virtual machines, Docker starts faster and has a smaller footprint.
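One quick way to see this kernel reuse in practice (a sketch, assuming Docker is installed and an `alpine` image can be pulled) is to compare the kernel version reported on the host and inside a container:

```shell
# Kernel version reported by the host.
uname -r

# Kernel version reported inside an Alpine container.
# It is identical to the host's, because the container
# runs on the host kernel rather than booting its own,
# unlike a virtual machine.
docker run --rm alpine uname -r
```

A VM running the same command would instead report the kernel of its own guest operating system.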

Comparison

The differences between Docker and traditional virtualization:

  • Traditional virtual machine technology virtualizes a full set of hardware, runs a complete operating system on it, and then runs the required application processes on that system.
  • Processes in a container run directly on the host's kernel; the container has no kernel of its own and performs no hardware virtualization. Containers are therefore lighter and more portable than traditional virtual machines.
  • Containers are isolated from one another: each has its own file system, processes in different containers do not affect each other, and computing resources can be partitioned.

Advantages

Build once, run anywhere

  • Faster application delivery and deployment: with traditional development, a pile of installers and configuration instructions has to be provided, and after installation a complex configuration process must be followed before the application runs. After Dockerization, only a small set of container images needs to be delivered, and the image can be loaded and run directly in production. Installation and configuration are already built into the image, which greatly reduces the time spent on deployment, configuration, and verification.
  • More convenient upgrades and scaling: as microservice architectures and Docker develop, applications are increasingly composed of microservices, and building an application becomes like assembling Lego bricks, each Docker container a "brick". Upgrades become very easy, and when existing containers cannot keep up with the load, new containers can be spun up quickly from the image, shrinking scale-out time from days to minutes or even seconds.
  • Easier operations and maintenance: once containerized, the application running in production can be kept fully consistent with the one in development and test. The container completely encapsulates the application's environment and state, so inconsistencies in the underlying operating system no longer affect the application or introduce new bugs. When a fault occurs, it can be quickly reproduced and fixed using an identical container in the test environment.
  • More efficient use of computing resources: Docker is kernel-level virtualization and, unlike traditional virtualization, needs no additional hypervisor, so many container instances can run on one physical machine, greatly improving the CPU and memory utilization of the physical server.
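As a concrete illustration of "installation and configuration are built into the image", here is a minimal hypothetical Dockerfile for a small Python web application. The base image, file names, port, and entry point are assumptions for the sketch, not details from this article:

```dockerfile
# Minimal sketch: package a hypothetical Python app and its
# dependencies into one image, so the same artifact runs
# identically in development, test, and production.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached
# when only the application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

With such a file, `docker build -t myapp .` followed by `docker run -p 8000:8000 myapp` would be the entire deployment procedure, replacing the installers and configuration documents of the traditional workflow.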

References

  1. Official website: docker official website
  2. Registry: Docker Hub official website

Origin: blog.csdn.net/m0_52316372/article/details/131886255