Docker from entry to application (1): Introduction to Docker

Why does Docker exist

  • As developers, we constantly run into the same problem: after clicking Run, a program works fine on our own machine, but unexpected failures appear when it runs on someone else's machine or in the production environment. Each version of a project may also need a different runtime environment, and developing on a new computer means installing environments and software all over again, which is time-consuming and laborious; switch computers and you start from scratch. The natural question is whether this can be fixed at the root: ship the software together with its environment, so that installing it reproduces the original environment exactly and eliminates the "works on my computer" problem caused by inconsistent environments.

What is Docker

  • Docker is an open source cloud project written in Go. Docker's stated goal is "Build, Ship and Run Any App, Anywhere": package the user's app (a web application, a database application, or anything else) together with its runtime environment, so that it can be "packaged once, run anywhere".

  • Linux container technology emerged to solve exactly this problem, and Docker was developed on top of it. An application runs in a Docker container, and that container behaves identically on any operating system that runs Docker, which gives cross-platform, cross-server portability. Configure the environment once, and deploying to another machine becomes a one-step operation, which greatly simplifies operations.

    In a word, Docker is a software container that solves the problem of runtime environment and configuration; it is a container virtualization technology that facilitates continuous integration and streamlines the release process as a whole.
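
A minimal sketch of this "build, ship, run" workflow, assuming a small Python script app.py already exists in the build directory (the file name, the python:3.9-slim base image, and the myapp:1.0 tag are all illustrative placeholders):

```bash
# Bake the runtime environment into the image alongside the app itself.
cat > Dockerfile <<'EOF'
FROM python:3.9-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF

docker build -t myapp:1.0 .   # "Build": package the app and its environment once
docker run --rm myapp:1.0     # "Run": the same package behaves the same on any host
```

Once built, the image no longer cares what is installed on the host; any machine running Docker executes it identically.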

Docker virtualization technology

  • Compare this with traditional virtualization. A traditional virtual machine virtualizes a complete set of hardware, runs a full guest operating system on it, and then runs the required applications on that system (this is how VMware works). This approach consumes a lot of system resources and involves many redundant layers, and because an entire operating system has to boot, startup is slow as well.
  • Because of these shortcomings of virtual machines, Linux developed another virtualization technology: Linux Containers (LXC). Rather than simulating a complete operating system, Linux containers isolate processes. With containers, everything a piece of software needs to run can be packaged into one isolated unit. Unlike a virtual machine, a container is not bundled with a full operating system, only the libraries and settings the software requires, so the system becomes efficient and lightweight while guaranteeing that the software runs consistently wherever it is deployed. The application process in a container runs directly on the host's kernel: the container has no kernel of its own and there is no hardware virtualization, which is why containers are more portable and lighter than traditional virtual machines (the sketch after this list shows this directly).
  • Containers are isolated from one another: each container has its own file system, processes in different containers cannot affect each other, and computing resources can be partitioned per container.
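
Both properties are easy to observe from the command line. A small sketch, assuming Docker is installed and using the public alpine image purely for illustration:

```bash
# Containers share the host's kernel instead of booting their own OS:
uname -r                         # kernel version on the host
docker run --rm alpine uname -r  # a container reports the same kernel version

# Yet each container gets its own root filesystem and its own resource slice:
docker run --rm alpine ls /                          # alpine's files, not the host's
docker run --rm --memory=256m --cpus=1 alpine true   # hard caps on memory and CPU
```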

Docker Pros: Build Once, Run Anywhere

  1. Faster application delivery and deployment
    Traditionally, once development is finished, you deliver a pile of installers and configuration instructions, and after installation the software only runs once complex, document-driven configuration has been done. After Dockerization, you deliver just a few container image files, and loading and running the image in the production environment is enough: installation and configuration are already baked into the image, which saves a great deal of deployment, configuration, and verification time.
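
For example, an image can be handed over as a single file (myapp:1.0 is the placeholder image from the earlier sketch; in real projects pushing to and pulling from a registry is more common):

```bash
docker save -o myapp-1.0.tar myapp:1.0   # export the image to one deliverable file
# ...copy myapp-1.0.tar to the production host, then:
docker load -i myapp-1.0.tar             # import it there
docker run -d myapp:1.0                  # and run: configuration is already inside
```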

  2. More convenient upgrade and expansion
    As microservice architecture and Docker mature together, large applications are increasingly structured as microservices, and building an application becomes like assembling Lego: each Docker container is one "building block". Upgrades become very easy, and when existing containers can no longer keep up with the load, new containers can be spun up from the image almost instantly, shrinking scale-out time from days to minutes or even seconds.
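
A sketch of that minute-level scale-out, assuming the placeholder image myapp:1.0 serves HTTP on port 8080 (orchestrators such as Docker Compose or Kubernetes automate this, but even by hand it is just repeated docker run):

```bash
# Each new "building block" is just another container from the same image.
docker run -d -p 8081:8080 --name web1 myapp:1.0
docker run -d -p 8082:8080 --name web2 myapp:1.0
docker run -d -p 8083:8080 --name web3 myapp:1.0   # each one starts in seconds
docker ps --filter name=web                        # three identical instances
```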

  3. Easier system operation and maintenance
    Once an application is containerized, what runs in production can be kept fully consistent with what runs in development and test. The container completely encapsulates the environment and state the application depends on, so inconsistencies in the underlying infrastructure or operating system no longer affect the application or introduce new bugs. When the program misbehaves, the fault can be located and fixed quickly by running the identical container in a test environment.
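
A sketch of that debugging loop, again using the placeholder image and assuming it contains a shell:

```bash
docker pull myapp:1.0              # fetch the exact image running in production
docker run -it --rm myapp:1.0 sh   # open a shell inside an identical environment
# For byte-for-byte certainty, pin the image digest rather than a mutable tag:
# docker run -it --rm myapp@sha256:<digest> sh
```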

  4. More efficient use of computing resources
    Docker is kernel-level virtualization and, unlike traditional virtualization technologies, needs no extra hypervisor, so many container instances can run on a single physical machine, greatly improving the CPU and memory utilization of the physical server.
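
That density is easy to demonstrate. A sketch that starts twenty tiny containers on one machine, again using the public alpine image:

```bash
# Twenty containers share one kernel; no guest OS boots, so each starts instantly.
for i in $(seq 1 20); do
  docker run -d --name demo$i alpine sleep 3600
done
docker stats --no-stream                           # per-container CPU/memory footprint
docker rm -f $(docker ps -aq --filter name=demo)   # clean up the demo containers
```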

Docker download

Docker can be downloaded from the official site, https://www.docker.com; the next article in this series covers installation.

Next: Docker from entry to application (2): Docker installation

Origin: blog.csdn.net/Hong_pro/article/details/123177462