[Cloud Native] Detailed explanation of Docker (1): From virtual machine to container


1. Virtualization

To explain Docker clearly, we must first explain the concept of a container. To explain containers, we have to start from the operating system. Operating systems are too deep a topic for one or two books to cover in full, so here is a one-sentence summary: an operating system (OS) is the system software that manages a computer's hardware and software resources and provides common services for programs to run.

As hardware performance has grown and software has become more varied, two situations have become very common:

  • Surplus hardware performance: many computers spend much of their time with their hardware resources sitting idle. Ordinary home computers now ship with quad-core or six-core CPUs, yet outside of demanding workloads such as AAA gaming, video production, 3D rendering, and high-performance computing, the CPU is idle more than 90% of the time.
  • Software conflicts: business needs sometimes require two or more programs that conflict with each other, or different versions of the same program. For example, a few years ago I did web front-end work and had to test whether pages displayed correctly on different versions of IE, but Windows can only have one version of IE installed.

To resolve software conflicts, the only options used to be setting up multiple computers or installing multiple operating systems on the same machine. Both have obvious drawbacks: multiple computers cost too much, and installing and switching between multiple operating systems is a hassle. With hardware performance to spare, hardware virtualization naturally arose and spread.

So-called hardware virtualization is special software that simulates the hardware of one or more computers. On this simulated hardware, users can install and run operating systems (called guest operating systems, or Guest OS) and all kinds of applications, and the software forwards the Guest OS's and applications' accesses to hardware resources down to the underlying physical hardware.

To the Guest OS and the applications on it, such a virtual machine is exactly like an ordinary physical computer, except that performance may be worse. The hugely popular VMware Workstation is one such product, as are Oracle's VirtualBox and Microsoft's Virtual PC. Software of this kind has a special name: the hypervisor (virtual machine monitor).

Virtualization technology mainly serves to soak up the surplus capacity of high-performance physical hardware, to repurpose and reuse older, lower-capacity hardware, and to make the underlying physical hardware transparent to the software above it, so that physical hardware is used to the fullest.

Advantages of virtual machines

  • Resources can be allocated among different virtual machines, maximizing the use of hardware resources.
  • Compared with deploying applications directly on physical machines, virtual machines make applications easier to scale.
  • By virtualizing different physical resources with virtual machines, cloud services can be built quickly.

Disadvantages of virtual machines

The disadvantage of virtual machines is that the Guest OS itself consumes a lot of hardware resources. For example, installing VMware on Windows and booting a Guest OS that runs no applications at all already takes 2-3 GB of memory and 20-30 GB of disk space. Worse, to keep the application system performing well, each virtual machine usually needs extra memory reserved for it; many hypervisors support dynamic memory, but it generally degrades virtual machine performance. At this level of resource consumption a few virtual machines are acceptable, but running a dozen or several dozen at once multiplies the waste of hardware resources, especially since, in general, a large portion or even all of the Guest OSes are identical.

Could all applications share one operating system to cut the waste of hardware resources, yet still avoid software conflicts, runtime library conflicts included? Operating-system-level virtualization, the container, was proposed to solve exactly this problem, and Docker is a standardized implementation of the container.

2. Containerization

Container technology has been around for a long time; LXC, BSD Jails, and Solaris Zones are all examples:

Year  Technology
2014  Rocket (rkt)
2013  Docker
2011  LMCTFY
2008  Cloud Foundry
2007  AIX WPARs, Linux control groups
2006  Process Containers
2004  Solaris Zones
2001  Linux-VServer
2000  FreeBSD Jails
1979  Unix V7 (chroot)

Containerization is virtualization at the application and operating-system level. Containers provide a standard way to package an application's code, runtime, system tools, system libraries, and configuration into a single unit. All containers share the one kernel (operating system) installed on the hardware.
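To make "isolated, yet sharing one kernel" concrete, here is a minimal Go sketch (Linux-only, run as root) that starts a shell in its own hostname and PID namespaces, the kind of kernel primitive containers are built on. It is an illustration under those assumptions, not Docker's actual code.

```go
// Minimal sketch of OS-level isolation: run a shell in new UTS (hostname)
// and PID namespaces. Linux-only, requires root. The shell still shares
// the host kernel -- no Guest OS is booted.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "run:", err)
		os.Exit(1)
	}
}
```

Inside that shell, `echo $$` prints 1 (the shell is PID 1 in its own namespace) and changing the hostname does not affect the host, yet no second operating system was started.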

Compared with virtual machines, containers have the following advantages:

  • Fast startup: there is no virtual hardware to initialize and no Guest OS to boot, which saves a great deal of startup time; containers work "out of the box".
  • Low resource usage: there is no Guest OS eating memory, no memory to reserve for a virtual machine, and no need to install or run runtime/OS services the app does not require, so the memory and storage footprints are far smaller. If a given server can run a dozen or so virtual machines, it can usually run hundreds of containers without strain, provided, of course, that no single containerized application hogs resources itself.

3. The origin of Docker

In 2010, a few young IT engineers founded a company called dotCloud in San Francisco. dotCloud was a Platform-as-a-Service (PaaS) provider, and under the hood its platform was built on Linux's LXC container technology.
To make these containers easier to create and manage, dotCloud developed a set of internal tools in Go, the language released by Google; the tools were later named Docker. And so Docker was born.

LXC was the cornerstone beneath Docker, but as of version 0.9 Docker changed course and introduced libcontainer, an execution driver built in Go. With the libcontainer project, Docker no longer depends on Linux components (LXC, libvirt, systemd-nspawn, ...) to handle namespaces, control groups, capabilities, AppArmor profiles, and network interfaces. From that point on, LXC became optional.
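Control groups, one of the kernel features libcontainer wraps, can also be exercised directly. Here is a hedged Go sketch that caps a process's memory via the cgroup filesystem; it assumes Linux with cgroup v2 mounted at /sys/fs/cgroup and root privileges, and the group name "demo" is just a placeholder.

```go
// Put the current process under a 64 MiB memory cap using cgroup v2.
// Assumes /sys/fs/cgroup is a cgroup2 mount and we run as root.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	cg := "/sys/fs/cgroup/demo" // placeholder group name
	if err := os.MkdirAll(cg, 0o755); err != nil {
		panic(err)
	}
	// 64 MiB limit, enforced by the kernel for every process in the group.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0o644); err != nil {
		panic(err)
	}
	// Move ourselves into the group; child processes inherit membership.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("process", pid, "now runs under a 64 MiB memory cap")
}
```

This is essentially the bookkeeping Docker does for you when you pass resource flags such as `--memory` to `docker run`.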

LXC was deprecated in Docker 1.8, and in Docker 1.10 it was removed entirely. With libcontainer, which packages many Linux kernel features into a single, stable, independent library, Docker's era of independence had finally arrived.

As its logo suggests, Docker's idea comes from shipping containers. What problem does the shipping container solve? On a large ship, goods of every kind can be stacked neatly because containers standardize them, and the containers do not affect one another. There is no longer any need for a dedicated fruit ship or a dedicated chemicals ship; as long as the goods are packed into their own containers, one big ship can carry them all.

When the technology was born, Docker did not attract much industry attention, and dotCloud, a small start-up, was struggling under fierce competition.

Just as they were about to go under, the idea of "open source" came to mind. What is open source? It means opening the source code: taking the source code of a previously proprietary program and making it public, then letting everyone participate and contribute code and opinions.

Some software is open source from the very beginning. Other software fails to take off, but its creators are unwilling to give it up and choose to open-source it instead; if you cannot feed yourself, you can live on food from a hundred households. In March 2013, one of dotCloud's founders and the father of Docker, 28-year-old Solomon Hykes, officially decided to open-source the Docker project.

Unopened it was nothing; once opened, it was astonishing. More and more IT engineers discovered Docker's advantages and flocked to the Docker open source community. Docker's popularity grew at a jaw-dropping pace.

In the month it was open-sourced, Docker 0.1 was released, and a new version followed every month after that. On June 9, 2014, Docker 1.0 was officially released.

By this time Docker had become the single most popular open source technology in the industry, not merely one of the most popular. Even giants like Google, Microsoft, Amazon, and VMware favored it and pledged their full support.

After Docker became popular, dotCloud simply changed the company name to Docker Inc.

4. Why choose Docker

(1) More efficient use of system resources

Because containers need neither hardware virtualization nor a full guest operating system, Docker uses system resources more efficiently. In application execution speed, memory consumption, and file storage speed alike, it beats traditional virtual machine technology, so a host with the same configuration can often run more applications.

(2) Faster startup time

Traditional virtual machine technology often takes minutes to start an application service, while Docker container applications run directly on the host kernel with no full operating system to boot, so startup takes seconds or even milliseconds. This greatly shortens development, testing, and deployment cycles.
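The difference is easy to measure. The Go sketch below times a full container round trip (create, run a no-op, tear down); it assumes the docker CLI is installed and that the alpine image has already been pulled, otherwise the first run also measures the download.

```go
// Time how long it takes to start a container, run a no-op, and exit.
// Assumes the docker CLI is on PATH and the alpine image is pulled.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("docker", "run", "--rm", "alpine", "true").CombinedOutput()
	if err != nil {
		fmt.Printf("docker run failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("container started, ran, and exited in %v\n", time.Since(start))
}
```

On a typical machine this round trip finishes in around a second or less, while booting even a minimal virtual machine for the same no-op would take far longer.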

(3) Consistent operating environment

A common headache in development is environment consistency. Because development, test, and production environments differ, some bugs go undiscovered during development. A Docker image provides a complete runtime environment, everything except the kernel, which guarantees a consistent application runtime environment, so "this code works fine on my machine" problems no longer arise.

(4) Continuous delivery and deployment

What development and operations (DevOps) staff want most is to create or configure something once and have it run correctly anywhere.

With Docker, continuous integration, continuous delivery, and continuous deployment can all be achieved through customized application images. Developers build images with a Dockerfile and feed them to a Continuous Integration (CI) system for integration testing, while operations staff deploy those same images straight into production, even automatically via a Continuous Delivery/Deployment (CD) system, as in the sketch after this paragraph.
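To ground this, here is a minimal Go sketch of the CI step that turns a Dockerfile into a deployable image. The tag myapp:ci is a placeholder of ours, and the docker CLI is assumed to be available on the build agent; publishing would follow the same pattern with `docker push` once the tag includes your registry namespace.

```go
// Build an image from the Dockerfile in the current directory, the same
// step a CI system runs before integration tests and deployment.
// The tag "myapp:ci" is a placeholder; docker CLI assumed installed.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "build", "-t", "myapp:ci", ".")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // stream build output
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "image build failed:", err)
		os.Exit(1)
	}
	fmt.Println("built myapp:ci, ready for integration tests and deployment")
}
```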

Moreover, because the Dockerfile makes image construction transparent, the development team can see exactly what environment the application runs in, and the operations team can see exactly what the application needs to run, which helps deploy the image properly in production.

(5) Easier Migration

Because Docker guarantees a consistent execution environment, migrating applications becomes easier. Docker runs on many platforms, whether physical machine, virtual machine, public cloud, private cloud, or even a laptop, and the result is the same everywhere. Users can therefore migrate an application from one platform to another with ease, without worrying that a change of runtime environment will keep it from running properly.

(6) Easier maintenance and expansion

Docker's layered storage and image technology make it easier to reuse the parts that applications have in common, and make maintaining and updating applications simpler as well; extending a base image into a customized one is also very easy. In addition, the Docker team, together with various open source project teams, maintains a large number of high-quality official images that can be used in production directly or as bases for further customization, greatly lowering the cost of producing images for application services.

5. Comparison of containers and virtual machines

The figure below compares Docker with traditional virtualization. As it shows, containers virtualize at the level of the operating system and directly reuse the host's operating system, while the traditional approach virtualizes at the hardware level.

[Figure: containers virtualize at the operating-system level and share the host kernel; traditional virtual machines virtualize at the hardware level, each with its own Guest OS]
Compared with traditional virtual machines, Docker has the advantages of fast startup speed and small footprint.

That wraps up the conceptual introduction to Docker. Next, we will talk about Docker's architecture and how it works.


Reference: Docker's past and present
