[Docker] The road to advancement: (1) Development history of container technology

What is a container

As an advanced virtualization technology, containers have become the standard infrastructure for software development and operations in the cloud-native era. Before diving into container technology, let's first understand virtualization technology.

What is virtualization technology?
The first virtualization technology in computer history was implemented in 1961, when the IBM 709 computer first divided CPU time into many extremely short (1/100 second) time slices, each of which was used to perform a different task. By polling these time slices, one physical CPU could be virtualized, or disguised, as multiple CPUs, each of which appeared to be running at the same time. This was the prototype of the virtual machine.
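The time-slicing idea described above can be sketched as a round-robin loop: one "CPU" hands out short slices to each task in turn, so every task appears to make progress simultaneously. This is a minimal illustrative simulation, not how a real scheduler is implemented; the function name and work-unit representation are invented for the example.

```python
from collections import deque

def round_robin(tasks, slice_units=1):
    """Simulate one CPU shared by time slicing.

    `tasks` maps a task name to the number of work units it needs.
    Each loop iteration gives the front task one short slice; an
    unfinished task goes to the back of the queue, so all tasks
    appear to run 'at the same time'.
    """
    queue = deque(tasks.items())  # (name, remaining work units)
    schedule = []                 # the order in which slices were granted
    while queue:
        name, remaining = queue.popleft()
        schedule.append(name)            # this time slice goes to `name`
        remaining -= slice_units
        if remaining > 0:
            queue.append((name, remaining))  # not done: re-queue for another turn
    return schedule

# Three tasks share one CPU; slices interleave instead of running serially.
print(round_robin({"A": 2, "B": 2, "C": 1}))  # ['A', 'B', 'C', 'A', 'B']
```

Even though each task runs alone during its slice, the interleaved schedule is what makes a single CPU look like several slower ones, which is exactly the illusion early time-sharing systems relied on.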

Why do you need containers?

Virtualization technology has become a widely recognized way to share server hardware resources. Containers, however, differ significantly from traditional virtual machines.
A virtual machine management system usually needs to virtualize a complete hardware environment for each virtual machine, and a virtual machine typically contains an entire operating system along with its applications. In these respects, a virtual machine closely resembles a real physical computer. Because it carries a full operating system, a virtual machine generally occupies a relatively large amount of disk space, usually several GB; with a lot of software installed, it can occupy dozens or even hundreds of GB. Starting a virtual machine is also relatively slow, usually taking several minutes.

The development history of container technology

Now that we have a general understanding of virtualization technology, we can turn to the history of containers. Although the concept of containers only became popular worldwide after the emergence of Docker, countless pioneers explored this forward-looking virtualization technology before Docker.

Let’s first take a look at the historical chronology of the development of container technology:

  • In 1979, the Unix V7 system added chroot, which builds an independent virtual file system view for an application.
  • In 1999, FreeBSD 4.0 shipped jail, the first commercialized OS-level virtualization technology.
  • In 2004, Solaris 10 shipped Solaris Zones, the second commercialized OS-level virtualization technology.
  • In 2005, OpenVZ was released, a very important pioneer of OS-level virtualization on Linux.
  • From 2004 to 2007, Google used OS-level virtualization technologies such as cgroups on a large scale internally.
  • In 2006, Google open sourced its internal "process containers" technology, which was subsequently renamed cgroups.
  • In 2008, cgroups was merged into the Linux kernel mainline.
  • In 2008, the LXC (Linux Containers) project produced an early prototype of Linux containers.
  • In 2011, Cloud Foundry developed the Warden system, a prototype of a complete container management system.
  • In 2013, Google open sourced its internal container system as Let Me Contain That For You (LMCTFY).
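The earliest entry in the timeline, chroot, isolates a process by re-rooting its view of the file system: every absolute path the process sees resolves under the new root, and ".." cannot climb above it. Below is a pure-Python sketch of that path-confinement rule (the function name `resolve_in_root` and the example jail directory are invented for illustration; real chroot is a kernel system call, not path rewriting):

```python
import os.path

def resolve_in_root(root, path):
    """Sketch of the confinement chroot provides.

    Normalize the requested path as if it were absolute, which clamps
    any '..' sequences at '/', then re-root the result under `root`.
    The caller can never name a file outside `root`.
    """
    normalized = os.path.normpath("/" + path.lstrip("/"))
    return os.path.join(root, normalized.lstrip("/"))

# A normal lookup lands inside the jail...
print(resolve_in_root("/srv/jail", "/etc/passwd"))       # /srv/jail/etc/passwd
# ...and trying to escape upward with '..' is clamped at the jail root.
print(resolve_in_root("/srv/jail", "../../etc/passwd"))  # /srv/jail/etc/passwd
```

Later technologies in the list (jail, Zones, cgroups, LXC) layered resource and namespace isolation on top of this same basic idea of giving each application its own restricted view of the system.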

How Docker containers work

Docker containers and traditional VMs differ in their technical implementation. Figure 1-3 below shows the logical composition of a VM and a Docker container:

[Figure 1-3: Logical composition of a virtual machine vs. a Docker container]

Origin blog.csdn.net/sinat_36528886/article/details/134891174