Architect's Road: Virtualization Technology and Docker Containers

Architects usually introduce virtualization technology to improve the utilization of system and hardware resources.

Virtualization is a resource-management technique that abstracts physical resources and partitions them, breaking the limits of the physical layout and maximizing resource utilization.
Today we will look at traditional virtualization technology and container technology, represented by Docker.

Virtualization

Virtualization means running multiple "virtual servers" on one physical server. Such a virtual server is also called a virtual machine (VM, Virtual Machine).

The original goal of virtualization was to reduce the number of physical servers and improve resource utilization: a physical server is partitioned into several smaller, isolated virtual servers, each of which can host deployments.

Simply put, virtualization brings the following conveniences to our work:

  • Fewer physical servers
  • Higher resource utilization
  • Environment isolation
  • Resource isolation
  • Virtual machines can be delivered in minutes
  • Virtual machine hardware (vCPU, memory, disk) can be expanded dynamically
  • Virtual machines can be migrated dynamically between compute nodes

Virtualization technology

Server virtualization is mainly implemented with a hypervisor, also called a VMM (Virtual Machine Monitor). "Hypervisor" is not one specific piece of software but a general term for a class of software: VMware, KVM, Xen, and VirtualBox are all hypervisors.

You are probably familiar with VMware's VMware Workstation: when learning Linux, many people install VMware on Windows and then create Linux virtual machines inside it.

Linux servers generally use KVM as the virtualization tool. KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware. It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure, plus a processor-specific module (kvm-intel.ko or kvm-amd.ko).
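On a Linux server you can quickly check whether the host can run KVM. A minimal sketch, assuming a Linux machine with /proc mounted:

```shell
# Count logical CPUs that expose hardware virtualization extensions
# (vmx = Intel VT-x, svm = AMD-V). Zero means KVM cannot use
# hardware-assisted full virtualization on this host.
count=$(grep -cE 'vmx|svm' /proc/cpuinfo 2>/dev/null) || true
echo "virtualization-capable logical CPUs: ${count:-0}"

# With KVM active, kvm plus kvm_intel or kvm_amd appear in the
# kernel's loaded-module list.
mods=$(grep -o 'kvm[a-z_]*' /proc/modules 2>/dev/null | sort -u) || true
echo "loaded kvm modules: ${mods:-none}"
```

If the first count is zero, enable VT-x/AMD-V in the BIOS (or note that nested virtualization is off) before expecting KVM guests to run.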

Virtualization platform architecture

The virtualization platform mainly has the following three architectures:

  • Local storage
    The virtual machine runs on a physical machine, and its disk lives on that machine's local disk.
  • Centralized storage
    The virtual machine runs on a physical machine, but its disk lives on shared storage. If a host goes down, the platform can bring the VM up on another physical machine, enabling dynamic migration of virtual machines.
  • Distributed storage
    In a distributed storage architecture, also called "converged compute and storage", a virtual machine's disk is split into many blocks that are spread across all the machines in the cluster, maximizing storage capacity.
    A virtual machine's IO is no longer limited to a single machine's disk; it draws on the disks of the entire cluster. This smooths IO peaks and troughs, improves the VM's IO capability, and protects data through redundancy.
    The drawback of this architecture is a heavy dependence on network stability: if the network goes down, all virtual machines go down with it.

High availability mechanism of virtualization platform

The high-availability mechanism of a virtualization platform means that when the physical machine hosting a virtual machine fails, the VM can quickly be switched to another physical machine.
The prerequisite for failover is a failure of the physical machine; there is no detection of faults inside the VM itself, so this is an incomplete high-availability solution. It rests on the following two capabilities:

  • Dynamic migration of virtual machine computing nodes
  • Live migration of virtual machine disks

Virtualization principles

Virtual machines exist to divide large computing resources into many small ones and allocate them flexibly, following the principle of divide and conquer. If a single VM requests more than half the resources of a physical machine, that principle is violated and it is simpler to use a physical machine directly. We therefore generally follow these principles when using virtualization:

  • A single virtual machine should occupy no more than 40% of the host's resources
  • Do not host disk-IO-intensive components (databases, message queues, search engines) in VMs
  • For a dual-socket 2U server, a consolidation ratio of 1:4 to 1:10 is ideal
  • CPU can be overcommitted to some extent, up to about 2x
  • Memory is generally not overcommitted
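As a rough illustration of these rules, here is a back-of-the-envelope sizing sketch; the host specification (32 cores, 256 GB RAM) is a made-up assumption, not a recommendation:

```shell
# Hypothetical dual-socket 2U host.
HOST_CORES=32
HOST_MEM_GB=256

# CPU may be overcommitted up to about 2x; memory is not overcommitted.
VCPU_BUDGET=$((HOST_CORES * 2))
MEM_BUDGET_GB=$HOST_MEM_GB

# A single VM should stay under 40% of the host's resources.
MAX_VM_MEM_GB=$((HOST_MEM_GB * 40 / 100))

echo "vCPU budget:            $VCPU_BUDGET"      # 64
echo "memory budget (GB):     $MEM_BUDGET_GB"    # 256
echo "largest single VM (GB): $MAX_VM_MEM_GB"    # 102
echo "consolidation 1:4-1:10 -> roughly 4 to 10 VMs on this host"
```

The 40% cap keeps any one VM from monopolizing the host, and the untouched memory budget reflects the rule that memory, unlike CPU, is not safely overcommitted.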

Docker container

After using virtualization for a while, some problems become apparent:

  • Each virtual machine's guest operating system consumes host resources of its own, so server resource utilization still has room to improve
  • When a service running in a virtual machine needs to be migrated, the entire VM must be migrated, which is a complicated process

To solve these problems, containers were introduced. Docker, which you often hear about, is a tool for creating containers: an application container engine.
Containers are also a form of virtualization, but a "lightweight" one. Their purpose is the same as a virtual machine's: to create an isolated environment. The key difference is that virtual machines isolate resources at the operating-system level, while containers essentially isolate resources at the process level.
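To make this concrete, a container image is usually described by a Dockerfile. A minimal sketch (the base image and command are illustrative, not from the original article):

```dockerfile
# Start from a tiny base image and run a single process.
FROM alpine:3.19
CMD ["echo", "hello from a container"]
```

Built with `docker build -t hello-demo .` and run with `docker run --rm hello-demo`, this container shares the host's kernel: only the process inside it is isolated (via namespaces and cgroups), which is why a container starts in seconds instead of booting a whole guest OS.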


Virtualization VS Container

Compared with traditional virtual machines, Docker has clear advantages: it starts in seconds; resource utilization is high (one host can run thousands of Docker containers at once); and its footprint is small, at the MB or even KB level, while a virtual machine usually needs several GB to tens of GB.

The following figure makes the gap in performance and resource utilization between virtualization and containers very clear:

[Figure: performance and resource-utilization comparison of VMs vs. containers]

Container Orchestration

Docker makes it easy to create containers, but once the number of containers reaches a certain scale, orchestration tools are needed to manage them, that is, container lifecycle management tools.

Container orchestration tools provide scheduling and cluster management, and supply the basic mechanisms for scaling container-based applications; they decide where containers run and how they interact.

There are many container orchestration tools, including Docker Swarm, Kubernetes, Mesos, and Rancher. The figure below compares their characteristics and respective strengths.

[Figure: comparison of container orchestration tools]
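As a taste of what orchestration looks like in practice, here is a minimal Kubernetes Deployment; the names, image, and replica count are illustrative. The orchestrator keeps the desired number of container copies running and reschedules them if a node fails:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo              # illustrative name
spec:
  replicas: 3                 # desired number of container copies
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this declares the desired state; the scheduler decides which cluster nodes actually run the three containers.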

Summary

Today we took a quick look back at virtualization technology and containers. Both were created to improve resource utilization. As for the difference between the two, remember one thing: virtual machines isolate resources at the operating-system level, while containers essentially isolate resources at the process level.

For more exciting content, please go to: http://javadaily.cn


Origin blog.csdn.net/jianzhang11/article/details/106870892