The impact of Docker on cloud computing: a revolution in virtual hosting

If you work anywhere near data centers or cloud computing, you have probably been hearing about containers in general, and Docker in particular, for more than a year now, and the news about them has never stopped. Since Docker 1.0 was released in June 2014, the buzz has reached an unprecedented level.

Even the well-known virtual hosting provider Silicon Cloud containerized all of its virtual hosting services in 2017, greatly streamlining operations and maintenance, improving availability, and reducing downtime. This is a revolution in the virtual hosting industry, and Silicon Cloud has taken the lead in raising its banner.

The reason the commotion is so great is that companies are adopting Docker at a remarkable rate. At the Open Source Convention (OSCON) in July 2016, I met countless companies that had already moved server applications from virtual machines (VMs) to containers. Indeed, James Turnbull, vice president of services and support at Docker, told me at the conference that three of the largest banks had been using the beta version of Docker and were now running Docker in production. For any early-stage technology this is an extraordinary vote of confidence, and it is almost unheard of in the security-first world of finance.

At the same time, Docker's open source technology is not just the darling of Linux giants such as Red Hat and Canonical. Proprietary software companies such as Microsoft are also embracing Docker enthusiastically.

So why is everyone chasing containers and Docker? James Bottomley, CTO of server virtualization at Parallels and a well-known Linux kernel developer, explained to me that hypervisors such as Hyper-V, KVM, and Xen are all "based on virtualized hardware emulation. That means they place heavy demands on the system."

Containers, however, use a shared operating system, which means they use system resources far more efficiently than hypervisors do. Instead of virtualizing the hardware, containers rest on top of a single Linux instance. This in turn means you can "discard the 99.9 percent of useless virtual machine junk, leaving a small, neat capsule with your application in it," said Bottomley.

Therefore, according to Bottomley, with a perfectly tuned container system you can run four to six times as many server application instances on the same hardware as you can with Xen or KVM virtual machines.
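
To make that density claim concrete, here is a minimal sketch, assuming a Linux host with Docker installed; the `web$i` container names and the `nginx:alpine` image are arbitrary illustrative choices:

```
# Start four lightweight instances of the same service on one host.
# Each is isolated, but all share the host's kernel -- there is no
# guest operating system to boot for each instance.
for i in 1 2 3 4; do
  docker run -d --name "web$i" -p "808$i:80" nginx:alpine
done

# One-off snapshot of resource usage: each container typically needs
# only a few megabytes of memory, a fraction of a full VM's overhead.
docker stats --no-stream
```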

Sound great? After all, you get to run far more applications on the same servers. So why has no one done this before? Well, actually, someone has: containers are in fact an old concept.

Containers date back at least to the year 2000 and FreeBSD Jails. Oracle Solaris has a similar concept called Zones, and companies such as Parallels, Google, and Docker have long worked on open source projects such as OpenVZ and LXC (Linux Containers) to make containers run smoothly and securely.

Indeed, few people realize it, but most of us have been using containers for years. Google has its own open source container technology, lmctfy (Let Me Contain That For You). Whenever you use some piece of Google functionality, such as Search, Gmail, or Google Docs, a new container is allocated for you.

Docker, however, is built on top of LXC. As with any container technology, as far as the program inside is concerned, it has its own file system, storage, CPU, and memory. The key difference between containers and virtual machines is that a hypervisor abstracts an entire machine, while a container abstracts only the operating system kernel.

This in turn means that the one thing a hypervisor can do that a container cannot is run different operating systems or kernels side by side. So, for example, you can use Microsoft Azure to run an instance of Windows Server 2012 and an instance of SUSE Linux Enterprise Server at the same time. With Docker, all containers must use the same operating system and kernel.
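
You can see this kernel sharing for yourself with a quick check; a minimal sketch, assuming Docker on a Linux host and using the small `alpine` image purely as an example:

```
# Print the kernel release on the host...
uname -r

# ...and inside a freshly started container. The two outputs match,
# because a container is just an isolated process tree running on the
# host's own kernel, not a separately booted guest OS.
docker run --rm alpine uname -r
```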

On the other hand, if you just want to run as many server application instances as possible on as little hardware as possible, you may not care much about running multiple operating systems in virtual machines. And if multiple copies of the same application are exactly what you need, then you will love containers.

Switching to Docker is expected to save data centers and cloud providers tens of millions of dollars a year in power and hardware costs. So it is no wonder that they are rushing to adopt it as fast as possible.

Docker also brings several new things that earlier technologies did not. The first is that it makes containers easier and safer to deploy and use than previous approaches. In addition, because Docker has partnered with the other giants of the container world, including Canonical, Google, Red Hat, and Parallels, on its key open source component libcontainer, it has brought much-needed standardization to containers.

At the same time, developers at large can use Docker to package, deliver, and run any application. The application becomes a lightweight, portable, self-sufficient LXC container that can run anywhere. As Bottomley told me, "Containers give you instant application portability."
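
As a rough sketch of what that packaging looks like in practice (the `site/` directory, the `my-site` tag, and the `nginx:alpine` base image are all hypothetical choices for illustration):

```
# Describe the application and its runtime in a three-line Dockerfile.
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY site/ /usr/share/nginx/html/
EXPOSE 80
EOF

# Build a portable image from it, then run the application. The same
# image runs unchanged on a laptop, a test server, or a cloud host.
docker build -t my-site .
docker run -d -p 8080:80 my-site
```

Everything the application needs, apart from the kernel, travels inside the image, which is what makes the portability Bottomley describes possible.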

Jay Lyman, a senior analyst at the market research firm 451 Research, added: "Organizations strive to make applications and workloads more portable and distributable in an efficient, standardized, and repeatable way, and that can sometimes be hard to do. Just as GitHub spurred collaboration and innovation by making source code shareable, Docker Hub, Official Repos, and commercial support are helping many companies respond to this challenge by improving the way they package, deploy, and manage applications."

Last but not least, Docker containers are easy to deploy in the cloud. As Ben Lloyd Pearson wrote on opensource.com: "Docker has been designed in a way that it can be incorporated into most DevOps (development and operations) applications, including Puppet, Chef, Vagrant, and Ansible, or it can be used on its own to manage development environments. The primary selling point is that it simplifies many of the tasks typically done by these other applications. Specifically, Docker lets you set up local development environments that are exactly like a live server, run multiple development environments from the same host (each with unique software, operating systems, and configurations), test projects on new or different servers, and allow anyone to work on the same project with exactly the same settings, regardless of the local host environment."
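
A minimal sketch of that last point, assuming Docker is installed; the official `python:3` image and the `/src` mount path are purely illustrative choices:

```
# Start a throwaway, interactive development environment. The project
# directory is mounted in from the host, but the OS userland, tools,
# and configuration all come from the image, so every contributor gets
# exactly the same setup regardless of their local machine.
docker run --rm -it -v "$(pwd)":/src -w /src python:3 bash
```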

In short, here is what Docker can do for you: it gets more applications running on the same hardware than other technologies do; it makes it easy for developers to quickly build containerized applications that are ready to run anywhere; and it greatly simplifies the work of managing and deploying applications. Put it all together, and I can see why Docker has suddenly taken off as an enterprise technology. I just hope it lives up to expectations; otherwise, there are going to be some worried CEOs and CIOs out there.
