[Reprint] Packed with practical knowledge! Understand Docker and K8S in 10 minutes

In 2010, a few young IT engineers founded a company called "dotCloud" in San Francisco, USA.

The company mainly provided PaaS cloud computing services, built on container technology related to LXC.

LXC (Linux Containers) is an operating-system-level container virtualization technology built into Linux.

Later, dotCloud simplified and standardized its own container technology and named it Docker.

When Docker was born, it attracted little attention from the industry, and dotCloud, a small start-up, was struggling under fierce competition.

Just when they were about to give up, the idea of "open source" came to mind.

What is "open source"? Open source means open source code. That is to open the source code of the originally confidential program to everyone, and then let everyone participate and contribute code and opinions.


Some software is open source from the start. Other software cannot make it commercially, but its creators are unwilling to give it up, so they open-source it: if you cannot feed yourself, let the whole community chip in.

In March 2013, 28-year-old Solomon Hykes, one of the founders of dotCloud and the father of Docker, officially decided to open source the Docker project.

Solomon Hykes (just left Docker this year)

Before it was open-sourced, nothing much happened; once it was, the results were astonishing.

More and more IT engineers discovered the advantages of Docker, and then flocked to join the Docker open source community.

Docker's popularity rose at a jaw-dropping speed.

In the month it was open-sourced, Docker 0.1 was released, and a new version followed every month thereafter. On June 9, 2014, Docker 1.0 was officially released.

By then Docker had become the most popular open-source technology in the industry, bar none. Even giants like Google, Microsoft, Amazon, and VMware favored it and expressed their full support.

After Docker became popular, dotCloud simply changed the company name to Docker Inc.

Why are Docker and container technology so popular? To put it bluntly, because they are "lightweight".

Before container technology, the industry darling was the virtual machine. Representative virtual machine technologies are VMware and OpenStack.

Many people have used virtual machines. With a virtual machine, you install a piece of software on your operating system and use it to simulate one or more "sub-computers".

Virtual machines are like "sub-computers"

In the "sub-computer", you can run programs like a normal computer, such as opening QQ. If you want, you can conjure up several "sub-computers" with QQ running on them. The "sub-computer" and "sub-computer" are isolated from each other and do not affect each other.

A virtual machine is one kind of virtualization technology. Container technology such as Docker is also virtualization, but of a lightweight kind.

Although virtual machines can isolate many "sub-computers", they take up more space, start more slowly, and the virtual machine software may cost money (VMware, for example).

Container technology happens to have none of these drawbacks. It does not need to virtualize the entire operating system, but only needs to virtualize a small-scale environment (similar to a "sandbox").

sandbox

Containers start fast, in seconds. They also use resources efficiently (one host can run thousands of Docker containers at the same time) and take up very little space: virtual machines generally need a few GB to tens of GB, while containers need only MBs or even KBs.

Comparing containers and virtual machines

Because of this, container technology has been warmly welcomed and sought after, and has developed rapidly.

Let's look at Docker specifically.

Note that Docker itself is not a container; it is a tool for creating containers, an application container engine.

If you want to understand Docker, just look at its two slogans.

The first is "Build, Ship and Run".

That is, "build, send, run", three tricks.

For example:

I came to a vacant lot and wanted to build a house, so I moved stones, chopped wood, drew blueprints, and after a while, I finally built the house.

After living there a while, I wanted to move to another vacant lot. Following the old method, I would have to move stones, chop wood, draw blueprints, and build the house all over again.

However, an old witch came and taught me a kind of magic.

With this magic, I can make a copy of the house I built, an "image", and put it in my backpack.

When I reach another vacant lot, I use this "image" to conjure up a copy of the house, place it there, and move right in.

How about it? Isn't it amazing?

Hence Docker's second slogan: "Build once, Run anywhere".

The three core concepts of Docker technology are:

  • Image

  • Container

  • Repository

In the example just now, the "image" placed in the backpack is the Docker image, the backpack itself is the Docker repository, and the house conjured up on the vacant lot is the Docker container.

To put it bluntly, a Docker image is a special file system. Besides the programs, libraries, resources, and configuration files a container needs at runtime, it also includes configuration parameters prepared for runtime (such as environment variables). Images contain no dynamic data, and their contents do not change after they are built.
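To make this concrete, here is a minimal sketch using the Docker SDK for Python (installed with `pip install docker`); it assumes a local Docker daemon is running, and the image name is just an illustrative example, not part of the original article.

```python
import docker  # Docker SDK for Python: pip install docker

# Connect to the local Docker daemon (assumes Docker is installed and running)
client = docker.from_env()

# "Build/Ship": pull an image, the read-only template described above
image = client.images.pull("hello-world", tag="latest")
print("Pulled image:", image.tags)

# "Run": start a container, a running instance created from that image
output = client.containers.run("hello-world", remove=True)
print(output.decode())
```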

Back to the analogy: every house conjured from the same image is identical, but it does not come with daily necessities; whoever moves in buys their own.

Each image can conjure up one kind of house. So, I can have multiple images!

Say I built a European-style villa and generated an image from it. Another person built a Chinese courtyard house and generated an image of that. Someone else built an African thatched hut and made an image too...

This way we can exchange images: you use mine, I use yours. Wouldn't that be great?

And so a large public repository came into being.

The Docker Registry service (like a warehouse keeper) is responsible for managing Docker images.

Not every image created by just anyone is acceptable; what if someone builds a problematic house?

Therefore, the Docker Registry service is very strict in the management of images.

The most commonly used public Registry service is the official Docker Hub, which is also the default Registry and hosts a large number of high-quality official images.
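As a quick illustration (again with the Docker SDK for Python, a sketch rather than the only way to do this): an image name with no registry prefix is pulled from the default registry, Docker Hub, while another registry is addressed by prefixing its hostname. The hostname below is a placeholder.

```python
import docker

client = docker.from_env()

# With no registry prefix, the default registry (Docker Hub) is used
nginx = client.images.pull("nginx", tag="latest")

# A private registry is addressed by prefixing its hostname
# ("registry.example.com" is only a placeholder here)
# private = client.images.pull("registry.example.com/myteam/myapp", tag="1.0")

# List the images now stored locally
for img in client.images.list():
    print(img.tags)
```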

Well, after talking about Docker, let's turn our attention to K8S.

Just as Docker container technology was in full swing, people found it hard to apply Docker to real business systems: orchestration, management, and scheduling were all difficult. What was urgently needed was a management system that could manage Docker and containers in a more advanced, more flexible way.

At this time, K8S appeared.

K8S is a container-based cluster management platform. Its full name is Kubernetes.

The word Kubernetes comes from Greek and means helmsman or pilot. K8S is its abbreviation, replacing the eight characters "ubernete" with the digit "8".

Unlike Docker, K8S was created by Google, a well-known industry giant.

However, K8S is not a completely new invention. Its predecessor is Borg, a system Google had been honing for more than ten years.

Google officially announced K8S and open-sourced it in June 2014.

In July of the same year, companies such as Microsoft, Red Hat, IBM, Docker, CoreOS, Mesosphere, and Saltstack joined K8S one after another.

In the following year, VMware, HP, Intel and other companies also joined in.

In July 2015, Google officially joined the OpenStack Foundation. At the same time, Kubernetes v1.0 was officially released.

As of this writing, Kubernetes has reached v1.13.

K8S's architecture is a little complicated; let's take a brief look.

A K8S system is usually called a K8S cluster (Cluster).

This cluster mainly consists of two parts:

  • A Master node (the control node)

  • A group of Node nodes (compute nodes)

It is clear at a glance: the Master node is mainly responsible for management and control, while Node nodes are the workload nodes that actually hold the containers.
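As a small, hedged sketch: the official Kubernetes Python client (`pip install kubernetes`) can ask the cluster's API Server which nodes exist, assuming a working kubeconfig points at a cluster. The exact labels and roles printed will vary by setup.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes access to a cluster)
config.load_kube_config()

v1 = client.CoreV1Api()

# The API Server on the Master answers this request;
# the response lists every node registered in the cluster
for node in v1.list_node().items:
    roles = [k for k in (node.metadata.labels or {}) if "node-role" in k]
    print(node.metadata.name, roles or ["<no role label>"])
```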

Take a closer look at these two nodes.

The first is the Master node.

The Master node runs the API Server, the Scheduler, the Controller Manager, and etcd.

The API Server is the external interface of the whole system, called by clients and by the other components; think of it as the "service counter".

The Scheduler is responsible for scheduling resources within the cluster, like a "dispatch room": it decides which Node each new Pod should run on.

The Controller Manager runs the controllers that keep the cluster in its desired state; it is the "general manager". And etcd is the cluster's key-value store, holding configuration and state.

Next, the Node node.

Node nodes run Docker, kubelet, kube-proxy, Fluentd, kube-dns (optional), and of course the Pods.

The Pod is the most basic unit of operation in Kubernetes. A Pod represents a process running in the cluster and encapsulates one or more closely related containers. Besides the Pod, K8S also has the concept of a Service: a Service can be seen as the external access point for a group of Pods that provide the same service (a small sketch follows after the component descriptions below).

Docker, needless to say, creates containers.

The kubelet is mainly responsible for the Pods assigned to its Node, including their creation, modification, monitoring, and deletion.

kube-proxy is mainly responsible for providing the network proxy that routes Service traffic to Pods.

Fluentd is mainly responsible for log collection, storage and query.
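Here is the small sketch mentioned above, again using the Kubernetes Python client and again only an illustration: it asks the API Server to create a single-container Pod in the `default` namespace. The Pod name and image are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A minimal Pod: one container built from one image ("nginx" is just an example)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod", labels={"app": "demo"}),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="web", image="nginx:latest")]
    ),
)

# The API Server records the Pod; the Scheduler picks a Node;
# the kubelet on that Node asks the container engine to start the container
v1.create_namespaced_pod(namespace="default", body=pod)
print("Pod created")
```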

A little confused? It's genuinely hard to explain in a few words; feel free to skip ahead.

That covers both Docker and K8S, but the article isn't over yet.

The next part is written for core network engineers, and indeed for all communications engineers.

From 1G decades ago, to 4G now, to 5G in the future, mobile communication has undergone earth-shaking changes, and so has the core network.

However, if you look closely at these changes, you will find that the core network has not changed in essence: it is still, at bottom, a lot of servers. Different core network elements are just different servers, different computing nodes.

What has changed is the shape and interface of these "servers": the shape has gone from a single board in a cabinet, to a blade in a cabinet, to an x86 general-purpose blade server; the interface has gone from trunk cable, to network cable, to optical fiber.

However it changes, it is still a server, a computing node, a CPU.

Since it is made of servers, it is bound to follow the same path of virtualization as IT cloud computing. After all, virtualization has too many advantages to ignore: the low cost, high utilization, flexibility, and dynamic scheduling mentioned above.

A few years ago everyone thought the virtual machine would be the final form of the core network; now containerization looks more likely. The NFV (Network Functions Virtualization) that is talked about so often these years may well turn into NFC (Network Functions Containerization).

Take VoLTE as an example: with the old 2G/3G approach, a large amount of dedicated hardware is required to act as the various network elements of the EPC and IMS.

Network elements related to VoLTE

With containers, it is likely that a single server can host more than a dozen containers, each running the service software of a different network element.

These containers can be created and destroyed at any time, and can be scaled up or down at will without taking the machine offline, dynamically balancing performance against power consumption.
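As an illustration of that kind of elasticity, here is a hedged sketch with the Kubernetes Python client: scaling a Deployment changes the number of running containers without stopping anything else. The Deployment name and namespace are placeholders, and this assumes the network-element software has already been packaged as such a Deployment.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def scale(deployment: str, namespace: str, replicas: int) -> None:
    """Ask the API Server to change the replica count; Pods are added or
    removed on the fly, with no downtime for the ones that stay."""
    body = {"spec": {"replicas": replicas}}
    apps.patch_namespaced_deployment_scale(deployment, namespace, body)

# Placeholder names: scale an imaginary IMS network-element deployment
scale("ims-cscf", "core-network", replicas=12)   # busy hour: scale out
scale("ims-cscf", "core-network", replicas=3)    # off-peak: scale in
```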

Simply perfect!

In the 5G era, the core network adopts a microservices architecture, which is a perfect match for containers: the monolithic architecture (Monolithic) becomes a microservices architecture (Microservices), like one all-rounder becoming N specialists. Each specialist lives in its own isolated container, giving maximum flexibility.

Fine division of labor

Following this trend, everything in the mobile communication system except the antenna may end up virtualized. The core network is the first, but it won't be the last. A virtualized core network belongs more to IT than to telecom: its functions become just ordinary software running in containers.

As for the core network engineers reading this: congratulations, you are about to make the switch!

Article source: https://my.oschina.net/jamesview/blog/2994112

Origin: blog.csdn.net/lsyou_2000/article/details/105115903