Docker and K8s

In 2010, a few young IT engineers founded a company called "dotCloud" in San Francisco, USA.

The company mainly provided PaaS-based cloud computing services, built specifically on container technology related to LXC.

LXC (Linux Containers) is an operating-system-level container virtualization technology for Linux.

Later, dotCloud simplified and standardized its own container technology and named it Docker.

After Docker was born, it attracted little attention from the industry, and dotCloud, a small startup, was struggling under fierce competition.

Just as they were about to give up, the idea of "open source" popped into their minds.

What is "open source"? Open source means opening up the source code: taking the source code of a previously internal, proprietary program and making it public, so that everyone can participate and contribute code and ideas.

Open Source

Some software is open source from the very beginning. Other software struggles to survive commercially, but its creators are unwilling to give it up, so they choose to open-source it. If you can't feed yourself, you can always eat "rice from a hundred households", that is, live off the community.

In March 2013, 28-year-old Solomon Hykes, one of the founders of dotCloud and the father of Docker, officially decided to open source the Docker project.

Solomon Hykes (who resigned from Docker in 2018)

Docker had kept a low profile; once open-sourced, it stunned the world.

More and more IT engineers discovered the advantages of Docker and flocked to the Docker open source community.

Docker's popularity rose at a jaw-dropping speed.

In the very month it was open-sourced, Docker 0.1 was released, and a new version followed every month after that. On June 9, 2014, Docker 1.0 was officially released.

By this time, Docker had become the hottest open source technology in the industry, bar none. Even giants like Google, Microsoft, Amazon, and VMware expressed their full support for it.

After Docker became popular, dotCloud simply renamed the company Docker Inc.

Why are Docker and container technology so popular? To put it bluntly, because they are "light".

Before container technology, the star of the industry was the virtual machine. Representative virtual machine technologies are VMware and OpenStack.

I believe many people have used virtual machines. A virtual machine is software you install on your operating system that simulates one or more "sub-computers".

A virtual machine is like a "sub-computer"

Inside a "sub-computer", you can run programs just as on a normal computer, for example, open QQ. If you want, you can create several "sub-computers", all running QQ. The "sub-computers" are isolated from one another and do not affect each other.

Virtual machines are a form of virtualization. Container technology such as Docker is also virtualization, but lightweight virtualization.

Although a virtual machine can isolate many "sub-computers", it takes up a lot of space and starts slowly, and virtual machine software may cost money (VMware, for example).

Container technology has none of these drawbacks. It does not need to virtualize an entire operating system, only a small-scale environment (similar to a "sandbox").

sandbox

A container starts fast, in just a few seconds. Its resource utilization is high (one host can run thousands of Docker containers at the same time). And its footprint is tiny: a virtual machine usually needs several GB to tens of GB of space, while a container needs only MBs or even KBs.

Comparison of containers and virtual machines

Because of this, container technology has been warmly welcomed and sought after, and has developed rapidly.

Let's take a look at Docker specifically.

Note that Docker itself is not a container. It is a tool for creating containers, an application container engine.

If you want to understand Docker, just look at its two slogans.

The first is "Build, Ship and Run".

In other words: build it, ship it, run it.
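As a concrete sketch of that workflow, the three steps map onto three Docker commands (the image name "myapp" and its tag are made-up examples, not anything from the original text):

```shell
# Build: turn a Dockerfile in the current directory into an image
# ("myapp" and the tag "1.0" are hypothetical names for illustration)
docker build -t myapp:1.0 .

# Ship: push the image to a registry so other machines can pull it
docker push myapp:1.0

# Run: start a container from the image on any Docker host
docker run -d --name myapp myapp:1.0
```

These commands require a running Docker daemon, so treat them as an illustrative transcript rather than a copy-paste script.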

For example:

I come to a vacant lot and want to build a house, so I haul stones, chop wood, draw blueprints, and after much work the house is finally built.

After living there for a while, I want to move to another vacant lot. By the old method, I can only haul stones, chop wood, draw blueprints, and build the house all over again.

However, an old witch came and taught me a kind of magic.

This magic can make a copy of the house I built, turn it into an "image", and put it in my backpack.

When I reach another vacant lot, I use this "image" to copy out a house, set it down, and move in, bag and baggage.

How about that? Amazing, isn't it?

So, Docker's second slogan is: "Build Once, Run Anywhere".

The three core concepts of Docker technology are:

  • Image

  • Container

  • Repository

In my example just now, the "image" in the backpack is the Docker image, my backpack is the Docker repository, and the house built on the vacant lot by magic is a Docker container.

To put it bluntly, a Docker image is a special file system. Besides the programs, libraries, resources, and configuration files needed by the container at runtime, it also contains some configuration parameters prepared for runtime (such as environment variables). An image contains no dynamic data, and its contents do not change after it is built.
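To make this concrete, an image is described by a Dockerfile, the recipe Docker builds it from. A minimal sketch (the file contents below are illustrative, not from the original article) might look like this:

```dockerfile
# Start from an official base image (one read-only layer)
FROM ubuntu:18.04

# Each instruction below adds another immutable layer
RUN apt-get update && apt-get install -y nginx

# Bake static configuration into the image
ENV NGINX_PORT=80
COPY nginx.conf /etc/nginx/nginx.conf

# No dynamic data lives in the image; its contents are fixed once built
CMD ["nginx", "-g", "daemon off;"]
```

Everything written here becomes part of the immutable image; anything a running container writes at runtime stays outside it.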

In other words, every time the house is copied, the house itself is identical; daily necessities are not included, and whoever moves in buys their own.

Each image can turn into one kind of house. And I can have multiple images!

Say I build a European-style villa and generate an image. A buddy of mine builds a Chinese courtyard house and generates an image too. Another builds an African thatched hut, which also generates an image, and so on.

This way, we can exchange images: you use mine, I use yours. Wouldn't that be great?

And so a large public repository came into being.

The Docker Registry service (like a warehouse keeper) is responsible for managing Docker images.

Not every image created by just anyone can be trusted. What if someone builds a problematic house?

Therefore, the Docker Registry service is very strict in the management of images.

The most commonly used public Registry service is the official Docker Hub, which is also the default Registry and hosts a large number of high-quality official images.
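For instance, pulling the official nginx image from Docker Hub and pushing it to a private Registry looks like this (the registry address "registry.example.com" is a made-up example):

```shell
# Pull an official image from Docker Hub, the default Registry
docker pull nginx:latest

# Re-tag the image for a private Registry and push it there
# ("registry.example.com" is a hypothetical address for illustration)
docker tag nginx:latest registry.example.com/demo/nginx:latest
docker push registry.example.com/demo/nginx:latest
```

Again, these commands assume a running Docker daemon and access to the registries involved.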

Okay, after talking about Docker, let's turn our attention to K8S.

While Docker container technology was in full swing, everyone found that applying Docker to real business was difficult: orchestration, management, and scheduling were all hard. People urgently needed a management system to manage Docker and containers in a more advanced and flexible way.

At this moment, K8S appeared.

K8S is a container-based cluster management platform. Its full name is Kubernetes.

The word Kubernetes comes from Greek, meaning helmsman or navigator. K8S is its abbreviation, with the eight letters "ubernete" replaced by "8".

Unlike Docker, the creator of K8S is a well-known industry giant: Google.

However, K8S is not a brand-new invention. Its predecessor is the Borg system, which Google had been working on for more than a decade.

K8S was officially announced and open sourced by Google in June 2014.

In July of the same year, Microsoft, Red Hat, IBM, Docker, CoreOS, Mesosphere, Saltstack and other companies joined the K8S project one after another.

Over the following year, VMware, HP, Intel and other companies also joined.

In July 2015, Google officially joined the OpenStack Foundation. At the same time, Kubernetes v1.0 was officially released.

As of this writing, Kubernetes has reached v1.13.

The architecture of K8S is a little complicated; let's take a quick look at it.

A K8S system is usually called a K8S cluster (Cluster) .

This cluster mainly consists of two parts:

  • A Master node (the control node)

  • A group of Node nodes (compute nodes)

It is clear at a glance: the Master node is mainly responsible for management and control, while the Node nodes are the workload nodes where the containers actually run.

Take a closer look at these two kinds of nodes.

The first is the Master node.

The Master node includes the API Server, Scheduler, Controller Manager, and etcd (the key-value store that holds the cluster's state).

The API Server is the external interface of the entire system, called by clients and the other components; it is like a "service counter".

The Scheduler is responsible for scheduling resources within the cluster, which is equivalent to a "dispatch room".

The Controller Manager runs the cluster's controllers; think of it as the "general manager".

Then there is the Node node .

Node nodes include Docker, kubelet, kube-proxy, Fluentd, kube-dns (optional), and Pods.

The Pod is the most basic operating unit of Kubernetes. A Pod represents a process running in the cluster, and it encapsulates one or more closely related containers. Besides Pods, K8S also has the concept of a Service: a Service can be seen as the external access interface for a group of Pods that provide the same service. If this paragraph is hard to follow, feel free to skip it.
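As a rough sketch of those two concepts (all names below are invented for illustration), a minimal Pod and a Service that selects it can be written in Kubernetes YAML like this:

```yaml
# A Pod wrapping a single nginx container
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.15
      ports:
        - containerPort: 80
---
# A Service acting as the access point for all Pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

The Service finds its Pods through the `app: web` label, so Pods can come and go while the access interface stays stable.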

Docker, needless to say, creates containers.

The kubelet is mainly responsible for managing the Pods assigned to its Node, including creating, modifying, monitoring, and deleting them.

Kube-proxy is mainly responsible for network proxying for Pod objects, routing Service traffic to them.

Fluentd is mainly responsible for log collection, storage and query.
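In day-to-day use, these components stay behind the scenes; you talk to the API Server through the kubectl command-line tool. A few basic inspection commands (the Pod name "web" is hypothetical):

```shell
# List the Master and Node machines in the cluster
kubectl get nodes

# List Pods and the Node each one was scheduled onto
kubectl get pods -o wide

# Show details of one Pod, including its containers and events
kubectl describe pod web
```

These assume kubectl is installed and configured to reach a running cluster.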

Feeling a bit dizzy? Alas, it is really hard to explain all this in a few words, so let's just move on.

Both Docker and K8S have been introduced, but the article is not over.

The next part is written for core network engineers, and indeed all communications engineers.

From 1G decades ago, to 4G today, and to 5G in the future, mobile communications have undergone earth-shaking changes, and so has the core network.

However, if you look closely at these changes, you will find that the so-called core network has not changed in essence: it is still nothing more than a pile of servers. The different core network elements are just different servers, different computing nodes.

What has changed is the form and interfaces of these "servers": in form, from rack-mounted boards to blades, and from blades to general-purpose x86 blade servers; in interfaces, from trunk cables to network cables, and from network cables to optical fiber.

However it changes, it is still servers, computing nodes, CPUs.

Since they are servers, they are bound to follow IT cloud computing down the path of virtualization. After all, virtualization has too many advantages: the low cost, high utilization, great flexibility, and dynamic scheduling mentioned earlier, among others.

A few years ago, everyone thought virtual machines were the final form of the core network. Now it looks more likely to be containers. NFV (Network Functions Virtualization), a term we have heard so often in recent years, may even have to be renamed NFC (Network Functions Containerization).

Take VoLTE as an example. Following the old 2G/3G approach, a large amount of dedicated equipment would be needed to serve as the different network elements of EPC and IMS.

VoLTE-related network elements

With containers, a single server will likely suffice: create a dozen or so containers and run the service software of the different network elements in different containers.

These containers can be created or destroyed at any time. They can also be scaled up or down, made stronger or weaker, at will and without downtime, dynamically balancing performance and power consumption.

Simply perfect!

In the 5G era, the core network adopts a microservice architecture, which is a perfect match for containers: the monolithic architecture becomes a microservices architecture, which is like turning one generalist into N specialists. Each specialist is assigned its own isolated container, which gives the greatest possible flexibility.

Refined division of labor

Following this trend, everything in a mobile communication system except the antennas is likely to be virtualized. The core network is the first, but it will not be the last. A virtualized core network should really be classified as IT rather than communications; the core network's functions become just ordinary software functions running in containers.

As for us core network engineers: congratulations, our career transformation is just around the corner!


Origin blog.csdn.net/qq_30264689/article/details/102817944