Container technology: Docker

The origins of container technology

Suppose your company is secretly developing the next "Today's Toutiao" app; let's call it "Tomorrow's Toutiao". The programmers build a development environment from scratch and start writing code. When the code is handed over for testing, the testers have to build the same environment from scratch all over again, and the programmers need not worry about any problems that turn up during testing, since "it runs fine in my environment".

When the testers finish and the app is finally ready to go live, the ops engineers have to build the same environment from scratch yet again. After much effort the environment is up and the launch begins, and then, unfortunately, the system crashes in production. At this point a programmer with strong nerves can put on an acting performance: "but it clearly runs in my environment".

Looking at the whole process, not only was the same environment built three times over, but programmers were forced into acting careers, squandering their dramatic talents. This is a classic waste of time and efficiency. Smart programmers are never content with the status quo, so once again it was time for programmers to change the world, and container technology was born.

Some readers may say: "Wait, no need to change the world just yet. We already have virtual machines, and VMware is easy to use. Why not set up the environment in a virtual machine and then clone it for the testers and the ops engineers?"

Before container technology existed, this was indeed a good approach, but it is not as good as it looks.

First, a bit of background. Virtual machine technology is the foundation of today's cloud computing: cloud vendors buy piles of hardware, build data centers, and then use virtualization to carve the hardware up, say into 100 virtual machines, which can then be sold to many different users.

You might be wondering: why is this not good enough?

Container technology vs. virtual machines

Compared with a simple application program, the operating system is a very heavy and clumsy piece of software. Just how clumsy is it?

An operating system needs a great deal of resources just to run, as anyone who has installed one knows. A freshly installed system, with nothing deployed on it yet, already occupies tens of gigabytes of disk and several gigabytes of memory.

Suppose I have a machine with 16 GB of memory and need to deploy three applications. With virtual machine technology, it can be divided up as follows:

Three virtual machines run on the machine, with one application deployed in each. VM1 occupies 2 GB of memory, VM2 occupies 1 GB, and VM3 occupies 4 GB.

The virtual machines themselves consume a total of 7 GB of memory, so there is no room left to carve out more virtual machines for more applications. Yet what we actually want to deploy and use are the applications, not the operating systems.

Wouldn't it be nice if there were a technology that let us avoid wasting memory on "useless" operating systems? That is the first problem: the operating system is simply too heavy.

There is a second problem: startup time. Rebooting an operating system is slow, because it has to probe and load everything from top to bottom, which routinely takes several minutes. The operating system, in other words, is also too sluggish.

So is there a technology that keeps the benefits of virtual machines while overcoming these shortcomings, giving us the best of both worlds?

The answer is yes: container technology.

What is a container

In English, the word "container" also means shipping container, and the shipping container is surely a remarkable invention in the history of commerce: it dramatically reduced the cost of ocean trade and transportation. Consider the benefits of shipping containers:

  • Containers are isolated from each other
  • They can be reused over and over
  • They are fast to load and unload
  • They have standard dimensions, fitting in any port and on any ship

Back to containers in software: the concept is strikingly similar to the shipping container.

A major goal of modern software development is isolation: applications should run independently of one another without mutual interference. Such isolation is not easy to achieve, and one solution is the virtual machine technology mentioned above: deploy each application in its own virtual machine.

But virtual machine technology has the shortcomings described above. How does container technology compare?

Unlike virtual machines, which achieve isolation at the level of the whole operating system, container technology isolates only the application's runtime environment, and containers can share the same operating system. The runtime environment here means the libraries and configuration the program depends on.

Containers are therefore more lightweight and use far fewer resources. Where an operating system routinely claims several gigabytes of memory, a container needs only a few megabytes, so on the same hardware we can deploy a number of containers that virtual machines cannot match. And unlike an operating system that takes minutes to boot, a container starts almost instantly. Container technology offers a much more efficient way to package a service stack. Very cool.

So how do we actually use containers? That is where docker comes in.

Note that containers are a general technology, and docker is just one implementation.

What is docker

Docker is an open-source project implemented in Go that lets us create and use containers conveniently. Docker packages a program together with all of its dependencies into a container, so the program behaves consistently in any environment. The dependencies travel inside the container the way cargo travels inside a shipping container, while the host operating system is the cargo ship or the port: the program's behavior depends only on the container, not on which ship or which port (that is, which operating system) the container is placed on.

We can see, then, that docker shields us from environmental differences: once your program is packaged into docker, it behaves the same no matter what environment it runs in. Programmers lose the chance to show off their acting with "but it runs in my environment", and we genuinely achieve "build once, run everywhere".

Another benefit of docker is rapid deployment, the most common scenario at Internet companies. One reason is that containers start very quickly; the other is that once the program in one container runs correctly, you can be confident it will run correctly no matter how many copies are deployed in production.

How to use docker

There are several concepts in docker:

  • dockerfile
  • image
  • container

You can simply think of an image as an "executable program", and of a container as the running process of that program.

Writing a program requires source code; likewise, "writing" an image requires a dockerfile. The dockerfile is the source code of the image, and docker is the "compiler".

So we only need to specify in the dockerfile which programs our application needs and which configurations it depends on, then hand the dockerfile to the "compiler" docker for "compilation"; that is the docker build command. The generated "executable program" is the image. We can then run the image, which is the docker run command, and once the image is running it becomes a docker container.
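For example, a minimal dockerfile might look like this (a sketch only: the Python base image, the file name app.py, and the flask dependency are illustrative assumptions, not details from this article):

    # Dockerfile: the "source code" of our image
    # Start from an existing base image
    FROM python:3.11-slim
    # Copy our program into the image
    WORKDIR /app
    COPY app.py .
    # Install the library the program depends on
    RUN pip install flask
    # What to run when a container starts from this image
    CMD ["python", "app.py"]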

The detailed usage will not be repeated here; the official docker documentation explains it thoroughly.

How docker works

Docker uses the common C/S architecture, that is, the client-server model. The docker client handles the various commands the user types, such as docker build and docker run, while the real work is done by the server, the docker daemon. It is worth noting that the docker client and the docker daemon can run on the same machine.
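You can see this split directly from the command line (assuming docker is installed):

    # Prints two sections: "Client", describing the CLI,
    # and "Server", describing the docker daemon
    docker version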

Next, we use a few commands to explain the workflow of docker:

1. docker build

When we finish writing the dockerfile and want docker to "compile" it, we use this command. The client receives the request and forwards it to the docker daemon, and the docker daemon then creates the "executable program", the image, from the dockerfile.
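A typical build looks like this (the image name my-app is an assumed example, matching the sketch above):

    # "Compile" the dockerfile in the current directory into
    # an image tagged my-app:1.0
    docker build -t my-app:1.0 .
    # List local images; the new image should appear
    docker images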

2. docker run

Once you have the "executable program", the image, you can run it with the docker run command. On receiving the command, the docker daemon finds the requested image, loads it into memory, and starts executing it. A running image is what we call a container.
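Continuing the assumed example:

    # Start a container from the image; -d runs it in the
    # background, -p maps container port 5000 to host port 5000
    docker run -d -p 5000:5000 my-app:1.0
    # List running containers (the "running processes" of images)
    docker ps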

3. docker pull

docker build and docker run are the two core commands; if you can use these two, you can basically use docker. The rest are supplements.

So what does docker pull mean?

As we said before, an image in docker is akin to an "executable program". Where do we download applications written by other people? From an app store, of course. Since an image is also an "executable program", is there a "docker image store"? The answer is yes: Docker Hub, docker's official "app store", where you can download images built by others instead of writing every dockerfile yourself.

A docker registry stores images of all kinds, and the public registry from which anyone can download images is Docker Hub. How do we download an image from Docker Hub? With the docker pull command.

The implementation of this command is simple: the user issues it through the docker client, the docker daemon receives it and sends a download request to the docker registry, and the image is stored locally after downloading, ready for us to use.
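For example, pulling a widely used public image (nginx is just an illustrative choice):

    # Download the official nginx image; docker defaults
    # to Docker Hub as the registry
    docker pull nginx:latest
    # The image is now local, so we can run it immediately,
    # mapping host port 8080 to nginx's port 80
    docker run -d -p 8080:80 nginx:latest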

Finally, let's take a look at the underlying implementation of docker.

The underlying implementation of docker

Docker is built on several features provided by the Linux kernel:

  • NameSpace
    In Linux, resources such as PIDs, IPC, and the network stack are global by default. The NameSpace mechanism is a resource-isolation scheme: under it these resources are no longer global but belong to a particular NameSpace, and resources in different NameSpaces do not interfere with one another. This makes each NameSpace look like an independent operating system. NameSpaces alone, however, are not enough.
  • Control groups
    Although NameSpace technology provides resource isolation, processes can still consume system resources, such as CPU, memory, disk, and network, without restraint. To control a container's access to resources, docker uses the control groups technology (cgroups). With cgroups we can limit how many system resources the processes in a container may consume, for example capping a container's memory usage or restricting which CPUs it may run on.

With these two technologies, containers really look like standalone operating systems.
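A rough way to see both mechanisms from a Linux shell (an illustrative sketch assuming util-linux and docker are installed, not how docker invokes them internally):

    # Namespaces: start a shell in fresh PID and mount
    # namespaces; inside it, the shell sees itself as PID 1
    sudo unshare --fork --pid --mount-proc bash

    # Cgroups, via docker's resource flags: cap the container
    # at 512 MB of memory and 1.5 CPUs
    docker run -it --memory=512m --cpus=1.5 ubuntu bash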

Summary

Docker is a very popular technology today, and many companies use it in production, but the underlying technologies docker relies on appeared long ago and have now been given new life in the form of docker, where they solve today's problems very well. I hope this article helps you understand docker, and likes are welcome~~

Reference link: "What is Docker" (bazyd)

Origin: blog.csdn.net/weixin_44330810/article/details/126479203