Getting Started with K8s from Scratch | Container Basic Concepts Explained

Author | Fu Wei, Senior Development Engineer at Alibaba

I. Containers and Images

What is a container?

Before introducing the concept of a container, let us briefly review how the operating system manages processes.

When we log in to an operating system, we can see a wide variety of processes with ps and similar commands; these include both application processes and the system's own service processes. What characteristics do these processes have in common?

  • First, the processes can see each other and communicate with each other;
  • Second, they share the same file system and can read and write the same files;
  • Third, they use the same system resources.

What problems do these three characteristics bring?

  • Because the processes can see each other and communicate, a process with higher privileges can attack other processes;
  • Because they share the same file system, two problems arise: first, since every process can create, read, update, or delete existing data, a process with high privileges may delete or corrupt data that other processes depend on and break their normal operation; second, the dependencies of different processes may conflict with each other, which puts a lot of pressure on operations and maintenance;
  • Because the processes share the same host resources, they may compete for them: when one application consumes a large amount of CPU and memory, it can squeeze out other applications and prevent them from providing service normally.

Given these three problems, how can we provide an isolated running environment for a process?

  • For the problems caused by processes sharing one file system, Linux and UNIX systems can use the chroot system call to change a process's root directory to a sub-directory, achieving file-system-level isolation: with chroot, a process has its own file system, and creating, reading, updating, or deleting files there does not affect other processes;
  • Because processes can still see and communicate with each other, Namespace technology is used to isolate the processes' view of resources. With the help of chroot and namespaces, a process can run in an isolated environment;
  • Even in such an isolated environment, the process still uses the same operating-system resources and may eat into the resources of the whole system. To reduce the impact processes have on each other, cgroups can be used to limit resource usage, for example the amount of CPU and memory a process may consume. A minimal sketch of these three mechanisms is shown after this list.
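The following is only a rough sketch of the three mechanisms, not a full container runtime; it requires root, assumes a cgroup v2 host, and the paths (for example /srv/rootfs and the cgroup name "demo") are placeholders:

```bash
# 1. File-system isolation: run a shell whose root is a prepared directory tree.
chroot /srv/rootfs /bin/sh

# 2. View isolation: give a process its own PID, mount, UTS, and network namespaces.
unshare --pid --mount --uts --net --fork /bin/sh

# 3. Resource limits: create a cgroup (v2) and cap its memory and CPU usage.
mkdir /sys/fs/cgroup/demo
echo "200M" > /sys/fs/cgroup/demo/memory.max        # at most ~200 MiB of memory
echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max   # at most half a CPU
echo $$ > /sys/fs/cgroup/demo/cgroup.procs          # move the current shell into the cgroup
```

Container engines combine these primitives (together with a few others) so that each container gets its own view of the system and a bounded share of its resources.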

So how should we define such a collection of processes?

In fact, that is what a container is: a collection of processes with an isolated view, limited resources, and an independent file system. "View isolation" means the container can only see part of the processes and has its own hostname and so on; resource limits mean that the amount of memory, the number of CPUs, and so on can be capped. A container is a set of processes that is isolated from the other resources of the system and has its own independent view of resources.

Because a container has an independent file system and uses the host system's resources, its file system does not need to contain kernel-related code or tools; it only needs to provide the container's own binaries, configuration files, and dependencies. As long as this collection of files required at runtime is available, the container can be started and run.

What is an image?

To summarize: the collection of all the files that a container needs at runtime is called an image.

So how is an image usually built? Normally we use a Dockerfile, because it provides very convenient syntactic sugar for describing each step of the build. Each build step performs an operation on the existing file system, which changes the file system's contents; we call these changes a changeset. Applying the changesets produced by the build steps, in order, to an empty folder yields a complete image.
 
The layering and reuse of changesets bring several advantages:

  • First, they improve distribution efficiency. For a large image, splitting it into layers improves distribution, because the layers can be downloaded in parallel;
  • Second, because the layers are shared, data that already exists locally does not need to be downloaded again. A simple example: the golang image is built on top of alpine, so when the alpine image has already been downloaded locally, pulling the golang image only downloads the parts that are not already in the local alpine image;
  • Third, because image data is shared, a lot of disk space can be saved. Suppose the local store holds both the alpine image and the golang image: without layer reuse, an alpine image of 5 MB and a golang image of 300 MB would take up 305 MB; with reuse, only 300 MB is needed. A rough illustration follows this list.
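As a rough illustration (the image tags below are only examples, not taken from the original article), pulling an alpine-based golang image after alpine reuses the shared layers, and docker image history lists the layers (changesets) an image is made of:

```bash
# Layers that already exist locally are skipped during the second pull
# (docker reports them as "Already exists").
docker pull alpine:3.18
docker pull golang:1.21-alpine3.18

# List the layers (changesets) that make up the image.
docker image history golang:1.21-alpine3.18
```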

How is an image built?

The Dockerfile shown below describes how to build a Go application.
[Figure: a sample Dockerfile for building a Go application]

As the figure shows:

  1. The FROM line indicates which base image the subsequent build steps start from; as mentioned earlier, images can be reused;
  2. The WORKDIR line indicates which directory the subsequent build steps are carried out in; it plays a role similar to cd in a shell;
  3. The COPY line copies files from the host into the container image;
  4. The RUN line executes the given command on the image's file system; once it has run, the application has been built;
  5. The CMD line specifies the default program to run when the image is used. A sketch of a Dockerfile with this structure follows.
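The exact Dockerfile from the figure is not reproduced here; the following is only a sketch of a Dockerfile with the same five steps (the directory layout and the binary name "hello" are placeholders), written with a shell heredoc so it can be pasted as-is:

```bash
cat > Dockerfile <<'EOF'
# Base image that the build steps start from
FROM golang:1.21
# Directory inside the image where the following steps run (like `cd` in a shell)
WORKDIR /go/src/app
# Copy the application sources from the host into the image
COPY . .
# Compile the application inside the image's file system
RUN go build -o /usr/local/bin/hello .
# Default program to run when a container is started from this image
CMD ["hello"]
EOF
```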

Once the Dockerfile is ready, the application image can be built with the docker build command. The result of the build is stored locally; in general, image builds are carried out on a packaging machine or some other isolated environment.

So how do these images reach the production or test environment? This requires an intermediary, a centralized store, which we call the docker registry, that is, the image repository; it stores all the generated image data. We only need to push the local image to the registry with docker push; the test or production environment can then pull the corresponding image and run it.
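A minimal sketch of this workflow (the registry host, repository name, and tag are placeholders):

```bash
# Build an image from the Dockerfile in the current directory and tag it.
docker build -t registry.example.com/demo/hello:v1 .

# Push the image to the registry so that other environments can pull it.
docker push registry.example.com/demo/hello:v1
```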

How is a container run?

Running a container generally takes three steps:

  • Step one: download the image from the corresponding registry;
  • Step two: once the image has been downloaded, docker images lists all local images, and we can pick the one we want from the list;
  • Step three: once the image has been chosen, docker run starts it and yields the container we want; of course, running it several times produces several containers. An image is like a template and a container is like a concrete running instance, which is why images are said to be "built once, run anywhere". A minimal sketch of these steps follows.
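A minimal sketch of the three steps (the image name and tag are placeholders):

```bash
# Step 1: pull the image from the registry.
docker pull registry.example.com/demo/hello:v1

# Step 2: list the images that are now available locally.
docker images

# Step 3: start a container from the chosen image; running it again yields another container.
docker run --name hello-1 registry.example.com/demo/hello:v1
```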

Summary

A brief review: a container is a set of processes isolated from the rest of the system, including its own processes, network resources, and file system; an image is the collection of all the files a container needs, and it has the "build once, run anywhere" characteristic.
 

II. The Container Lifecycle

The container runtime lifecycle

A container is a group of processes with isolation characteristics. When docker run is used, an image is chosen to provide an independent file system, and a program to run is specified. The specified program is called the initial process; when the initial process starts, the container starts, and when the initial process exits, the container exits as well.

The lifecycle of a container is therefore considered to be the same as the lifecycle of its initial process. Of course, a container does not contain only the initial process: the initial process may create child processes of its own, and operation-and-maintenance commands run via docker exec also create processes inside the container, all of which fall under the initial process's management. When the initial process exits, all of these child processes exit with it, which prevents resource leaks. A minimal sketch of this behaviour follows.
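A minimal sketch (the image and names are only examples):

```bash
# `sleep 300` becomes the container's initial process (PID 1 inside the container).
docker run -d --name demo alpine sleep 300

# Processes started via `docker exec` run inside the same container,
# alongside the initial process.
docker exec demo ps

# Terminating the initial process ends the container as well.
docker kill demo
```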
 
This approach also has a problem: applications are often stateful and may produce important data, and when a container exits and is deleted, that data is lost with it, which is unacceptable for such applications. The important data that a container produces therefore needs to be persisted. A container can persist data directly into a specified directory, and that directory is called a data volume.

Data volumes have some notable characteristics, the most significant being that the lifecycle of a data volume is independent of the lifecycle of the container: creating, running, stopping, or deleting the container has nothing to do with the data volume, because the volume is a special directory used to help the container persist data. In short, we mount a data volume into the container so that the container can write data into the corresponding directory, and exiting the container does not cause the data to be lost.

Data volumes are generally managed in two ways:

  • The first is a bind mount, in which a host directory is mounted directly into the container. This is simple, but it brings operational cost, because it depends on directories on the host, which have to be managed uniformly across all hosts.
  • The second is to let the container engine manage the directory (a named volume). A minimal sketch of both approaches follows this list.
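A minimal sketch of both approaches (all paths and names are placeholders):

```bash
# Bind mount: a directory on the host is mounted directly into the container.
docker run -v /data/on/host:/data alpine sh -c 'echo hello > /data/hello.txt'

# Engine-managed (named) volume: the engine creates and manages the directory.
docker volume create appdata
docker run -v appdata:/data alpine sh -c 'echo hello > /data/hello.txt'
```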

III. The Architecture of Container Projects

The moby container engine architecture

moby is currently the most popular container management engine. The moby daemon provides management of containers, images, networks, and volumes to the layers above it. The most important component the moby daemon depends on is containerd; containerd is a container runtime management engine that is independent of the moby daemon and provides container and image management to the layers above it.

Beneath containerd there is the containerd-shim module, which is similar to a daemon process. There are several reasons for this design:

  • First, containerd needs to manage the lifecycle of containers, and containers may be created by different container runtimes, so a flexible plug-in mechanism is needed. The shim is developed separately for each container runtime, so it can be detached from containerd and managed in the form of a plug-in.
  • Second, because the shim is implemented as a plug-in, it can be dynamically taken over by containerd. Without this ability, if the moby daemon or the containerd daemon exited unexpectedly, there would be nothing left to manage the containers; they would disappear and exit, which would affect the running applications.
  • Finally, moby or containerd may need to be upgraded at any time. Without the shim mechanism, in-place upgrades would be impossible, and upgrades could not be done without affecting the business. This is why containerd-shim is so important: it enables the ability to take over containers dynamically.

This section is only a general introduction to moby; it will be covered in detail in subsequent lessons.
 

IV. Containers vs. VMs

The difference between containers and VMs

A VM uses hypervisor virtualization technology to simulate hardware resources such as the CPU and memory, so that a guest OS can be set up on the host; this is what we usually call installing a virtual machine.

Each guest OS has its own independent kernel, such as Ubuntu, CentOS, or even Windows. Under such a guest OS, applications are independent of one another, so a VM provides better isolation. But this isolation comes at a cost: part of the computing resources has to be spent on virtualization, making it hard to fully utilize the existing computing resources; each guest OS also takes up a lot of disk space, for example a Windows installation needs 10-30 GB of disk space and Ubuntu needs 5-6 GB; and a VM starts very slowly. It is precisely because of these shortcomings of virtual machine technology that container technology was born.
 
Containers, by contrast, operate at the level of processes, so no guest OS is needed: only an independent file system that provides the collection of files the processes require. The isolation is at the process level, so startup is faster than a VM and the required disk space is smaller. Of course, process-level isolation is not that good; compared with a VM, a container's isolation is much weaker.

Overall, containers and VMs each have their advantages and disadvantages, and container technology is moving in the direction of stronger isolation.
 
Summary of This Article

  • A container is a set of processes with its own unique view of resources;
  • An image is the collection of all the files a container needs, and it has the "build once, run anywhere" characteristic;
  • The lifecycle of a container is the same as the lifecycle of its initial process;
  • Compared with VMs, containers have both advantages and disadvantages, and container technology is moving toward stronger isolation.


Source: www.cnblogs.com/alisystemsoftware/p/11514529.html