My first acquaintance with Docker (2)

Review:

In the last article, I shared what Docker is, briefly introducing its concepts to show what Docker actually does. Before learning something new, let's review:

Docker is an open-source application container engine that lets developers package an application together with its dependencies into a portable container and then publish it to any popular Linux or Windows machine; virtualization is also possible. Containers use a sandbox mechanism throughout and have no interfaces to one another. A complete Docker setup consists of the following parts: 1. the Docker client; 2. the Docker daemon; 3. Docker images; 4. Docker containers.

Docker uses a client-server (C/S) architecture and manages and creates Docker containers through a remote API. Containers are created from Docker images; the relationship between a container and an image is similar to that between an object and a class in object-oriented programming.
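
To make the class/object analogy concrete, here is a small sketch using the standard docker CLI (the image tag and container names are just illustrative):

    # The image is the "class"; each container is an instance of it
    docker pull ubuntu:20.04
    docker run -d --name web1 ubuntu:20.04 sleep infinity
    docker run -d --name web2 ubuntu:20.04 sleep infinity
    # Both running containers were instantiated from the same image
    docker ps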

Docker is not suitable for every application scenario. Docker can only containerize Linux-based services. Windows Azure can run Docker instances, but so far Windows services themselves cannot be containerized this way.

Docker image (Images)

A Docker image is a template used to create Docker containers, such as an Ubuntu system image.

Docker container (Container)

A container is an independently running application or group of applications; it is a runtime instance of an image.

Docker client (Client)

The Docker client talks to the Docker daemon through the command line or other tools that use the Docker SDK ( https://docs.docker.com/develop/sdk/ ).
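
You can see the two halves of this architecture from the CLI itself; a quick sketch (remote-host is a placeholder, and 2375 is only the conventional unencrypted daemon port):

    # Prints two sections: Client (the CLI) and Server (the daemon)
    docker version
    # The same client can manage a remote daemon over the API
    docker -H tcp://remote-host:2375 info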

Docker host (Host)

A physical or virtual machine used to run the Docker daemon and containers.

Docker Registry

A Docker registry is used to store images; you can think of it as the code repository in version control.

Docker Hub ( https://hub.docker.com ) provides a huge collection of images for use.

A Docker Registry can contain multiple repositories; each repository can contain multiple tags; and each tag corresponds to one image.

Usually a repository contains images of different versions of the same software, with tags corresponding to those versions. We can use the format <repository>:<tag> to specify exactly which version of the software we mean. If no tag is given, latest is used as the default tag.
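
For example, pulling by repository and tag, using the public ubuntu repository on Docker Hub:

    docker pull ubuntu:18.04   # explicit tag picks one version
    docker pull ubuntu         # no tag given, equivalent to ubuntu:latest
    docker images ubuntu       # local images from this repository, one line per tag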

Docker Machine

Docker Machine is a command-line tool that simplifies Docker installation. With a single command you can provision Docker hosts on platforms such as VirtualBox, DigitalOcean, and Microsoft Azure.
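
A minimal sketch of provisioning a local Docker host with the VirtualBox driver (the machine name "default" is arbitrary):

    docker-machine create --driver virtualbox default
    # Point the local docker client at the daemon inside the new VM
    eval "$(docker-machine env default)"
    docker ps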

The link to the previous article about docker is as follows: my first acquaintance with Docker

 

Learning the new:

 

1. Problems solved by Docker

The rapid development of cloud computing, big data, and mobile technology, combined with constantly changing business needs, means that enterprise architectures must be able to change at any time to suit the business and keep up with the pace of technology. These heavy burdens inevitably fall on enterprise developers: how to coordinate efficiently between teams, deliver products quickly, deploy applications quickly, and meet business needs are problems developers urgently need to solve. Docker happens to help developers solve exactly these problems.

To improve collaboration between developers and operations staff and speed up application delivery, more and more companies have adopted the concept of DevOps. In the traditional process, however, development, testing, and operations are three independently run teams; communication between them is poor, conflicts between development and operations arise from time to time, collaboration is inefficient, and product delivery is delayed, which hurts the business. Docker packages and delivers applications in containers, so applications can be shared between different teams and deployed in any environment from an image. This avoids collaboration problems between teams and makes Docker an important tool for achieving DevOps goals. Container-based delivery supports continuous development and iteration, greatly improving the speed of product development and delivery.
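
As a sketch of this "build once, run anywhere" flow, assuming a made-up Python app and registry namespace (app.py and myteam/myapp are illustrative):

    cat > Dockerfile <<'EOF'
    # Base layer shared by every environment
    FROM python:3.8-slim
    # The application itself
    COPY app.py /app/app.py
    CMD ["python", "/app/app.py"]
    EOF
    docker build -t myteam/myapp:1.0 .   # the dev team builds the image once
    docker push myteam/myapp:1.0         # shares it through a registry
    docker run -d myteam/myapp:1.0       # ops runs the identical image anywhere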

In addition, unlike a virtual machine, which virtualizes the underlying hardware through a hypervisor, Docker sits directly on the Linux kernel and isolates running Linux processes, so the loss of system performance is far lower than with a virtual machine, almost negligible. At the same time, Docker containers start and stop very efficiently and can support the horizontal scaling of large distributed systems, which is genuinely good news for enterprise development.

As Liu Yankai, chief expert of HP China's cloud computing integrated cloud technology, said: "Any technology develops and becomes popular because it can solve the problems that plague people." Docker is exactly such a technology. Yet although Docker's problem-solving ability is strong, there are still not many practical deployments in enterprises. So what is holding back Docker practice in the enterprise?

Although Docker is developing rapidly, the technology is not yet mature; there are still limitations in flexible storage support, network overhead, and compatibility. This is one of the main reasons Docker has not been widely adopted by enterprises. Another reason is whether the corporate culture fits the DevOps movement: only companies that embrace DevOps can maximize Docker's value. The last reason is security; Docker's isolation at the Linux layer needs to improve before enterprises will trust it further.

2. Application examples

Since Docker is so powerful, what can it actually do? Are there concrete examples? Yes! The Docker website lists its typical scenarios:

  • Automating the packaging and deployment of applications

  • Creation of lightweight, private PaaS environments

  • Automated testing and continuous integration/deployment

  • Deploying and scaling web apps, databases and backend services

Let's give an example (a "chestnut", as the Chinese pun goes):

Sandbox

The sandbox is probably the most basic idea of a container: a lightweight isolation mechanism with rapid construction and destruction and low resource consumption. Using Docker to simulate distributed software deployment and debugging in a developer's single-machine environment is fast and effective.
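
A quick sketch of the sandbox workflow (image and container names are illustrative):

    # A throwaway sandbox: starts in about a second, vanishes on exit
    docker run --rm -it ubuntu:20.04 bash
    # Simulate a small multi-node setup on one laptop
    docker run -d --name node1 redis
    docker run -d --name node2 redis
    # Tear the whole environment down instantly
    docker rm -f node1 node2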

At the same time, the version control, image mechanism, and remote image management that Docker provides can build a distributed development environment similar to git. You can see that Packer, used to build multi-platform images, and Vagrant, by the same author, have both tried this. I will introduce these two exquisite little tools from the same geek in a follow-up post.

Docker is not just an artifact in the hands of DevOps personnel, every developer should learn how to use Docker. 

PaaS

dotCloud, Heroku, and Cloud Foundry all try to isolate the runtime and services provided to users through containers: dotCloud uses Docker, Heroku uses LXC, and Cloud Foundry uses its own cgroup-based Warden. Providing PaaS on top of a lightweight isolation mechanism is common practice: PaaS gives users not an OS but runtime + services, so OS-level isolation is enough to hide the details from users. Many analyses of Docker mention "the PaaS cloud that can run any application", which simply shows, from the image perspective, that Docker can package a user's app and reuse standard service images by building images, rather than through the usual buildpack approach.

Having worked with Cloud Foundry and Docker, let me share my understanding of PaaS. The "platform" in PaaS has always been seen as a set of multi-language runtimes plus a set of common middleware; anything that provides these two things can be considered a PaaS that satisfies the need. However, PaaS places high demands on the applications that can be deployed on it:

  • The runtime environment should be simple: buildpacks try to solve this, but they are still not ideal

  • Use services as much as possible: common ones such as MySQL and Apache are understandable, but if you want to use a service like logging, users have to go through the PaaS platform itself, which makes maintenance difficult

  • Depend on the "platform" as much as possible: it is difficult to reproduce the actual target PaaS environment on a single machine, so development and testing cannot be separated from the "platform"

  • Lack of customization: middleware options are limited, and tuning and debugging are difficult.

In summary, applications deployed on a PaaS are almost impossible to migrate from an older platform onto it, and new applications find it hard to do deep parameter tuning. My personal view is that PaaS remains best suited to rapid prototyping and short-lived applications.

Docker, however, achieves control and management of the user's runtime environment from another angle (similar to IaaS plus orchestration tools). Built on the lightweight LXC mechanism, it is indeed an impressive attempt.

I also believe that IaaS plus flexible orchestration tools (with in-depth app-level management, such as BOSH) is the best way to deliver the user environment.

Docker-based PaaS has also begun to appear in China. On March 11, 2015, Skylark's Alauda cloud platform officially opened its internal beta, providing external PaaS services based on Docker.

 

 

Expansion

The previous article mentioned the term "cgroup". Cgroups implement resource quotas and accounting. They are very simple to use and expose a file-like interface: create a new group by creating a directory under the cgroup mount point (/cgroup on older systems, /sys/fs/cgroup on modern ones), then write a pid into the tasks file in that directory to place the process under the group's resource control. Specific resource options are configured through files named in the typical pattern {subsystem prefix}.{resource item}. For example, in the memory subsystem, memory.limit_in_bytes sets a memory limit for the group, while memory.usage_in_bytes reports its current usage. In addition, cgroup subsystems can be combined flexibly: a hierarchy can have multiple subsystems attached, and the same process can be placed in different groups under different subsystems.
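
A minimal sketch of this file interface, assuming a cgroup v1 memory hierarchy mounted at /sys/fs/cgroup/memory (paths vary by distribution, and root privileges are required):

    # Creating a directory creates a group
    mkdir /sys/fs/cgroup/memory/demo
    # Cap the group at 256 MB (268435456 bytes)
    echo 268435456 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
    # Move the current shell into the group by writing its pid to tasks
    echo $$ > /sys/fs/cgroup/memory/demo/tasks
    # usage_in_bytes is a read-only counter, not a limit
    cat /sys/fs/cgroup/memory/demo/memory.usage_in_bytes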

Since the previous article did not cover how cgroups configure specific resources, let me add that here:

cpu: in cgroups, CPU capacity cannot be defined the way hardware virtualization solutions define it; instead, you define the priority of CPU scheduling. Processes with a higher CPU priority are more likely to be scheduled.

By writing a value to cpu.shares you define the cgroup's CPU priority; note that this is a relative weight, not an absolute value. The cpu subsystem has other configurable items as well, which are described in detail in the manual.
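
For instance, a sketch of two groups weighted 2:1 (the default share is 1024, and the weights only matter when the CPU is contended):

    mkdir /sys/fs/cgroup/cpu/high /sys/fs/cgroup/cpu/low
    # "high" gets twice the CPU time of "low" under contention
    echo 2048 > /sys/fs/cgroup/cpu/high/cpu.shares
    echo 1024 > /sys/fs/cgroup/cpu/low/cpu.shares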

cpusets: the cpuset subsystem defines how many CPUs the group may use, or which specific CPUs it may use. In some scenarios, binding a group to a single CPU avoids cache thrashing between cores and thereby improves efficiency.

memory: memory-related limits.

blkio: statistics and limits related to block IO: byte and operation counts and limits (IOPS and the like), read/write speed limits, and so on. Note that the statistics here mainly cover synchronous IO.

net_cls, cpuacct, devices, freezer, and other subsystems can be managed in the same way.
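
Docker surfaces these subsystems as flags on docker run; a sketch in which the device path and all values are purely illustrative:

    # --cpu-shares       -> cpu subsystem (relative weight)
    # --cpuset-cpus      -> cpuset subsystem (pin to cores 0 and 1)
    # --memory           -> memory subsystem (hard limit)
    # --device-write-bps -> blkio subsystem (write throttling)
    docker run -d --cpu-shares 512 --cpuset-cpus 0,1 --memory 256m \
      --device-write-bps /dev/sda:10mb ubuntu:20.04 sleep infinity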

 

Limitations

Docker is essentially an additive system: an application is built from layers of the file system, with each component added on top of previously created ones, which is more flexible than managing a single monolithic file system. The layered architecture also brings an efficiency gain: when you rebuild a changed Docker image, you do not need to rebuild the entire image, only the layers that changed.
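
A small sketch of how layer caching rewards careful Dockerfile ordering (requirements.txt and the image tag are made-up names):

    cat > Dockerfile <<'EOF'
    FROM python:3.8-slim
    # requirements.txt changes rarely, so these layers stay cached
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt
    # source code changes often; only layers from here on are rebuilt
    COPY . /app/
    EOF
    # A second build after editing only source code skips the pip layer
    docker build -t myapp:dev .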

Perhaps more importantly, Docker is designed for elastic computing. Each Docker instance has a limited life cycle, and the number of instances grows or shrinks with demand. In a properly managed system these instances are born equal, and each dies off when it is no longer needed.

Given these constraints of the Docker environment, the following issues need to be considered before deploying Docker. First, Docker instances are stateless: they should not carry any transactional data, and all data should be stored in a database server.
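
One way to honor that statelessness, sketched with hypothetical names (DB_HOST, the appdata volume, and the image tag are all illustrative):

    # Keep state outside the container: external DB plus a named volume
    docker run -d --name app -e DB_HOST=db.internal.example \
      -v appdata:/var/lib/app myteam/myapp:1.0
    docker rm -f app     # the container dies...
    docker volume ls     # ...but the named volume "appdata" survives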

Second, building a Docker instance is not as simple as creating a virtual machine, adding applications, and cloning it. To create and use Docker infrastructure successfully, administrators need a comprehensive understanding of system management, including Linux administration, orchestration, and configuration tools such as Puppet, Chef, and Salt, all of which are command-line and script based.

 

That is all on Docker for today; the inn is closing for the day!

Xiao Er, close the door and go to sleep
