Docker container learning [1]

1. Container overview

1.1 What is a container

A container is a sandboxing technology: its main purpose is to run an application inside it, isolated from the outside world, while making the sandbox easy to move to other host machines. Essentially, a container is a special process whose resources, files, devices, state, and configuration are partitioned into an independent space through namespaces, control groups (cgroups), and chroot.

A popular analogy is a box containing an application together with the libraries and configuration the software needs to run. Developers can move this box to any machine without affecting how the software inside it runs.

1.2 Container principle

To isolate the container process from the outside world, the container's underlying layer relies mainly on three mechanisms: namespaces, control groups (cgroups), and chroot. Each is described below, together with a short illustrative code sketch.

Namespaces:

  1. PID Namespace: separates the process trees of different containers; processes in different PID namespaces can have the same PID.
  2. Mount Namespace: lets processes in different namespaces see different file system hierarchies, so the directory trees seen by processes in different namespaces are isolated from each other. In addition, /proc/mounts inside each container lists only the mount points of its own namespace.
  3. IPC Namespace: processes inside a container still interact through the usual Linux inter-process communication (IPC) mechanisms, including semaphores, message queues, and shared memory; the IPC namespace keeps these objects private to the container.
  4. Network Namespace: realizes network isolation; each network namespace has its own network devices, IP addresses, routing tables, and /proc/net directory, so each container's network stack is isolated from the others.
  5. UTS Namespace: UTS (UNIX Time-sharing System) namespaces give each container its own hostname and domain name, so on the network it appears as an independent node rather than as a process on the host.
  6. User Namespace: each container can have its own user and group IDs, so programs inside the container run as the container's own users rather than as users on the host.
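
As a rough illustration of the mechanism, here is a minimal Go sketch (Docker itself is written in Go) that starts a shell inside fresh UTS, PID, and mount namespaces. This is a toy example, not how Docker actually sets up a container: it is Linux-only, must run as root, and assumes /bin/sh exists.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// Start /bin/sh inside new UTS, PID, and mount namespaces.
// Linux-only; must be run as root.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // own hostname
			syscall.CLONE_NEWPID | // own PID numbering
			syscall.CLONE_NEWNS, // own mount table
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell, `echo $$` prints 1 (the shell is PID 1 in its new PID namespace), and changing the hostname with `hostname demo` is invisible to the host.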

Control groups (cgroups):
Cgroups is a mechanism provided by the Linux kernel for limiting, accounting for, and isolating the physical resources used by groups of processes. Namespace technology only changes what a process can see; it cannot actually limit resources. Cgroup technology is therefore needed to cap a container's resources and prevent one container from using up all of the host's resources and bringing down the other containers. Under Linux's /sys/fs/cgroup directory there are subdirectories such as cpu, memory, devices, and net_cls; by modifying the corresponding configuration files you can set the maximum amount of a physical resource that a given process ID may use.
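
To make that directory layout concrete, here is a hedged Go sketch that creates a cgroup named demo under the v1 memory controller, caps it at 100 MB, and moves the current process into it. The demo group name is made up for illustration, root privileges are required, and on cgroup-v2 hosts the paths differ (a single /sys/fs/cgroup/<group> directory with a memory.max file).

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	// Hypothetical cgroup under the v1 memory controller.
	dir := "/sys/fs/cgroup/memory/demo"
	if err := os.MkdirAll(dir, 0755); err != nil {
		panic(err)
	}

	// Cap every process in the group at 100 MB of memory.
	limit := []byte(strconv.Itoa(100 * 1024 * 1024))
	if err := os.WriteFile(filepath.Join(dir, "memory.limit_in_bytes"), limit, 0644); err != nil {
		panic(err)
	}

	// Enrol the current process; the kernel now enforces the limit
	// on it and on all of its future children.
	pid := []byte(strconv.Itoa(os.Getpid()))
	if err := os.WriteFile(filepath.Join(dir, "cgroup.procs"), pid, 0644); err != nil {
		panic(err)
	}
	fmt.Println("this process is now limited to 100 MB of memory")
}
```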

chroot (change root):
chroot changes the root directory that a program sees while it is running, so different containers can work under different virtual root directories and do not directly affect each other.
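
A minimal Go sketch of the same idea, assuming a hypothetical /tmp/rootfs directory that you have already populated with a minimal file tree (for example, an extracted BusyBox or Alpine root filesystem). Root privileges are required.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Make /tmp/rootfs the root directory for this process tree.
	if err := syscall.Chroot("/tmp/rootfs"); err != nil {
		panic(err)
	}
	// The working directory still points outside the new root; reset it.
	if err := os.Chdir("/"); err != nil {
		panic(err)
	}
	// The shell started here can only see files under /tmp/rootfs.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```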

1.3 Containers and virtual machines

A virtual machine usually contains an entire operating system together with its applications, and a real operating system runs inside it. In essence, virtual machines are separate operating systems installed on hardware virtualized by a hypervisor, whereas containers are just different processes running on the host. From the user's point of view, virtual machines are heavyweight, occupying a lot of physical resources and taking a long time to start; containers occupy fewer physical resources and start quickly. In return, virtual machines are more completely isolated, while container isolation is weaker.
[Figure: comparison of virtual machine and container architectures]

1.4 The Development History of Containers

[Figure: timeline of the development history of containers]

2. Docker

When people talk about containers, they mean Docker by default. Why not the others? The reason is simple: Docker is simple and convenient to use and solves the needs of most users, while other container runtimes suffer to varying degrees from inconvenient packaging and poor compatibility. In Docker's approach, not only the local application is packaged, but also the local environment (a part of the operating system), so the local environment and the server environment are completely consistent, achieving real "develop once, run anywhere".

Docker is an open source application container engine, written in the Go language and released under the Apache 2.0 open source license.

Docker lets developers package their applications and dependencies into a lightweight, portable container, which can then be distributed to any popular Linux machine; virtualization can also be achieved.

Containers use a full sandbox mechanism with no interfaces between one another (similar to iPhone apps), and, more importantly, the performance overhead of containers is extremely low.

Docker is a virtual-environment container: you can package your development environment, code, configuration files, and so on into the container, then publish and run it on any platform. For example, if you develop a website backend locally in Python, once development and testing are complete you can package Python 3 and its dependency packages, Flask and its various plug-ins, MySQL, Nginx, and so on into one container and deploy it to any environment you like.

Docker and nvidia-docker
Docker is the container runtime itself, while nvidia-docker is a plug-in that lets Docker containers use NVIDIA GPUs. For ordinary web applications, nvidia-docker is not required.

2.1 Advantages of Docker

Docker is an open platform for developing, delivering, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure the same way you manage your applications. By leveraging Docker's approach to rapidly delivering, testing, and deploying code, you can drastically reduce the delay between writing code and running it in production.

1. Deliver your applications quickly and consistently
Docker simplifies the development lifecycle by allowing developers to work in standardized environments, using local containers that provide your applications and services.

Containers are well suited for continuous integration and continuous delivery (CI/CD) workflows. Consider the following example scenario:

Your developers write code locally and share their work with colleagues using Docker containers.
They use Docker to push their applications into the test environment and perform automated or manual testing.
When developers find bugs, they can fix them in the development environment and redeploy them to the test environment for testing and validation.
After testing is complete, pushing the fix to production is as easy as pushing an updated image to production.

2. Responsive deployment and scaling
Docker is a container-based platform that allows highly portable workloads. Docker containers can run on a developer's local machine, on a physical or virtual machine in a data center, on a cloud service, or in a hybrid environment.

Docker's portability and lightweight nature also make it easy for you to manage workloads dynamically and scale up or tear down applications and services in real time as business needs dictate.

3. Run more workloads on the same hardware
Docker is lightweight and fast. It provides a viable, cost-effective, and efficient alternative to hypervisor-based virtual machines, so you can use more of your computing capacity to meet your business goals. Docker is ideal for high-density environments and for small and medium deployments where you need to do more with less.

2.2 Containers and images

In the Docker lifecycle, images and containers are the two most important concepts. An image is a file: a read-only template with an independent file system that contains the data needed to run a container, and from which new containers can be created. A container is a process created from an image, and the processes in the container depend on the files in the image. Unlike the image, the container is writable: the software and configuration inside it can be modified as needed and saved as a new image. If the new image is produced with import, it is a completely new image; if it is produced with commit, there is an inheritance relationship between the new image and the original image (see the sketch after the figure below).
[Figure: relationship between Docker images and containers]
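
As a hedged sketch of the commit path, the snippet below uses the Docker Engine Go SDK (github.com/docker/docker/client). The container name myapp-container and the image tag myapp:v2 are made-up examples, and the options struct is types.ContainerCommitOptions in older SDK releases but container.CommitOptions in newer ones, so adjust to your client version.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Commit a modified container as a new image. The new image
	// inherits the layers of the image the container was created from.
	resp, err := cli.ContainerCommit(ctx, "myapp-container", types.ContainerCommitOptions{
		Reference: "myapp:v2",
		Comment:   "save container changes as a new layer",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("new image ID:", resp.ID)
}
```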

2.3 Docker environment installation

Docker installation guide: https://blog.csdn.net/qq_38345468/article/details/110128659

3. Getting started with Docker

A common Docker workflow:

  1. Pull an image from a registry (usually Docker Hub).
  2. Run the image to get a container and perform whatever operations you need inside it (see the sketch below).
  3. Commit the container to turn it into a new image.
  4. Log in and push the local image to the registry, so that it can be pulled on other machines or servers to create containers and run the corresponding applications.
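
Below is a hedged Go sketch of the pull and run steps (steps 1 and 2) with the same Engine SDK. The alpine:latest image is just an example, and the option struct names (types.ImagePullOptions, types.ContainerStartOptions) were renamed in recent SDK releases, so this matches roughly the v20.x-v24.x client.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect to the local Docker daemon (honours DOCKER_HOST and friends).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// pull: download the image from Docker Hub.
	rc, err := cli.ImagePull(ctx, "docker.io/library/alpine:latest", types.ImagePullOptions{})
	if err != nil {
		panic(err)
	}
	io.Copy(os.Stdout, rc) // stream pull progress to the terminal
	rc.Close()

	// run: create a container from the image, then start it.
	created, err := cli.ContainerCreate(ctx,
		&container.Config{Image: "alpine:latest", Cmd: []string{"echo", "hello from a container"}},
		nil, nil, nil, "")
	if err != nil {
		panic(err)
	}
	if err := cli.ContainerStart(ctx, created.ID, types.ContainerStartOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("started container", created.ID)
	// Pushing an image back to the registry would use cli.ImagePush
	// with registry credentials.
}
```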


Appendix:

  1. Docker official documentation: https://docs.docker.com/
  2. Docker Hub: https://hub.docker.com/
