A Full Analysis of Docker Basics

Docker is an open source containerization platform that lets developers build, package, run, and publish applications in containers, enabling fast deployment and portability. Docker packages an application and its dependencies into a lightweight, portable container that runs the same way on any platform, unaffected by environment changes, which makes deploying applications across different environments both convenient and efficient. Docker also provides a set of companion tools and services, such as Docker Compose and Docker Swarm, that further support the development, testing, and deployment of containerized applications.

1. Basic Concepts of Docker

Docker is an open source project based on container technology. It packages an application and its dependencies into a container that can then run anywhere, without worrying about environment or configuration differences. Docker's basic concepts include the Docker image, the Docker container, the Docker registry, and the Dockerfile.

  1. A Docker image is a read-only file system that contains everything needed to run an application: an operating system, software packages, application code, configuration files, and more. At the core of a Docker image is a layered storage structure; each image layer is read-only and can be reused. This means that when building an image, only new layers need to be added, with no need to re-copy existing layers, which greatly reduces image size and build time.
  2. A Docker container is a running instance of a Docker image: an independent, lightweight application runtime environment. Each container has its own file system, network configuration, and process space, which isolates applications and their dependencies so that different applications do not interfere with each other. Containers can be created, started, stopped, and deleted quickly, are highly portable and flexible, and can run locally, in the cloud, or on edge devices (see the short example after this list).
  3. A Docker registry is a place to collect and manage Docker images, similar in concept to a code repository. Registries are divided into public and private ones. Public registries include Docker Hub and various third-party registries, from which images can be obtained, shared, and downloaded for free. A private registry is an image repository built inside an enterprise, used to manage, share, and distribute internally used images.
  4. A Dockerfile is a script file used to build Docker images automatically. It contains the series of instructions and parameters needed for the build, such as specifying a base image, adding dependencies, setting environment variables, and running scripts.
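As a quick illustration of images, layers, and containers, the following commands pull an image, show its read-only layers, and start a container from it (a minimal sketch; the nginx image and port numbers are just examples):

$ docker pull nginx:latest
$ docker history nginx:latest                        # each row is one read-only image layer
$ docker run -d --name web -p 8080:80 nginx:latest   # a container is a running instance of the image
$ docker ps                                          # list running containers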

2. Docker Architecture Diagram

A typical Docker architecture diagram includes the following components:

  • Docker Client: a command-line tool or UI for interacting with the Docker Daemon; it communicates with the daemon through a REST API or a UNIX socket.
  • Docker Daemon: the core component of Docker, a daemon process responsible for managing local Docker images, containers, networks, and other resources, and for communicating with remote registries such as Docker Hub.
  • Docker Registry: a repository that stores Docker images, including Docker Hub, the public registry officially provided by Docker, as well as private registries built by users themselves.
  • Docker Image: a read-only template containing application code, the runtime environment, system tools, system libraries, and so on.
  • Docker Container: an instance created from a Docker image; it adds a readable and writable layer on top of the image's read-only layers and contains the application, runtime environment, system tools, and system libraries.
  • Docker Network: provides network communication capabilities for containers; through networks, containers can connect to other containers, the host, external services, and so on.
  • Docker Volume: provides data persistence for containers; through volumes, containers can save data to local disks, cloud storage, and so on.
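Because the client talks to the daemon over a REST API, you can exercise that API directly. A minimal sketch, assuming the default UNIX socket path on Linux:

$ curl --unix-socket /var/run/docker.sock http://localhost/version          # engine version info
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json  # same data as 'docker ps'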

3. Docker Components

(1) Docker Engine

The Docker engine is the core component of Docker and is used to build and run Docker containers. It contains a daemon that creates and manages Docker containers on the host machine. Docker Engine uses a client-server model for interaction.

(2) Docker Daemon

The Docker daemon is a background process responsible for creating, starting, stopping, and deleting Docker containers, and it exposes a set of RESTful APIs for clients to call.
The daemon manages containers by interacting with the Linux kernel's APIs. Its core technology is Linux containers, a lightweight virtualization mechanism provided by the Linux kernel that isolates applications and their runtime environments. This achieves virtual-machine-like isolation at a much lighter weight, so many containers can run on the same physical machine.

The working principle of the Docker daemon can be briefly summarized as the following steps:

  • The Docker client sends instructions to the daemon through the Docker API, such as creating, starting, or stopping a container.
  • The daemon creates or manages containers according to those instructions: it isolates applications and their runtime environments using Linux container technology, limits the resources a container may use via cgroups, and isolates containers from each other via namespaces.
  • The daemon writes each container's standard output and standard error to the container's log, making it easy to inspect the container's status and output.
  • The daemon reports the container's status and output back to the Docker client through the Docker API.
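The cgroup limits and log handling mentioned above are visible directly from the CLI. A small sketch (the resource values are arbitrary examples):

$ docker run -d --name limited --memory=256m --cpus=0.5 nginx:latest   # cgroup-enforced limits
$ docker stats --no-stream limited   # current CPU/memory usage against those limits
$ docker logs limited                # stdout/stderr captured by the daemon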

(3) Docker Client

The Docker client is a command-line tool used to communicate with the Docker daemon and send it commands to manage containers. Through the client, we can create, run, stop, and delete containers on local or remote machines and view container status, logs, and so on. The client also supports building, uploading, and downloading Docker images.
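Because the client and daemon are separate, the same CLI can drive a remote daemon. A sketch (the remote address is a hypothetical example, and exposing the daemon over plain TCP should only be done on trusted networks):

$ export DOCKER_HOST=tcp://192.168.1.10:2375   # point the client at a remote daemon
$ docker ps                                    # now lists containers on the remote host
$ unset DOCKER_HOST                            # back to the local daemon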

(4) Docker Hub

Docker Hub is the public registry officially provided by Docker for storing and sharing Docker images, and it is an important part of the Docker ecosystem. It contains a large number of official and community-created images, which users can search, download, and share. As a cloud service, it acts as a central repository from which developers upload and download images. Docker Hub offers two kinds of repositories, public and private: public repositories can be created and used for free, while private repositories generally require a paid plan. With Docker Hub, developers can easily share and manage images globally, improving application portability and deployment efficiency.
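Typical interactions with Docker Hub from the CLI (a minimal sketch; the repository name in the last line is hypothetical):

$ docker search nginx          # search Docker Hub for images
$ docker pull nginx:latest     # download an image
$ docker login                 # authenticate with a Docker Hub account before pushing
$ docker push <your-username>/my-image:latest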

What is a private registry for?
Although Docker Hub, as a public image registry, already hosts many commonly used images, an enterprise or some specific scenarios may require private images, and a private registry is needed to store and manage them. Private registries provide a more secure environment: users can customize access permissions and control image versions to meet the organization's specific needs. In addition, enterprises can keep images containing commercially confidential material in a private registry instead of uploading them to a public one.

How do you build a private registry?

The steps to build one with the official Docker Registry software are as follows:

  • Install Docker Registry. You can start a Registry container from the official image provided by Docker:
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2

This command pulls the Registry image from Docker Hub and starts a container named "registry", mapping port 5000 on the host to port 5000 in the container.
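To check that the registry is up, you can query its HTTP API (a sketch assuming it runs on localhost; a freshly started registry returns an empty repository list):

$ docker ps --filter name=registry
$ curl http://localhost:5000/v2/_catalog   # returns {"repositories":[]} when empty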

  • Configure the Docker host. To use the private registry, you need to add its address to the Docker configuration, by modifying the daemon configuration file /etc/docker/daemon.json:
{
  "insecure-registries": ["my-registry.com:5000"]
}

The insecure-registries field in this configuration file specifies the addresses of private registries that Docker should trust even though they are not served over verified HTTPS.
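Changes to daemon.json only take effect after the daemon restarts. A sketch for a systemd-based Linux host:

$ sudo systemctl restart docker
$ docker info | grep -A 1 "Insecure Registries"   # confirm the setting was picked up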

  • Push and pull images. To push an image (already tagged with the private registry's address) to the private registry, use the docker push command, for example:
$ docker push my-registry.com:5000/my-image:latest
  • To pull an image from the private registry, use the docker pull command, for example:
$ docker pull my-registry.com:5000/my-image:latest

In this way, you can use a private registry to manage your own images.

How can I make Docker pull images from my private registry first?
When using Docker, you can change where images are pulled from by default by modifying the Docker configuration file.

The specific steps are as follows:
Edit the configuration file /etc/docker/daemon.json (on Linux), or, in Docker Desktop, open "Settings" -> "Docker Engine" and edit the JSON there. Add the following to the daemon.json file:

{
  "registry-mirrors": ["https://<your-registry-mirror>"]
}

Here, <your-registry-mirror> is the address of your private registry mirror. Save the daemon.json file and restart the Docker service. After this configuration, every time the Docker client pulls an image it will try your private mirror first, and fall back to the public registry if the image is not found there.
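A sketch of applying and verifying the change (systemd-based Linux assumed):

$ sudo systemctl restart docker
$ docker info | grep -A 1 "Registry Mirrors"   # lists the configured mirrors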

How do we copy an image from the public registry into our own private registry?

You can use docker pull to pull the image from the public registry, then docker tag to re-tag it with the private registry address, and finally docker push to push it to the private registry.

Specific steps:

  • Pull the image from the public registry:
docker pull <public-image-name>:<tag>
  • Re-tag the image for the private registry:
docker tag <public-image-name>:<tag> <private-registry-address>/<image-name>:<tag>
  • Push the image to the private registry:
docker push <private-registry-address>/<image-name>:<tag>
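A concrete sketch using the registry address from earlier (my-registry.com:5000 is a hypothetical address):

$ docker pull nginx:latest
$ docker tag nginx:latest my-registry.com:5000/nginx:latest
$ docker push my-registry.com:5000/nginx:latest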

Can commonly used images from the public registry be pulled in batches?
Yes: commonly used public images can be pulled in batches by writing a Docker Compose file. The steps are as follows:
Create a new directory locally and create a file named docker-compose.yml in it. Edit docker-compose.yml and add the images to pull, along with any other configuration.

For example:

version: '3.7'
services:
  nginx:
    image: nginx:latest
  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: example_db
    ports:
      - "3306:3306"
  redis:
    image: redis:latest

In this example, we pull the latest nginx, mysql, and redis images, and map the mysql container's port 3306 to port 3306 on the host. Run the docker-compose pull command to pull these images; once it succeeds, the images are downloaded into the local Docker engine and can be listed with the docker images command, as shown below.
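For example:

$ docker-compose pull   # downloads all images declared in docker-compose.yml
$ docker images         # verify they are now available locally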

Note : This method is only suitable for pulling existing images in batches. If you need to customize the image, you also need to use Dockerfile to build the image.

(5) Docker Compose

Docker Compose is a tool for defining and running multiple Docker containers. It can define multiple services through a single configuration file, and those services can be started, stopped, and managed as a whole. This makes it easy to manage multiple related containers and to deploy an application across different environments, such as development, testing, and production.
With Docker Compose, you use one YAML file to define multiple containers, the dependencies between them, container networks, data volumes, and so on. These containers can then be started, stopped, restarted, and managed with a single command.
Docker Compose greatly simplifies the management and deployment of Docker containers; especially in complex applications made up of many containers, it makes managing them easier and more efficient.

Example usage scenario:

Suppose I have a gateway service (port 73), an admin service (port 7001), a sap service (port 7002), a prodplan service (port 7003), and a basicdata service (port 7004). Without Docker Compose, I would build images and create containers for each of them separately and then docker run each one. So what do I gain by using Docker Compose?

Using Docker Compose brings the following benefits:

  • Simplified management and maintenance: with Docker Compose, all the containers belonging to your microservices are defined in one file, so they can all be started, stopped, restarted, or deleted at once instead of one by one.
  • Dependency management: different microservices often depend on each other, for example one service needing another service's database. Docker Compose can manage such relationships automatically once you declare them (see the depends_on sketch after this list), simplifying management and deployment.
  • Simplified development environment setup: Docker Compose helps developers quickly build a local environment that resembles production, enabling fast iteration and a shorter development cycle.
  • Repeatability: Docker Compose ensures that container startup order and configuration are the same across environments, increasing the reproducibility of the application.
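A sketch of declaring a dependency in docker-compose.yml (the service names and images are illustrative):

version: '3'
services:
  app:
    image: my-app:latest
    depends_on:
      - db              # Compose starts db before app
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example

Note that depends_on controls startup order only; it does not wait for the database to actually be ready to accept connections.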

How do I write the YAML file? Do I still need to write a Dockerfile for each service?
Yes, you still need a Dockerfile for each service to define how its Docker image is built. Then, in the Docker Compose file, you use the build field to specify which Dockerfile each service builds its image from.

Here's an example, assuming you have the following directory structure:

myapp/
├── docker-compose.yml
├── gateway/
│   ├── Dockerfile
│   └── ...
├── admin/
│   ├── Dockerfile
│   └── ...
├── sap/
│   ├── Dockerfile
│   └── ...
├── prodplan/
│   ├── Dockerfile
│   └── ...
└── basicdata/
    ├── Dockerfile
    └── ...

Then, you can write the following docker-compose.yml file:

version: '3'
services:
  gateway:
    build: ./gateway
    ports:
      - "73:73"
  admin:
    build: ./admin
    ports:
      - "7001:7001"
  sap:
    build: ./sap
    ports:
      - "7002:7002"
  prodplan:
    build: ./prodplan
    ports:
      - "7003:7003"
  basicdata:
    build: ./basicdata
    ports:
      - "7004:7004"

This will tell Docker Compose to build an image of each service, using the specified Dockerfile, and then run the container in each service, mapping the service's port to the host's port.
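To build the images and start everything in one step (a minimal sketch):

$ docker-compose up -d --build   # build all five images, then start the containers in the background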

After the JAR packages have been built, do I still need to build a Docker image for each service?
Yes: a corresponding Docker image must be built for each service so it can be referenced in the Docker Compose file. In general, you write a Dockerfile for each service to build its image. In the Dockerfile you specify the base image, copy application files, set the working directory, install dependencies, expose ports, and so on. Once the Dockerfile is written, you can build the image with the docker build command.
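As a rough sketch, a Dockerfile for one of these services (say the admin service on port 7001) might look like the following; the base image and jar path are assumptions, not part of the original project:

# Dockerfile (hypothetical admin service)
FROM eclipse-temurin:17-jre       # assumed Java runtime base image
WORKDIR /app
COPY target/admin.jar app.jar     # assumed path of the packaged jar
EXPOSE 7001
ENTRYPOINT ["java", "-jar", "app.jar"]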

Then build the image, for example:

docker build -t your-image-name:tag your-dockerfile-directory

Here, the -t flag specifies the image name and tag: your-image-name is the image name, tag is the image tag, and your-dockerfile-directory is the directory containing the Dockerfile.
Once the images are built, you can start the services with the docker-compose command. When you run docker-compose up, Docker Compose automatically creates a network between the services, starts the corresponding containers, and brings the services up in order according to the dependencies defined in the Compose file.

Do I still need the docker run command to start the services?
No. Docker Compose manages multiple containers in a unified way: you only define the containers to start in the Compose file and then run docker-compose up to start them all. There is no need to run docker run manually for each container.
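The usual lifecycle commands, for reference:

$ docker-compose up -d     # start all services in the background
$ docker-compose ps        # show the status of the services
$ docker-compose logs -f   # follow the combined logs
$ docker-compose down      # stop and remove the containers and the default network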

How do I install docker-compose?
docker-compose can be installed in several ways; the simplest is to install it with pip.

The specific steps are as follows:
First confirm that Docker itself is installed; if not, refer to the official Docker documentation. Then install pip if it is not already on the system, and use it to install docker-compose:

$ sudo apt-get update
$ sudo apt-get install -y python3-pip
$ sudo pip3 install docker-compose

Note: use the pip3 command to ensure that the Python 3 version of docker-compose is installed. Then verify the installation:

$ docker-compose --version

If the installation is successful, the version number of docker-compose will be displayed.

(6) Docker Registry

Docker Registry is the software behind private Docker image registries, used to store and share private images. Users can deploy Docker Registry locally to share Docker images within an enterprise, as in the private-registry walkthrough above.

What is an image?
In Docker, an image is a lightweight, executable package that contains everything needed to run an application: code, runtime, libraries, environment variables, and configuration files. Images are the basis for running Docker containers; think of an image as a read-only template from which containers are created. Using Docker images, you can quickly build, deploy, and run applications while keeping the application environment consistent.

What is the difference between an image and an ordinary JAR package?
First, a JAR is a packaging format for Java applications, while an image is a file-system bundle that includes everything needed to run the application: the operating system layer, the application itself, and all dependent libraries. So an image contains not just the application but also its operating system userland and dependencies.
Second, a JAR targets a specific application and runtime environment, while an image can run in any environment that supports the Docker runtime, because it already carries all required components and libraries. This lets images run across different operating systems and hardware architectures (provided matching image variants exist).
Finally, a JAR usually contains only the application's code and resources, while an image bundles the application, OS layer, and runtime environment together, making it easier to deploy and manage.
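One way to see the difference in practice: an image is a complete, portable filesystem bundle that can be exported and re-imported as a tar archive (a small sketch):

$ docker save nginx:latest -o nginx.tar   # export the full image, all layers included
$ docker load -i nginx.tar                # import it on another machine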

4. Summary

That is a full analysis of the basics of Docker. We learned what Docker is and its core concepts, including Docker images, Docker containers, Dockerfiles, and Docker Compose, and covered how to use Docker to manage applications and build containerized ones. Sharing in this question-and-answer style taught me a lot: before asking the questions I always felt I knew everything, but once I actually practiced, I found there were still many problems I did not understand well.

Origin blog.csdn.net/whc888666/article/details/130426642