"Learn Docker Quickly" Creation and Management of Docker Images and Containers

Introduction

Docker images and containers are among the most popular technologies in cloud computing today. They provide a lightweight, flexible, and portable way to build, ship, and run applications. With Docker, developers can package an application and its dependencies into a self-contained, portable unit called a Docker image, which can then be deployed and run in any environment that supports Docker.

What is a Docker image?

A Docker image is a lightweight, standalone, executable software package that contains everything needed to run an application: code, runtime, libraries, system tools, and configuration files. Docker images are the basis of Docker containers; each Docker container is created from a Docker image.


  • Isolation and portability: Docker images use containerization technology to isolate applications from the underlying operating system environment, allowing applications to run in the same way on different hosts. The image contains all the components and configuration required for the application, so it can be easily deployed and migrated in different environments.

  • Layered structure: Docker images adopt a layered structure. Each layer is a read-only file system that contains the files and settings required by the application. This layered structure makes the construction and update of the image more efficient. Existing layers can be reused, and only the changed parts need to be built and transmitted, which greatly reduces the size of the image.

  • Version control and sharing: Docker images can be managed with version tags, and you can switch between or roll back to different image versions as needed. Images can be shared and distributed through image registries such as Docker Hub, so developers can easily obtain and reuse images created by others, speeding up application development and deployment.

  • Simplified deployment and scaling: Using Docker images simplifies the deployment and scaling process of applications. By defining an image that contains all dependencies and configurations, you can ensure consistent results across different environments. At the same time, the deployment and expansion of multiple containers can be easily managed through container orchestration tools (such as Docker Compose and Kubernetes).
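The layered structure described above is easiest to see in a Dockerfile: each instruction produces one read-only layer, and unchanged layers are reused from cache on rebuild. The following is a minimal, hypothetical sketch for a Python application (file names and base image are illustrative):

```dockerfile
# Base layer: pulled once and shared by every image built on it
FROM python:3.11-slim

# This layer changes only when requirements.txt changes, so
# dependency installation is cached across code-only rebuilds
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# Code layer: rebuilt on every source change, while the layers above are reused
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
```

After building, `docker history <image>` lists the individual layers and their sizes.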

Image acquisition and use

Image acquisition

  • Get it from Docker Hub: Docker Hub is a public Docker image repository that provides a large number of official and community-created images. You can obtain the image by running the docker pull command in the terminal, such as docker pull image_name:tag, where image_name is the image name and tag is the version label of the image.
  • Obtain from a private registry: If you have your own private image registry, you can obtain images by the method it provides, usually also with the docker pull command.
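A typical pull session might look like the following; the image names, tags, and registry host are illustrative, and running these commands requires a local Docker daemon:

```shell
# Pull the latest nginx image from Docker Hub (tag defaults to "latest")
docker pull nginx

# Pull a specific version by tag -- pinning a tag makes builds reproducible
docker pull nginx:1.25

# Pull from a private registry by prefixing the registry host
docker pull registry.example.com/myteam/myapp:1.0

# List the images now available locally
docker images
```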

Using images

  • Run the container: After obtaining the image, you can use the docker run command to create and run a container, such as docker run image_name:tag. This command will create a container instance locally and run the application based on the specified image.
  • Container management: Use the docker ps command to list currently running containers, docker stop to stop a running container, and docker start to restart a stopped container.
  • Container configuration: You can use the docker exec command to execute commands in a running container, such as docker exec container_id command, where container_id is the ID or name of the container and command is the command to be executed.
  • Container deletion: Use the docker rm command to delete the stopped container, such as docker rm container_id.
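Putting the management commands above together, a typical container lifecycle might look like this (the container name `web` is illustrative, and the commands assume a running Docker daemon):

```shell
# Create and start a container from the nginx image, in the background
docker run -d --name web nginx

# List running containers
docker ps

# Execute a command inside the running container
docker exec web nginx -v

# Stop the container, restart it, then stop it again
docker stop web
docker start web
docker stop web

# Remove the stopped container
docker rm web
```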

What is a Docker container?

A Docker container is one of Docker's core concepts. It is a running instance created from a Docker image. Containers provide an isolated runtime environment in which an application can run independently while still communicating with other containers and the host system.


  • Isolation: Docker containers use Linux kernel containerization technologies, such as namespaces and control groups (cgroups), to achieve isolation from the host system and from other containers. Each container has its own file system, process space, network interfaces, and other resources, so applications running in containers do not interfere with one another or affect the stability of the host system.

  • Lightweight: Compared with traditional virtual machines, Docker containers are more lightweight. Containers share the kernel of the host system and do not require additional operating system startup and resource overhead, so they start faster and occupy less system resources.

  • Portability: Docker containers are highly portable and can run in the same way on different hosts. Containers contain all the dependencies and configuration of an application so they can be easily deployed and migrated across development, test, and production environments.

  • Simplified management: Using Docker containers simplifies the application management and deployment process. Containers can be created and started through Docker images, and can be easily expanded, updated, and rolled back. At the same time, through container orchestration tools (such as Docker Compose and Kubernetes), the deployment and collaborative work of multiple containers can be managed.

Interaction between Docker container and host


  • Port mapping: Docker containers can map the ports inside the container to the ports on the host through port mapping, thereby realizing communication between the host and the container. For example, you can use the -p option of the docker run command to specify port mapping, such as docker run -p 8080:80 image_name, where 8080 is the port on the host and 80 is the port inside the container.

  • Shared data volumes: Docker containers can share files with the host through data volumes, enabling data exchange between the host and the container. For example, you can use the -v option of the docker run command to specify a volume, such as docker run -v /host/path:/container/path image_name, where /host/path is the path on the host and /container/path is the path inside the container.

  • Environment variables: Configuration information can be passed into a container through environment variables at startup. For example, you can use the -e option of the docker run command to set an environment variable, such as docker run -e "ENV_VAR=value" image_name, where ENV_VAR is the variable name and value is its value.

  • Docker API: Containers can also be managed programmatically through the Docker API, a RESTful API for managing containers, images, networks, and other resources over HTTP. For example, you can use a Docker client or a third-party tool that calls the Docker API to interact with containers.
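The first three interaction mechanisms above can be combined in a single docker run invocation. In this sketch the port numbers, host path, and variable name are illustrative, and a local Docker daemon is assumed:

```shell
# Map host port 8080 to container port 80, mount a host directory into
# the container, and pass a configuration value as an environment variable
docker run -d --name web \
  -p 8080:80 \
  -v /host/site:/usr/share/nginx/html \
  -e "ENV_VAR=value" \
  nginx

# Verify the port mapping from the host
curl http://localhost:8080
```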

Create an image based on Dockerfile

  1. Write a Dockerfile: A Dockerfile is a text file used to define the image building process. In the Dockerfile, you can specify the base image, install software packages, configure environment variables, copy files and other operations.

  2. Build the image: Use the docker build command to build the image. The command reads the Dockerfile in the current directory and builds the image according to its instructions. For example, docker build -t image_name . builds an image, where image_name is the image name and . is the build context (the current directory).

  3. Run the container: Use the docker run command to run a container. The command finds the specified image in the local image store and starts a container from it. For example, docker run -d --name container_name image_name starts a container, where the -d option runs it in the background and the --name option sets the container name.
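End to end, the steps above might look like this; the image and container names are illustrative, and the commands assume a Dockerfile in the current directory and a running Docker daemon:

```shell
# Step 2: build the image from the Dockerfile in the current directory
docker build -t my_app:1.0 .

# Step 3: run a container from the freshly built image, in the background
docker run -d --name my_app_container my_app:1.0

# Check that the container is running and inspect its output
docker ps
docker logs my_app_container
```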

When creating an image based on a Dockerfile, you can use the following instructions to define the image building process:

  1. FROM: Specifies the base image. For example, FROM ubuntu:latest builds on the latest Ubuntu image.

  2. RUN: Specifies a command to run while building the image. For example, RUN apt-get update && apt-get install -y nginx updates the package index and installs nginx.

  3. COPY/ADD: Copies local files into the image. For example, COPY app.py /app/ copies the local app.py file into the /app/ directory of the image.

  4. ENV: Sets an environment variable. For example, ENV PORT=80 sets the environment variable PORT to 80.

  5. WORKDIR: Sets the working directory. For example, WORKDIR /app sets the working directory to /app.

  6. CMD: Specifies the command executed when the container starts. For example, CMD ["python", "app.py"] runs python app.py at container startup.
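Combining all six instructions, a complete Dockerfile for a hypothetical Python web application might look like this (the file name app.py and the flask dependency are assumptions for illustration):

```dockerfile
# 1. FROM: start from an official Python base image
FROM python:3.11-slim

# 4. ENV: make the listening port configurable
ENV PORT=80

# 5. WORKDIR: subsequent instructions run relative to /app
WORKDIR /app

# 3. COPY: copy the application code into the image
COPY app.py /app/

# 2. RUN: install the web framework at build time
RUN pip install --no-cache-dir flask

# 6. CMD: the command executed when a container starts
CMD ["python", "app.py"]
```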

Create a container based on an image

  1. Use the docker run command: This is the most common way to create a container. You can use the docker run command to specify the image name to use and other options, such as port mapping, environment variable settings, etc. For example, the following command will create a container based on the ubuntu image:
docker run -it ubuntu
  2. Use Docker Compose: Docker Compose is a tool for defining and running multi-container applications. You describe the application's services and configuration in a YAML file, then create and manage the containers with the docker-compose up command. Docker Compose makes it easy to define dependencies between containers, network configuration, and more. For example, here is a minimal Docker Compose file:

version: '3'
services:
  web:
    image: nginx
  3. Use container orchestration tools (such as Kubernetes): For large-scale containerized applications, orchestration tools are often used to manage and coordinate many containers. Kubernetes is currently one of the most popular orchestration tools, providing rich functionality for creating, scheduling, scaling, and monitoring containers. By defining resource objects such as Pods, Deployments, and Services, you can create and manage containers in a Kubernetes cluster. For example, here is a simple Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
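Assuming the Deployment above is saved as nginx-deployment.yaml (the file name is an assumption) and a cluster is reachable via kubectl, it can be applied and inspected like this:

```shell
# Create (or update) the Deployment in the cluster
kubectl apply -f nginx-deployment.yaml

# Watch the three replicas come up
kubectl get pods -l app=nginx

# Scale to five replicas without editing the file
kubectl scale deployment nginx-deployment --replicas=5
```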

Summary

With a deep understanding of the creation and management of Docker images and containers, you will be better able to leverage Docker technology to accelerate application development, deployment, and operation. Whether you are quickly building applications in a development environment or achieving efficient container deployment in a production environment, Docker images and containers will become your indispensable tools.

Let’s start exploring the wonderful world of Docker images and containers!


Origin blog.csdn.net/wml_JavaKill/article/details/134011869