Docker Quick Start: An Introductory Tutorial

Table of contents

1. Basic concepts of Docker

2. How to package and run an application?

3. How to modify the application in docker?

4. How to share the created image?

5. How to use named volumes to store the data in the container? // data mount

6. Another mounting method: directory mounting

7. Enabling communication between containers

8. Use Docker Compose to simplify working with multiple containers


1. Basic concepts of Docker

What is a container?

        The official explanation: a Docker container is a sandboxed process on a machine, isolated from all other processes on the host. In other words, a container is just an isolated process in the operating system; so-called containerization is essentially the operating system's process-isolation features dressed up in convenient tooling.

What is a container image?

        An image is the filesystem and configuration a container depends on to run; multiple containers can be created from the same image.

2. How to package and run an application?

(1) Obtain the program source file

        Preparation: install git, then use it to clone the remote code to the local machine. The following project is Docker's official example:

git clone https://github.com/docker/getting-started.git

(2) Create a container image

        To create a container image, a Dockerfile is required. A Dockerfile is just a text file with no file extension; it contains the script of instructions that Docker uses to build the container image.

        Create the Dockerfile: create an empty file named Dockerfile in the app directory of the downloaded getting-started project (note that the file has no extension)

        Fill in the Dockerfile as follows (Dockerfile syntax will be covered separately; for now you only need to know that a Dockerfile is used to build a container image):

# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000

        In the directory containing the Dockerfile (the app directory), build the container image with the docker build command:

docker build -t getting-started .
  • -t tags the image; here the tag name is getting-started
  • . tells docker build to look for the Dockerfile in the current directory

(3) Start and run the container        

        After the image is built successfully, use the docker images command to list the images on the system. Then use the docker run command to start a container from the image:

docker run -dp 3000:3000 getting-started

        -d runs the container in the background (detached); -p creates a port mapping between the container and the host machine. The port mapping is necessary, otherwise the program inside the container cannot be reached. // Some people describe a container as a lightweight virtual machine, which makes port mapping easier to picture

        Visit http://localhost:3000/; if the todo interface appears, the container has started successfully.
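Besides the browser, you can verify the container from the command line (a quick sketch; assumes curl is available on the host):

```shell
# List running containers; the getting-started image should appear with port 3000 mapped
docker ps

# Request the app's front page headers; an HTTP 200 response means the server is up
curl -I http://localhost:3000/
```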

3. How to modify the application in docker?

        If we modify the application, we need to rebuild the image (repeat the steps in section 2) for the modification to take effect. Note that to avoid port conflicts, the running container should be stopped first. Here are some commands for managing containers:

docker ps                        # list running containers and get the container id
docker stop <the-container-id>   # stop a container by id
docker rm <the-container-id>     # remove a container after it has stopped
docker rm -f <the-container-id>  # -f (force) stops and removes a running container in one step
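Putting the commands above together, a typical edit-rebuild-restart cycle looks like this sketch:

```shell
# Stop and remove the old container in one step
docker rm -f <the-container-id>

# Rebuild the image from the modified source (run in the app directory)
docker build -t getting-started .

# Start a fresh container from the new image
docker run -dp 3000:3000 getting-started
```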

4. How to share the created image?

(1) Create a remote repository

        If you do not have a Docker ID, register one on Docker Hub and then use Docker Hub to create a remote repository. The steps are described in detail in the official documentation (linked at the end of the article) and are roughly similar to creating a remote repository with git and then pushing code to it.

(2) Push the image to the remote repository

        After the image is pushed to the remote repository, others can download it. The commonly used commands are:

docker tag getting-started YOUR-USER-NAME/getting-started   # tag the local image with your Docker Hub namespace
docker push YOUR-USER-NAME/getting-started:tagname          # push a specific tag of the image
docker push YOUR-USER-NAME/getting-started                  # push the default (latest) tag
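On another machine, anyone can then run the shared image directly; docker run pulls it from Docker Hub automatically if it is not present locally (replace YOUR-USER-NAME with the actual Docker ID):

```shell
# Pull (if needed) and run the published image
docker run -dp 3000:3000 YOUR-USER-NAME/getting-started
```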

5. How to use named volumes to store the data in the container? // data mount

        Every time a new container is started, the data written by the previous container is gone. This is because each container has its own isolated filesystem in which it creates/updates/deletes files; none of those changes are visible in another container, even one created from the same image. // containers are isolated from each other

        The idea to solve the above problems: use volumes to store data.

        A specific filesystem path in the container can be mounted to the host through a volume. If a directory in the container is mounted, changes in that directory are also visible on the host; and if we mount the same directory when a new container starts, we see the same files. // docker provides two mounting methods: named volumes and bind (directory) mounts

(1) Use the docker volume create command to create a volume; todo-db is the name of the volume

docker volume create todo-db

(2) Mount the volume by name; all changes under the /etc/todos path will be persisted in the volume, and other containers using the same volume will see the same data

// By default, the todo application stores its data in a SQLite database at /etc/todos/todo.db

docker run -dp 3000:3000 --mount type=volume,src=todo-db,target=/etc/todos getting-started
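To convince yourself the data really persists, add a few items in the app, remove the container, and start a new one with the same mount; the items should still be there:

```shell
# Remove the current container (the data lives in the volume, not in the container)
docker rm -f <container-id>

# Start a new container with the same volume mount; the previous data reappears
docker run -dp 3000:3000 --mount type=volume,src=todo-db,target=/etc/todos getting-started
```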

        So when we create a volume, where on the physical machine are the container's files actually stored? Use the docker volume inspect command to view the volume's details:

$ docker volume inspect todo-db
[
    {
        "CreatedAt": "2023-02-07T01:34:40Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
        "Name": "todo-db",
        "Options": {},
        "Scope": "local"
    }
]

        The Mountpoint is the exact (auto-generated) path where the data is stored on disk. On most machines, however, accessing this directory from the host requires root access. // On Windows the reported path is the same, but it lives inside the Docker VM and cannot be opened directly from the host, which makes it more troublesome
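When the Mountpoint is not directly reachable (as on Docker Desktop), a common workaround is to mount the volume into a small throwaway container and browse it there (a sketch, assuming the alpine image):

```shell
# Mount todo-db read-only into a temporary alpine container and list its contents
docker run --rm -v todo-db:/data:ro alpine ls -l /data
```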

6. Another mounting method: directory mounting

        Bind mount (directory mount): shares a directory from the host filesystem into the container. The differences between bind mounts and named volume mounts are:

  • Host storage location: a named volume is stored in a location Docker chooses; a bind mount uses a host path you choose
  • Mount example: type=volume,src=my-volume,target=/usr/local/data vs. type=bind,src=/path/to/data,target=/usr/local/data
  • Populates a new volume with the container's existing contents: named volume yes; bind mount no
  • Supports volume drivers: named volume yes; bind mount no

        Example: run the following command (Windows) in the getting-started/app directory. Note that it must be run in PowerShell, not the cmd window, because the backticks are PowerShell line continuations. If you still get an error, use an editor to join the command into a single line:

docker run -dp 3000:3000 `
    -w /app --mount type=bind,src="$(pwd)",target=/app `
    node:18-alpine `
    sh -c "yarn install && yarn run dev"
  • -dp 3000:3000 port mapping
  • -w /app sets the working directory, i.e. the directory in which the command runs
  • --mount type=bind,src="$(pwd)",target=/app binds the current host directory to the /app directory in the container
  • node:18-alpine the base image the program runs on (the same one used in the Dockerfile above)
  • sh -c "yarn install && yarn run dev" runs a shell that installs the dependencies with yarn install and then starts a dev server with yarn run dev (the dev script is defined in the getting-started\app\package.json file and starts nodemon)

        nodemon is a tool that watches the source files and automatically restarts the application inside the container when they change. Of course, we could also rebuild and restart the container using the steps in section 2.

        After the container starts, use the following command to view the running log of docker:

docker logs -f <container-id>
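A few standard variants of docker logs are worth knowing:

```shell
docker logs -f <container-id>            # follow the log output live (Ctrl+C to stop)
docker logs --tail 100 <container-id>    # show only the last 100 lines
docker logs --since 10m <container-id>   # show entries from the last 10 minutes
```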

7. Enabling communication between containers

        By default, containers are independent and isolated from each other, so how do we make two isolated containers communicate? // For example, the container running the application needs to talk to the container running the database

        The answer: put them on the same network; containers on the same network can communicate with each other. There are generally two ways to attach a container to a network: (1) assign the network when starting the container, or (2) connect an already-running container to the network. Both methods are used below. // For each section's example, stop the previously started containers first to avoid port conflicts

(1) Create a communication network

docker network create todo-app

(2) Start a mysql container and connect to the todo-app network (win PowerShell version)

docker run -d `
     --network todo-app --network-alias mysql `
     -v todo-mysql-data:/var/lib/mysql `
     -e MYSQL_ROOT_PASSWORD=secret `
     -e MYSQL_DATABASE=todos `
     mysql:8.0
  • --network-alias gives the running container an alias on the network; the alias can be used in place of a concrete IP, making the container easier to find on the network
  • -v todo-mysql-data:/var/lib/mysql automatically creates a named volume todo-mysql-data and mounts it at /var/lib/mysql, the path where MySQL stores its data

        To check whether the database container has started, try connecting to MySQL inside it: // The password is secret, as specified in the command above

docker ps   # list the running containers
docker exec -it <mysql-container-id> mysql -u root -p

        If you can log in by entering the password, the database has started. Try the following commands to verify that MySQL is working normally:

mysql> SHOW DATABASES;
mysql> exit;

        So once MySQL is running, how does another container find it? The answer: use the nicolaka/netshoot networking toolbox. First start it with docker on the same network:

docker run -it --network todo-app nicolaka/netshoot

        Inside the netshoot container you can use the dig DNS tool. // mysql below is the network alias specified when the container was started

dig mysql

        dig then shows the network information of the mysql container:

        We can see that the mysql container's IP is 172.19.0.2 (its address on the todo-app network). In general we never need this address explicitly; the mysql alias resolves to it automatically.

(3) Start the application container and connect to the mysql container

        The todo application supports setting several environment variables to specify its MySQL connection settings:

  • MYSQL_HOST - MySQL server host name
  • MYSQL_USER - database connection user
  • MYSQL_PASSWORD - database connection password
  • MYSQL_DB - the name of the database to connect to

        Run the following command in the getting-started\app directory (Windows PowerShell):

docker run -dp 3000:3000 `
   -w /app -v "$(pwd):/app" `
   --network todo-app `
   -e MYSQL_HOST=mysql `
   -e MYSQL_USER=root `
   -e MYSQL_PASSWORD=secret `
   -e MYSQL_DB=todos `
   node:18-alpine `
   sh -c "yarn install && yarn run dev"

        After running successfully, you can view the docker startup log

        At this point, open http://localhost:3000/, add some items in the app, and then enter the database container to check that the data was written:

docker exec -it <mysql-container-id> mysql -p todos   # open a mysql shell in the database container
select * from todo_items;                             # inspect the data in the table

        So far, we have realized the mutual communication between the two containers.

8. Use Docker Compose to simplify working with multiple containers

        Docker Compose helps define and share multi-container applications. With Compose, we create a YAML file that defines the services, and with a single command we can start or stop all of them.

(1) Install Docker Compose

        If Docker Desktop is installed (as on Windows), Docker Compose is already included by default. In a Linux environment it needs to be installed separately; see the official installation documentation. After installation, check the tool's version:

docker compose version

(2) Write Compose file

        Create a docker-compose.yml file in the getting-started\app directory.

        Then fill in the docker-compose.yml file with the following content: // For the details of writing Compose files, refer to the documentation at the end of the article; only the general steps are introduced here

services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
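Before starting anything, you can ask Compose to parse and validate the file; docker compose config prints the fully resolved configuration, or an error if the YAML is invalid:

```shell
# Run in the directory containing docker-compose.yml
docker compose config
```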

(3) Start the containers in one batch

        Execute the following command in the getting-started\app directory; -d means start in the background

docker compose up -d

        You will then see output showing that the volume and the network were created. By default, Docker Compose automatically creates a network for the application stack, which is why we did not need to define one in the Compose file.

        At this point, all containers have started successfully. If you installed Docker Desktop, you can also see the running stack in its dashboard.
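With the stack running, docker compose logs shows the interleaved output of all services, which is handy for watching the app wait for MySQL to become ready:

```shell
docker compose logs -f        # follow logs from all services
docker compose logs -f app    # follow only the app service
```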

(4) Close and remove containers in batches

        Use the following command to stop and remove the containers in one batch:

docker compose down

        Note that the above command does not delete the named volumes that were created; removing them requires a separate step.
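To remove the named volumes as well, add the --volumes flag when tearing the stack down:

```shell
# Stops and removes the containers, the network, and the named volumes declared in the Compose file
docker compose down --volumes
```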

Reference documentation: Overview | Docker Documentation

Origin blog.csdn.net/swadian2008/article/details/125242638