Getting Started with Microservices---Docker

1. First introduction to Docker

Docker practical articles

1.1. What is Docker?

Although microservices bring many advantages, splitting a system into services makes deployment considerably harder.

  • A distributed system has many interdependent components, and conflicts often occur when different components are deployed on the same machine.
  • When deploying hundreds or thousands of services repeatedly, the environments are not guaranteed to be consistent, and all kinds of problems arise.

1.1.1. Environmental issues of application deployment

Large-scale projects have many components and run in complex environments, so several problems come up during deployment:

  • Dependencies are complex and compatibility issues are prone to occur

  • Development, testing, and production environments are different

For example, in a project, deployment needs to depend on node.js, Redis, RabbitMQ, MySQL, etc. The function libraries and dependencies required for deployment of these services are different and may even conflict. This brings great difficulties to deployment.

1.1.2. Docker solves dependency compatibility issues

Docker has indeed solved these problems cleverly. How does Docker achieve it?

In order to solve the dependency compatibility problem, Docker adopts two methods:

  • Package the application's Libs (function library), Deps (dependencies), and configuration together with the application

  • Put each application into an isolated container to avoid interference with each other

An application packaged this way contains not only the application itself but also the Libs and Deps it requires. There is no need to install these on the operating system, so compatibility problems between different applications naturally disappear.

This solves compatibility between applications, but development, testing, and production machines may still differ, including in operating system version. How are those differences handled?

1.1.3. Docker solves differences in operating system environments

To solve the problem of differences in different operating system environments, you must first understand the operating system structure. Taking an Ubuntu operating system as an example, the structure is as follows:

Structure includes:

  • Computer hardware: such as CPU, memory, disk, etc.
  • System kernel: all Linux distributions, such as CentOS, Ubuntu, and Fedora, share the Linux kernel. The kernel interacts with the computer hardware and exposes kernel instructions to the outside for operating that hardware.
  • System applications: applications and function libraries provided by the operating system itself. These function libraries are encapsulations of kernel instructions and are more convenient to use.

An application interacts with the computer as follows:

1) The application calls operating system applications (function libraries) to implement various functions

2) The system function library is an encapsulation of the kernel instruction set and will call kernel instructions.

3) Kernel instructions operate computer hardware

Ubuntu and CentOS are both based on the Linux kernel, but they ship different system applications and therefore provide different function libraries:

If an Ubuntu build of MySQL is installed on a CentOS system, MySQL will fail to find, or will mismatch, the Ubuntu function libraries it calls, and an error will be reported.

How does Docker solve the differences between system environments?

  • Docker packages the user program together with the system (such as Ubuntu) function library that needs to be called.
  • When Docker runs on different operating systems, it is directly based on the packaged function library and runs with the help of the Linux kernel of the operating system.

As shown in the picture:

1.1.4. Summary

How does Docker solve the compatibility problems caused by the complex, sometimes conflicting dependencies of the many components in a large project?

  • Docker allows applications, dependencies, function libraries, and configurations to be packaged together during development to form a portable image.
  • Docker applications run in containers and are isolated from each other using a sandbox mechanism.

How does Docker solve the problem of differences between development, testing, and production environments?

  • The Docker image contains a complete operating environment, including system function libraries, and only relies on the system's Linux kernel, so it can run on any Linux operating system.

Docker is a technology for quickly delivering and running applications. It has the following advantages:

  • The program, its dependencies, and the operating environment can be packaged into an image, which can be migrated to any Linux operating system
  • The sandbox mechanism is used to form an isolated container during runtime, so that each application does not interfere with each other.
  • Startup and removal can be completed with one line of commands, which is convenient and fast
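As a taste of that last point, starting and removing a containerized application really is a one-liner each (nginx and the container name `web` are used here purely as an illustration):

```shell
# Start an nginx web server in the background with a single command
docker run -d --name web nginx

# Tear it down just as quickly
docker stop web && docker rm web
```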

1.2. The difference between Docker and virtual machines

Docker lets an application run very conveniently on any operating system. The virtual machines we have used before can also run one operating system inside another, so that any application of the guest system can run.

What's the difference between the two?

A virtual machine simulates a hardware device in an operating system and then runs another operating system, such as an Ubuntu system in a Windows system, so that any Ubuntu application can be run.

Docker only encapsulates function libraries and does not simulate a complete operating system, as shown in the figure:

For comparison:


Summary:

Differences between Docker and virtual machines:

  • Docker is a system process; the virtual machine is the operating system in the operating system

  • Docker is small in size, fast in startup, and has good performance; virtual machines are large in size, slow in startup, and have average performance.

1.3. Docker architecture

1.3.1. Images and containers

There are several important concepts in Docker:

Image: Docker packages the application together with its required dependencies, function libraries, environment, and configuration files; the result is called an image.

Container: the process formed by running the application from an image is a container. Docker isolates the container process so that it is invisible to the outside world.

All applications are ultimately code: files sitting on the hard disk. Only when run are they loaded into memory and turned into a process.

An image is a file package formed by packaging an application file on the hard disk, its operating environment, and some system function library files. This package is read-only.

Containers load the programs and functions written in these files into memory and form processes, but they must be isolated. Therefore, an image can be started multiple times to form multiple container processes.


For example, take QQ: if we package QQ's executable files on disk together with the environment it depends on, we get a QQ image. From that one image we can then start QQ multiple times, running two or even three isolated instances at once.

1.3.2. DockerHub

There are many open-source applications, and packaging them is often repetitive work. To avoid this duplication, people share the application images they have packaged, such as Redis and MySQL images, on the network, much like code sharing on GitHub.

On the one hand, we can share our own images to DockerHub; on the other hand, we can pull images from DockerHub:

1.3.3. Docker architecture

If we want to use Docker to operate images and containers, we must install Docker.

Docker is a program with a client-server (CS) architecture, consisting of two parts:

  • Server: Docker daemon, responsible for processing Docker instructions, managing images, containers, etc.

  • Client: sends instructions to the Docker server through commands or the REST API. Instructions can be sent locally or from a remote machine.

As shown in the picture:

1.3.4. Summary

Image:

  • Package the application and its dependencies, environment, and configuration together

Container:

  • When an image is run, it becomes a container. One image can run multiple containers.

Docker structure:

  • Server: receives commands or remote requests, operates images or containers

  • Client: Send commands or requests to the Docker server

DockerHub:

  • An image hosting service, similar to Alibaba Cloud's image service; such services are collectively called a Docker Registry

1.4. Install Docker

Reference material: "CentOS7 Installing Docker"

Enterprise deployment generally uses the Linux operating system, among which CentOS distribution accounts for the largest proportion, so we install Docker under CentOS. Refer to the documentation in the pre-course materials:

2. Basic operations of Docker

2.1. Image operations

2.1.1. Image names

First, let's look at how image names are composed:

  • Image names generally consist of two parts: [repository]:[tag].
  • When no tag is specified, the default is latest, which refers to the latest version of the image.

As shown in the figure:
Here mysql is the repository and 5.7 is the tag; together they form the image name, which refers to version 5.7 of the MySQL image.

2.1.2. Image commands

Common image operation commands are shown in the figure:

2.1.3. Case 1: Pull and view an image

Requirement: Pull an nginx image from DockerHub and view it

1) First search for the nginx image in an image registry such as DockerHub:

2) According to the viewed image name, pull the image you need through the command: docker pull nginx

3) View the pulled image through the command: docker images


2.1.4. Case 2: Save and load an image

Requirement: Use docker save to export the nginx image to disk, and then load it back through load

1) Use docker [command] --help to view the syntax of docker save and docker load

For example, to view the save command usage, you can enter the command:

docker save --help

result:

Command format:

docker save -o [target file name] [image name]

2) Use docker save to export the image to disk

Run command:

docker save -o nginx.tar nginx:latest

The result is as shown below:

3) Use docker load to load the image

First delete the local nginx image:

docker rmi nginx:latest

Then run the command to load the local file:

docker load -i nginx.tar

result:

2.1.5. Practice

Requirement: Go to DockerHub to search and pull a Redis image

Target:

1) Go to DockerHub to search for the Redis image

2) Check the name and version of the Redis image

3) Use the docker pull command to pull the image

4) Use the docker save command to package redis:latest into a redis.tar package

5) Use docker rmi to delete local redis:latest

6) Use docker load to reload the redis.tar file
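Assuming Docker is installed, the whole exercise boils down to the following command sequence (step numbers match the list above):

```shell
docker pull redis:latest                # 3) pull the image from DockerHub
docker save -o redis.tar redis:latest   # 4) package it into redis.tar
docker rmi redis:latest                 # 5) delete the local image
docker load -i redis.tar                # 6) reload it from the tar file
docker images                           # verify that redis:latest is back
```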

2.2. Container operations

2.2.1. Container related commands

The commands for container operation are as follows:

A container has three states:

  • Running: the process is running normally
  • Paused: the process is suspended; the CPU no longer runs it, but its memory is not released
  • Stopped: the process has terminated, and the memory, CPU, and other resources it occupied are reclaimed

Where:

  • docker run: Create and run a container in a running state

  • docker pause: Pause a running container

  • docker unpause: Resume a container from a paused state

  • docker stop: Stop a running container

  • docker start: Make a stopped container run again

  • docker rm: delete a container

2.2.2. Case - Create and run a container

Command to create and run nginx container:

docker run --name containerName -p 80:80 -d nginx

Command interpretation:

  • docker run: Create and run a container
  • --name: give the container a name, such as mn
  • -p: Map the host port to the container port. The left side of the colon is the host port and the right side is the container port.
  • -d: Run the container in the background
  • nginx: image name, such as nginx

The -p parameter here maps a container port to a host port.

By default, the container is an isolated environment. If we directly access port 80 of the host, we will definitely not be able to access nginx in the container.

Now we associate the container's port 80 with the host's port 80. When we access port 80 on the host, the request is mapped to port 80 in the container, and we can reach nginx:
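A quick way to verify this mapping (the container name mn matches the case below) is to start the container and then request the host port:

```shell
# Map host port 80 to container port 80 and run nginx in the background
docker run --name mn -p 80:80 -d nginx

# Requests to the host's port 80 are now forwarded to nginx in the container
curl http://localhost:80
```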

2.2.3. Case - Enter the container and modify the file

Requirements : Enter the Nginx container, modify the content of the HTML file, and add "Welcome to Chuanzhi Education"

Tip : Use the docker exec command to enter the container.

Steps :

1) Enter the container. The command to enter the nginx container we just created is:

docker exec -it mn bash

Command interpretation:

  • docker exec: Enter inside the container and execute a command

  • -it: Create a standard input and output terminal for the currently entered container, allowing us to interact with the container

  • mn: the name of the container to enter

  • bash: the command to execute after entering the container; bash provides an interactive Linux terminal

2) Enter the directory containing nginx's HTML files: /usr/share/nginx/html

An independent Linux file system will be simulated inside the container, which looks like a Linux server:

The environment, configuration, and running files of nginx are all in this file system, including the html file we want to modify.

The nginx page on the DockerHub website documents the location of the nginx html directory: /usr/share/nginx/html

We execute the command and enter the directory:

cd /usr/share/nginx/html

View the files in the directory:

3) Modify the content of index.html

There is no vi command in the container, so the file cannot be edited directly. Instead, we modify it with the following command:

sed -i -e 's#Welcome to nginx#Docker欢迎您#g' -e 's#<head>#<head><meta charset="utf-8">#g' index.html

Access your own virtual machine address in the browser, for example mine is: http://192.168.150.101, and you can see the result:

2.2.4. Summary

What are the common parameters of the docker run command?

  • --name: specify the container name
  • -p: Specify port mapping
  • -d: Let the container run in the background

Command to view container logs:

  • docker logs
  • Add the -f parameter to continuously view the logs

View container status:

  • docker ps
  • docker ps -a View all containers, including stopped ones

2.3. Data volumes (container data management)

In the earlier nginx case, modifying the nginx html page required entering the container, and because there is no editor inside, modifying files was troublesome.

This is the consequence of the coupling between the container and the data (files in the container).

To solve this problem, the data must be decoupled from the container, which requires the use of data volumes.

2.3.1. What is a data volume?

A data volume is a virtual directory that points to a directory in the host file system.

Once a data volume is mounted, all operations on it are applied to the host directory that the volume corresponds to, and vice versa.

In this way, when we operate the /var/lib/docker/volumes/html directory on the host, it is equivalent to operating the /usr/share/nginx/html directory in the container.

2.3.2. Data volume commands

The basic syntax for data volume operations is as follows:

docker volume [COMMAND]

The docker volume command manages data volumes; the operation performed is determined by the subcommand that follows:

  • create creates a volume
  • inspect displays information about one or more volumes
  • ls lists all volumes
  • prune deletes unused volumes
  • rm deletes one or more specified volumes

2.3.3. Create and view data volumes

Requirement : Create a data volume and check the directory location of the data volume on the host

① Create data volume

docker volume create html

② View all data volumes

docker volume ls

result:


③ View the data volume's details

docker volume inspect html

result:

As you can see, the host directory associated with the html data volume we created is /var/lib/docker/volumes/html/_data.

Summary :

The role of data volumes:

  • Separate and decouple the container from the data to facilitate the operation of data in the container and ensure data security

Data volume operations:

  • docker volume create: Create a data volume
  • docker volume ls: View all data volumes
  • docker volume inspect: View data volume details, including associated host directory location
  • docker volume rm: delete the specified data volume
  • docker volume prune: delete all unused data volumes

2.3.4. Mount a data volume

When we create a container, we can use the -v parameter to mount a data volume to a directory in a container. The command format is as follows:

docker run \
  --name mn \
  -v html:/root/html \
  -p 8080:80 \
  nginx

Here -v is the option that mounts the data volume:

  • -v html:/root/html: mount the html data volume to the /root/html directory in the container

2.3.5. Case - Mount data volume to nginx

Requirement : Create an nginx container and modify the index.html content in the html directory in the container

Analysis : In the last case, we entered the nginx container and already knew the location of nginx's html directory /usr/share/nginx/html. We need to mount this directory to the html data volume to facilitate the operation of its contents.

Tip : Use the -v parameter to mount the data volume when running the container

step:

① Create a container and mount the data volume to the HTML directory in the container

docker run --name mn -v html:/usr/share/nginx/html -p 80:80 -d nginx

② Enter the location of the html data volume and modify the HTML content

# Find the location of the html data volume
docker volume inspect html
# Enter that directory
cd /var/lib/docker/volumes/html/_data
# Modify the file
vi index.html

2.3.6. Case - Mount a local directory for MySQL

A container can mount not only a data volume but also a host directory directly. The relationships are as follows:

  • Data volume mode: host directory --> data volume --> directory in the container
  • Direct mount mode: host directory --> directory in the container

As shown in the picture:

Syntax :

The syntax for directory mounting and data volume mounting is similar:

  • -v [host directory]:[directory in container]
  • -v [host file]:[file in container]

Requirements : Create and run a MySQL container, and mount the host directory directly to the container

The implementation idea is as follows:

1) Upload the mysql.tar file in the pre-course materials to the virtual machine and load it as an image through the load command

2) Create directory /tmp/mysql/data

3) Create the directory /tmp/mysql/conf and upload the hmy.cnf file provided in the pre-course materials to /tmp/mysql/conf

4) Go to DockerHub to check the information, create and run the MySQL container, the requirements are:

① Mount /tmp/mysql/data to the data storage directory in the mysql container

② Mount /tmp/mysql/conf/hmy.cnf to the configuration file of the mysql container

③ Set MySQL password
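Putting requirements ① to ③ together, the run command might look like the sketch below. The container paths /var/lib/mysql (data directory) and /etc/mysql/conf.d/ (extra configuration) come from the official mysql image documentation; the password, port mapping, and container name are illustrative.

```shell
# ① mount the data directory, ② mount the config file, ③ set the root password
docker run \
  --name mysql \
  -e MYSQL_ROOT_PASSWORD=123 \
  -p 3306:3306 \
  -v /tmp/mysql/conf/hmy.cnf:/etc/mysql/conf.d/hmy.cnf \
  -v /tmp/mysql/data:/var/lib/mysql \
  -d \
  mysql:5.7.25
```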

2.3.7. Summary

In the docker run command, use the -v parameter to mount files or directories into the container:

  • -v [volume name]:[directory in the container]
  • -v [host file]:[file in the container]
  • -v [host directory]:[directory in the container]

Comparing data volume mounting with direct directory mounting:

  • Data volume mounting is loosely coupled, and Docker manages the directory for us; however, the directory is deeply nested and hard to find.
  • Direct directory mounting is tightly coupled, and we have to manage the directory ourselves, but the directory is easy to find and inspect.

3. Dockerfile: custom images

Common images can be found on DockerHub, but for projects we write ourselves, we must build the image ourselves.

To customize an image, you must first understand the structure of the image.

3.1. Image structure

An image is a package of an application and its required system function libraries, environment, configuration, and dependencies.

Let’s take MySQL as an example to look at the structure of the image:


Simply put, an image is a file formed by adding application files, configuration files, dependency files, etc. on the basis of the system function library and operating environment, and then writing a startup script and packaging it together.

When we build an image, we actually implement the above packaging process.

3.2. Dockerfile syntax

When building a custom image, you do not need to copy and package every file by hand.

We only need to tell Docker how our image is composed: which BaseImages it needs, which files to copy, which dependencies to install, and what the startup script is. Docker then builds the image for us.

The file describing the above information is the Dockerfile file.

Dockerfile is a text file that contains instructions (Instructions) that describe what operations are to be performed to build the image. Each instruction will form a layer.


For updated detailed syntax instructions, please refer to the official website documentation: https://docs.docker.com/engine/reference/builder

3.3.Build Java project

3.3.1. Building Java projects based on Ubuntu

Requirements: Build a new image based on the Ubuntu image and run a java project

  • Step 1: Create a new empty folder docker-demo

  • Step 2: Copy the docker-demo.jar file in the pre-course materials to the docker-demo directory

  • Step 3: Copy the jdk8.tar.gz file in the pre-course materials to the docker-demo directory

  • Step 4: Copy the Dockerfile provided in the pre-course materials to the docker-demo directory

    The contents are as follows:

    # Specify the base image
    FROM ubuntu:16.04
    # Configure an environment variable: the JDK installation directory
    ENV JAVA_DIR=/usr/local
    
    # Copy the JDK and the Java project's jar
    COPY ./jdk8.tar.gz $JAVA_DIR/
    COPY ./docker-demo.jar /tmp/app.jar
    
    # Install the JDK
    RUN cd $JAVA_DIR \
     && tar -xf ./jdk8.tar.gz \
     && mv ./jdk1.8.0_144 ./java8
    
    # Configure environment variables
    ENV JAVA_HOME=$JAVA_DIR/java8
    ENV PATH=$PATH:$JAVA_HOME/bin
    
    # Expose the port
    EXPOSE 8090
    # Entry point: the startup command of the Java project
    ENTRYPOINT java -jar /tmp/app.jar
    
  • Step 5: Enter docker-demo

    Upload the prepared docker-demo to any directory of the virtual machine, and then enter the docker-demo directory

  • Step 6: Run the command:

    docker build -t javaweb:1.0 .
    

Finally, visit http://192.168.150.101:8090/hello/count, replacing the IP with your own virtual machine's IP.
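Before visiting the address, the freshly built image still has to be started. A run command consistent with the Dockerfile's EXPOSE 8090 (the container name javaweb is chosen arbitrarily) might be:

```shell
# Run the newly built image, mapping host port 8090 to the exposed port 8090
docker run --name javaweb -p 8090:8090 -d javaweb:1.0
```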

3.3.2. Build Java projects based on java8

Although we can add any installation packages we need and build the image based on the Ubuntu base image, it is quite troublesome. So in most cases, we can make modifications on some basic images with some software installed.

For example, to build a Java project image, you can build it based on the prepared JDK base image.

Requirements: build the Java project into an image based on the java:8-alpine image

The implementation idea is as follows:

  • ① Create a new empty directory, then create a new file in the directory and name it Dockerfile

  • ② Copy the docker-demo.jar provided in the pre-course materials to this directory

  • ③ Write Dockerfile:

    • a) Based on java:8-alpine as the base image

    • b) Copy app.jar to the image

    • c) Exposed port

    • d) Write entry ENTRYPOINT

      The content is as follows:

      FROM java:8-alpine
      COPY ./app.jar /tmp/app.jar
      EXPOSE 8090
      ENTRYPOINT java -jar /tmp/app.jar
      
  • ④ Use the docker build command to build the image

  • ⑤ Use docker run to create a container and run it

3.4. Summary

summary:

  1. The essence of a Dockerfile is a file that describes the image building process through instructions.

  2. The first line of Dockerfile must be FROM to build from a base image

  3. The base image can be a basic operating system, such as Ubuntu, or an image made by others, for example java:8-alpine

4. DockerCompose

Docker Compose can help us quickly deploy distributed applications based on Compose files without having to manually create and run containers one by one!

4.1. First introduction to DockerCompose

A compose file is a text file that defines through instructions how each container in the cluster should run. The format is as follows:

version: "3.8"
services:
  mysql:
    image: mysql:5.7.25
    environment:
      MYSQL_ROOT_PASSWORD: 123
    volumes:
      - "/tmp/mysql/data:/var/lib/mysql"
      - "/tmp/mysql/conf/hmy.cnf:/etc/mysql/conf.d/hmy.cnf"
  web:
    build: .
    ports:
      - "8090:8090"

The Compose file above describes a project, which contains two containers:

  • mysql: a container based on the mysql:5.7.25 image, with two directories mounted
  • web: a container based on an image built on the fly with docker build, with port 8090 mapped

Detailed syntax reference for DockerCompose: official website

In fact, a DockerCompose file can be regarded as multiple docker run commands written into a single file, only with a slightly different syntax.
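For instance, the mysql service above corresponds roughly to this single docker run command (Compose additionally creates a shared network in which services reach each other by name):

```shell
# Equivalent of the mysql service in the Compose file above
docker run \
  --name mysql \
  -e MYSQL_ROOT_PASSWORD=123 \
  -v /tmp/mysql/data:/var/lib/mysql \
  -v /tmp/mysql/conf/hmy.cnf:/etc/mysql/conf.d/hmy.cnf \
  -d \
  mysql:5.7.25
```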

4.2. Install DockerCompose

Reference pre-course materials

4.3. Deploy microservice cluster

Requirement : Deploy the previously learned cloud-demo microservice cluster using DockerCompose

Implementation ideas :

① Check the cloud-demo folder provided in the pre-class materials. The docker-compose file has been written in it.

② Modify your cloud-demo project and name the database and nacos address as the service name in docker-compose

③ Use the maven packaging tool to package each microservice in the project as app.jar

④ Copy the packaged app.jar to each corresponding subdirectory in cloud-demo

⑤ Upload cloud-demo to the virtual machine and deploy it using docker-compose up -d

4.3.1.compose file

Check the cloud-demo folder provided in the pre-class materials. The docker-compose file has been written in it, and an independent directory has been prepared for each microservice:

The content is as follows:

version: "3.2"

services:
  nacos:
    image: nacos/nacos-server
    environment:
      MODE: standalone
    ports:
      - "8848:8848"
  mysql:
    image: mysql:5.7.25
    environment:
      MYSQL_ROOT_PASSWORD: 123
    volumes:
      - "$PWD/mysql/data:/var/lib/mysql"
      - "$PWD/mysql/conf:/etc/mysql/conf.d/"
  userservice:
    build: ./user-service
  orderservice:
    build: ./order-service
  gateway:
    build: ./gateway
    ports:
      - "10010:10010"

As you can see, it contains 5 services:

  • nacos: serves as the registry and configuration center
    • image: nacos/nacos-server: built from the nacos/nacos-server image
    • environment: environment variables
      • MODE: standalone: start in single-node mode
    • ports: port mapping; port 8848 is exposed here
  • mysql: the database
    • image: mysql:5.7.25: the image version is mysql:5.7.25
    • environment: environment variables
      • MYSQL_ROOT_PASSWORD: 123: set the database root account's password to 123
    • volumes: data volume mounts; the mysql data and conf directories are mounted here, containing data prepared in advance
  • userservice, orderservice, gateway: all built on the fly from their Dockerfiles

Looking at the mysql directory, you can see that the cloud_order and cloud_user tables have been prepared:

Looking at the microservice directory, you can see that they all contain Dockerfile files:

The content is as follows:

FROM java:8-alpine
COPY ./app.jar /tmp/app.jar
ENTRYPOINT java -jar /tmp/app.jar

4.3.2. Modify microservice configuration

Because the microservices will be deployed as Docker containers, and containers reach one another by container (service) name rather than by IP address, we modify the mysql and nacos addresses in the order-service, user-service, and gateway services to use the container names from docker-compose.

As follows:

spring:
  datasource:
    url: jdbc:mysql://mysql:3306/cloud_order?useSSL=false
    username: root
    password: 123
    driver-class-name: com.mysql.jdbc.Driver
  application:
    name: orderservice
  cloud:
    nacos:
      server-addr: nacos:8848 # nacos service address

4.3.3.Packaging

Next we need to package each of our microservices. Because the jar package name in the Dockerfile we saw before is app.jar, each of our microservices needs to use this name.

This can be achieved by modifying the packaging name in pom.xml. Each microservice needs to be modified:

<build>
  <!-- Final name of the packaged service -->
  <finalName>app</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

After packaging:

4.3.4. Copy the jar package to the deployment directory

The compiled app.jar file must be placed in the same directory as its Dockerfile. Note: each microservice's app.jar goes into the directory matching that service's name; don't mix them up.

user-service:

order-service:

gateway:

4.3.5.Deployment

Finally, we need to upload the entire cloud-demo folder to the virtual machine for DockerCompose deployment.

Upload to any directory:

Deploy:

Enter the cloud-demo directory and run the following command:

docker-compose up -d
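Once it is up, the cluster can be checked with the usual Compose commands, run from the same cloud-demo directory:

```shell
docker-compose ps            # list the five containers and their states
docker-compose logs -f nacos # follow the logs of one service, e.g. nacos
```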

5. Docker image registry

5.1. Build a private image registry

Reference material: "CentOS7 Installing Docker"

5.2. Push and pull images

To push an image to a private image service, you must first tag it. The steps are as follows:

① Re-tag the local image, prefixing the name with the private registry's address: 192.168.150.101:8080/

docker tag nginx:latest 192.168.150.101:8080/nginx:1.0 

② Push the image

docker push 192.168.150.101:8080/nginx:1.0 

③ Pull the image

docker pull 192.168.150.101:8080/nginx:1.0 
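One caveat: if the registry at 192.168.150.101:8080 serves plain HTTP, the Docker daemon must be told to trust it first via the insecure-registries setting. A minimal sketch (this overwrites /etc/docker/daemon.json, so merge by hand if the file already contains other settings):

```shell
# Mark the plain-HTTP registry as trusted, then restart the daemon
echo '{ "insecure-registries": ["192.168.150.101:8080"] }' > /etc/docker/daemon.json
systemctl daemon-reload
systemctl restart docker
```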

Origin blog.csdn.net/CSDN_Admin0/article/details/131919335