A summary of basic tutorials for getting started with Docker (all solid content, simple and practical)

Docker containerization has become more and more popular in recent years. If you want to get started with Docker quickly and be able to master and use it in a short time, this basic tutorial compiled by Teacher Pan is for you. It is packed with useful information, simple and practical, with no unnecessary deep technical digressions to distract you; it is purely about applying what you learn. Let's take a look together!

Chapter 1 First introduction to Docker

1.1. What is Docker

Although microservices have various advantages, the splitting of services brings great trouble to deployment.

  • In a distributed system there are many dependent components, and conflicts often occur when different components are deployed on the same host
  • Across hundreds or thousands of services, repeated deployments rarely end up with a consistent environment, and all kinds of problems will be encountered

1.1.1. Environment issues in application deployment
Large-scale projects have many components and a complex operating environment, so some problems will be encountered during deployment:

  • Dependencies are complex and compatibility issues are prone to occur
  • Development, testing, and production environments are different


 

For example, in a project, deployment needs to depend on node.js, Redis, RabbitMQ, MySQL, etc. The function libraries and dependencies required for deployment of these services are different, and there may even be conflicts. This brings great difficulties to deployment.
1.1.2. Docker solves dependency compatibility issues
Docker solves these problems cleverly. How does it do so? To solve the dependency compatibility problem, Docker takes two approaches:

  • Package the application's Libs (function library), Deps (dependencies), and configuration together with the application
  • Put each application into an isolated container to avoid interference with each other


 

In this way, the packaged application not only contains the application itself but also carries the Libs and Deps it requires. There is no need to install these on the operating system, so naturally there are no compatibility issues between different applications.

Although the compatibility problem of different applications has been solved, there will be differences in development, testing and other environments, as well as operating system versions. How to solve these problems?
1.1.3. Docker solves operating system environment differences
To solve the problem of differences in operating system environments, you must first understand the operating system structure. Taking an Ubuntu operating system as an example, the structure is as follows:
 

The structure includes:

  • Computer hardware: such as CPU, memory, disk, etc.
  • System kernel: all Linux distributions, such as CentOS, Ubuntu, and Fedora, share the Linux kernel. The kernel interacts with the computer hardware and exposes kernel instructions to the outside for operating it.
  • System applications: applications and function libraries provided by the operating system itself. These function libraries are encapsulations of kernel instructions and are more convenient to use.

The process by which an application interacts with the computer is as follows:

  • 1) The application calls operating system applications (function libraries) to implement various functions
  • 2) The system function libraries are encapsulations of the kernel instruction set and call kernel instructions
  • 3) Kernel instructions operate the computer hardware


Ubuntu and CentOS are both based on the Linux kernel, but they have different system applications and provide different function libraries:
 

At this time, if an Ubuntu build of the MySQL application is installed on a CentOS system, then when MySQL calls the Ubuntu function libraries it will find them missing or mismatched, and an error will be reported.

How does Docker solve the problem of different system environments?

  • Docker packages the user program together with the system (such as Ubuntu) function library that needs to be called.
  • When Docker runs on different operating systems, it is directly based on the packaged function library and runs with the help of the Linux kernel of the operating system.


As shown in the picture:
 

1.1.4. Summary
How does Docker solve the compatibility issues of complex dependencies in large projects and the dependencies of different components?

  • Docker allows applications, dependencies, function libraries, and configurations to be packaged together during development to form a portable image.
  • Docker applications run in containers and are isolated from each other using a sandbox mechanism.


How does Docker solve the problem of differences between development, testing, and production environments?

  • The Docker image contains a complete operating environment, including system function libraries, and only relies on the system's Linux kernel, so it can run on any Linux operating system.


Docker is a technology for quickly delivering and running applications. It has the following advantages:

  • The program, its dependencies, and the operating environment can be packaged into an image, which can be migrated to any Linux operating system
  • The sandbox mechanism is used to form an isolated container during runtime, so that each application does not interfere with each other.
  • Startup and removal can be completed with one line of commands, which is convenient and fast

1.2. The difference between Docker and virtual machines

Docker makes it very convenient to run an application on any operating system. The virtual machines we encountered before can also run one operating system inside another, and thereby run any of that system's applications.

What's the difference between the two?
A virtual machine simulates a hardware device in an operating system and then runs another operating system, such as an Ubuntu system in a Windows system, so that any Ubuntu application can be run.
Docker only encapsulates function libraries and does not simulate a complete operating system, as shown in the figure:
 

Comparison:

Summary:
Differences between Docker and virtual machines:

  • Docker is a system process; a virtual machine is an operating system running inside another operating system
  • Docker is small, starts fast, and performs well; virtual machines are large, start slowly, and have mediocre performance

1.3. Docker architecture

1.3.1. Images and containers
There are several important concepts in Docker:

  • Image: Docker packages the application and its required dependencies, function libraries, environment, configuration and other files together, which is called an image.
  • Container: the process formed when the application in an image is run is a container, except that Docker isolates the container process so that it is invisible to the outside.

All applications are ultimately made of code: files sitting on the hard disk that are only loaded into memory, forming a process, when they run.

An image is a file package formed by packaging an application file on the hard disk, its operating environment, and some system function library files. This package is read-only.

Containers load the programs and functions written in these files into memory and form processes, but they must be isolated. Therefore, an image can be started multiple times to form multiple container processes.
 

For example, take QQ: if we package QQ's executable files on disk together with their operating environment into a QQ image, we can then start QQ from that image several times, running two or even three instances side by side.

1.3.2.DockerHub

There are many open source applications, and packaging these applications is often a repetitive task. In order to avoid these duplications of work, people will put their own packaged application images, such as Redis and MySQL images, on the Internet and share them, just like GitHub's code sharing.

On the one hand, we can share our own images to DockerHub, and on the other hand, we can also pull images from DockerHub:
 

1.3.3.Docker architecture
If we want to use Docker to operate images and containers, we must install Docker.

Docker is a program with a CS (client-server) architecture, consisting of two parts:

  • Server: Docker daemon, responsible for processing Docker instructions, managing images, containers, etc.
  • Client: Send instructions to the Docker server through commands or RestAPI. Instructions can be sent to the server locally or remotely.

As shown in the picture:
 

1.3.4. Summary
Image:
packages an application together with its dependencies, environment, and configuration.
Container:
a running image is a container; one image can run multiple containers.
Docker structure:
Server: receives commands or remote requests and operates on images or containers
Client: sends commands or requests to the Docker server
DockerHub:
an image hosting service; such services are collectively called a DockerRegistry, similar to Alibaba Cloud's image service

1.4. Install Docker

Tutorial on installing, running and uninstalling Docker

Chapter 2 Basic Operations of Docker

2.1. Image operations

2.1.1. Image names
First, let's look at how an image name is composed:

An image name generally consists of two parts: [repository]:[tag].
When no tag is specified, the default is latest, representing the latest version of the image,
as shown in the figure:
 

The mysql here is the repository and 5.7 is the tag; together they form the image name, representing version 5.7 of the MySQL image.
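The repository:tag split can be illustrated with plain shell string handling. This is a toy parser for simple repo:tag names only (it ignores registry prefixes such as host:port/), not anything Docker itself exposes:

```shell
#!/bin/sh
# Toy illustration of how an image name splits into repository and tag.
# Handles simple "repo:tag" names only; no registry prefix.
parse_image() {
  name="$1"
  case "$name" in
    *:*)
      repo="${name%:*}"    # text before the last colon
      tag="${name##*:}"    # text after the last colon
      ;;
    *)
      repo="$name"
      tag="latest"         # no tag given: Docker assumes "latest"
      ;;
  esac
}

parse_image "mysql:5.7"
echo "$repo $tag"    # mysql 5.7
parse_image "nginx"
echo "$repo $tag"    # nginx latest
```

The second call shows the default-tag rule from the text: with no tag, latest is assumed.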
2.1.2. Image commands
Common image commands are shown in the figure:

Case 1-Pull and view the image
Requirement: Pull an nginx image from DockerHub and view it

1) First go to the image registry to search for the nginx image, for example on DockerHub:
 

2) Based on the image name you found, pull the image you need with the command: docker pull nginx

3) View the pulled image with the command: docker images

2.1.4. Case 2 - Save and import an image
Requirement: use docker save to export the nginx image to disk, then load it back with docker load

1) Use the docker xx --help command to view the syntax of docker save and docker load

For example, to view the usage of the save command, enter:

 
 

    docker save --help

result:
 

Command format:

 
 

    docker save -o [saved target file name] [image name]

2) Use docker save to export the image to disk

Run command:

 
 

    docker save -o nginx.tar nginx:latest

The result is as shown below:
 

3) Use docker load to load the image

First delete the local nginx image:

 
 

    docker rmi nginx:latest

Then run the command to load the local file:

 
 

    docker load -i nginx.tar

result:
 

2.1.5. Exercise
Requirement: Go to DockerHub to search and pull a Redis image

Target:

1) Go to DockerHub to search for the Redis image

2) Check the name and version of the Redis image

3) Use the docker pull command to pull the image

4) Use the docker save command to package redis:latest into a redis.tar package

5) Use docker rmi to delete local redis:latest

6) Use docker load to reload the redis.tar file
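Steps 3) to 6) can be strung together into one script. Below is a dry run: each docker command is printed rather than executed, so it can be rehearsed without a Docker daemon (change the run function as noted in the comment to execute for real):

```shell
#!/bin/sh
# Dry run of the exercise: each docker command is printed instead of
# executed. To run for real, change 'echo "$*"' to just "$@".
n=0
run() { n=$((n+1)); last="$*"; echo "$*"; }

run docker pull redis:latest                 # 3) pull the image
run docker save -o redis.tar redis:latest    # 4) package into redis.tar
run docker rmi redis:latest                  # 5) delete the local image
run docker load -i redis.tar                 # 6) reload from the tar file
```

The order matters: save must come before rmi, or there is nothing left to export.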

2.2. Container operations

2.2.1. Container-related commands
The commands for container operations are as shown in the figure:
 

A container has three states:

  • Running: The process is running normally
  • Pause: The process is suspended, the CPU is no longer running, and the memory is not released.
  • Stop: The process is terminated and the memory, CPU and other resources occupied by the process are recycled.


Among them:

  • docker run: Create and run a container in a running state
  • docker pause: Pause a running container
  • docker unpause: Resume a container from a paused state
  • docker stop: Stop a running container
  • docker start: Make a stopped container run again
  • docker rm: delete a container
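The commands above move a container between the three states. A tiny transition table sketches this; it is a simplified model of the states described here only (real Docker has additional states such as created and exited, and is more permissive about some transitions):

```shell
#!/bin/sh
# Toy model of container state transitions driven by docker commands.
# Simplified to the three states described in the text.
next_state() {              # usage: next_state <current-state> <command>
  case "$1/$2" in
    none/run)       echo running ;;
    running/pause)  echo paused  ;;
    paused/unpause) echo running ;;
    running/stop)   echo stopped ;;
    stopped/start)  echo running ;;
    stopped/rm)     echo removed ;;
    *)              echo invalid ;;
  esac
}

s=$(next_state none run)       # running
s=$(next_state "$s" pause)     # paused
s=$(next_state "$s" unpause)   # running
echo "$s"
```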

2.2.2. Case - Create and run a container
Command to create and run nginx container:

 
 

    docker run --name containerName -p 80:80 -d nginx

Command interpretation:

  • docker run: create and run a container
  • --name: give the container a name, such as mn
  • -p: map a host port to a container port; the left side of the colon is the host port and the right side is the container port
  • -d: run the container in the background
  • nginx: the image name, such as nginx

The -p parameter here maps a container port to a host port.

By default the container is an isolated environment; if we access port 80 on the host directly, we certainly cannot reach nginx in the container.

Now, associate port 80 of the container with port 80 of the host. When we access port 80 of the host, it will be mapped to port 80 of the container, so that we can access nginx:
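The host:container reading of the -p value can be demonstrated with the same colon-splitting trick (a toy helper, not a Docker API; 8080:80 is an arbitrary example mapping):

```shell
#!/bin/sh
# Split a "-p host:container" port mapping into its two halves.
mapping="8080:80"
host_port="${mapping%%:*}"       # left of the colon: the host port
container_port="${mapping##*:}"  # right of the colon: the container port
echo "host $host_port -> container $container_port"
```

So -p 8080:80 would make nginx reachable on the host's port 8080 while nginx still listens on 80 inside the container.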
 

2.2.3. Case - Enter the container and modify a file
Requirement: enter the nginx container, modify the content of the HTML file, and add "Welcome to Chuanzhi Education"

Tip: you need to use the docker exec command to enter the container.

Steps:

1) Enter the container. The command to enter the nginx container we just created is:

 
 

    docker exec -it mn bash

Command interpretation:

  • docker exec: enter the container and execute a command
  • -it: allocate a standard input/output terminal for the container being entered, allowing us to interact with it
  • mn: the name of the container to enter
  • bash: the command executed after entering the container; bash is a Linux interactive shell

2) Enter the directory where nginx's HTML is located: /usr/share/nginx/html

An independent Linux file system will be simulated inside the container, which looks like a Linux server:
 

nginx's environment, configuration, and running files are all in this file system, including the html file we want to modify.

The nginx page on the DockerHub website tells us the location of nginx's html directory: /usr/share/nginx/html

We execute the command and enter the directory:

 
 

    cd /usr/share/nginx/html

View the files in the directory:
 

3) Modify the content of index.html
There is no vi command in the container, so it cannot be modified directly. We use the following command to modify it:

 
 

    sed -i -e 's#Welcome to nginx#chuanzhi education welcomes you#g' -e 's#<head>#<head><meta charset="utf-8">#g' index.html
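Since the container has no editor, it can be reassuring to rehearse the same sed edit on a throwaway local file first. The file below is a made-up stand-in for nginx's index.html:

```shell
#!/bin/sh
# Rehearse the substitution on a local stand-in for index.html.
cat > /tmp/index.html <<'EOF'
<head><title>Welcome to nginx</title></head>
<p>Welcome to nginx</p>
EOF

# Same two substitutions as the case: swap the text and declare utf-8
# (the charset is what keeps the Chinese text from rendering garbled).
sed -i -e 's#Welcome to nginx#chuanzhi education welcomes you#g' \
       -e 's#<head>#<head><meta charset="utf-8">#g' /tmp/index.html

grep -c "chuanzhi education" /tmp/index.html   # prints 2
```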

Access your own virtual machine address in the browser, for example, mine is: http://192.168.150.101, and you can see the result:
 

2.2.4. Summary
What are the common parameters of the docker run command?

  • --name: specify the container name
  • -p: specify port mapping
  • -d: run the container in the background


Command to view container logs:

  • docker logs
  • Add the -f parameter to continuously view the logs


View container status:

  • docker ps
  • docker ps -a View all containers, including stopped ones
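docker ps prints columnar text, so its output can be post-processed with awk. The sample capture below is made up for illustration (it is a simplified version of the real column layout); in practice `docker ps --format "{{.Names}}"` gets the names directly:

```shell
#!/bin/sh
# awk over a sample "docker ps" capture; in practice you would pipe
# `docker ps` straight into awk. The sample rows are made up.
sample='CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f2a1b3c4d5e6 nginx:latest "nginx" 2m ago Up 0.0.0.0:80->80/tcp mn
a9b8c7d6e5f4 redis:latest "redis" 5m ago Up 6379/tcp cache'

# Skip the header row, print the last column (the container name).
names=$(printf '%s\n' "$sample" | awk 'NR>1 {print $NF}')
echo "$names"
```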

2.3. Data volumes (container data management)

In the previous nginx case, when modifying the nginx html page, you need to enter nginx. And because there is no editor, modifying files is also very troublesome.

This is the consequence of the coupling between the container and the data (files in the container).
 

To solve this problem, the data must be decoupled from the container, which requires the use of data volumes.
2.3.1. What is a data volume?
A data volume (volume) is a virtual directory that points to a directory in the host file system.

Once the data volume is mounted, all operations on the container will be applied to the host directory corresponding to the data volume.

In this way, when we operate the directory of the host , it is equivalent to operating the directory /var/lib/docker/volumes/htmlin the container. 2.3.2. Data set operation commands/usr/share/nginx/html

The basic syntax for data volume operations is as follows:

 
 

    docker volume [COMMAND]

docker volume is the data volume command; the operation performed is determined by the subcommand that follows it:

  • create: create a volume
  • inspect: display information about one or more volumes
  • ls: list all volumes
  • prune: delete unused volumes
  • rm: delete one or more specified volumes


Create and view data volumes
Requirements: Create a data volume and view the directory location of the data volume on the host
① Create a data volume

 
 

    docker volume create html

② View all data volumes

 
 

    docker volume ls

result:
 

③ View the details of a data volume

 
 

    docker volume inspect html

result:
 

You can see that the host directory associated with the html data volume we created is /var/lib/docker/volumes/html/_data.
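docker volume inspect prints JSON, so the mount point can be extracted from a capture of it. The JSON below is a hand-written sample that imitates the shape of real output; for real work, `docker volume inspect -f '{{.Mountpoint}}' html` prints the path directly:

```shell
#!/bin/sh
# Pull "Mountpoint" out of a captured `docker volume inspect` sample.
# The JSON below imitates the real output shape; values are examples.
json='[{"CreatedAt":"2022-01-01T00:00:00Z","Driver":"local","Labels":{},"Mountpoint":"/var/lib/docker/volumes/html/_data","Name":"html","Options":{},"Scope":"local"}]'

mountpoint=$(printf '%s\n' "$json" | sed -n 's/.*"Mountpoint":"\([^"]*\)".*/\1/p')
echo "$mountpoint"   # /var/lib/docker/volumes/html/_data
```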
Summary:
The role of data volumes:

  • Separate and decouple the container from the data to facilitate the operation of data in the container and ensure data security


Data volume operations:

  • docker volume create: Create a data volume
  • docker volume ls: View all data volumes
  • docker volume inspect: View data volume details, including associated host directory location
  • docker volume rm: delete the specified data volume
  • docker volume prune: delete all unused data volumes

2.3.4. Mounting a data volume
When creating a container, we can use the -v parameter to mount a data volume onto a directory in the container. The command format is as follows:

 
 

    docker run \
      --name mn \
      -v html:/root/html \
      -p 8080:80 \
      nginx

Here -v is the parameter that mounts the data volume:

-v html:/root/html: mount the html data volume onto the directory /root/html in the container

2.3.5. Case - Mount a data volume to nginx
Requirement: create an nginx container and modify the content of index.html in the container's html directory

Analysis: in the last case we entered the nginx container and already know the location of nginx's html directory: /usr/share/nginx/html. We need to mount the html data volume onto this directory to make its contents easy to work with.

Tip: use the -v parameter to mount the data volume when running the container

Steps:

① Create a container and mount the data volume to the HTML directory in the container

 
 

    docker run --name mn -v html:/usr/share/nginx/html -p 80:80 -d nginx

② Enter the location of the html data volume and modify the HTML content

 
 

    # Check the location of the html data volume
    docker volume inspect html
    # Enter the directory
    cd /var/lib/docker/volumes/html/_data
    # Modify the file
    vi index.html

2.3.6. Case - Mount a local directory for MySQL
A container can not only mount data volumes but also mount host directories directly. The relationships are as follows:

  • Data volume mode: host directory -> data volume -> directory in the container
  • Direct mount mode: host directory -> directory in the container


As shown in the picture:
 

Syntax:
The syntax for directory mounting and data volume mounting is similar:

  • -v [host directory]:[directory in container]
  • -v [host file]:[file in container]
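Docker tells the two -v forms apart by the left-hand side of the colon: with docker run, an absolute path is treated as a host directory or file to bind-mount, while a bare name is treated as a named volume. A toy classifier for that rule (illustrative only, not Docker's actual parser):

```shell
#!/bin/sh
# Classify the left-hand side of a -v argument: absolute path -> direct
# host-directory (bind) mount, bare name -> named volume managed by Docker.
mount_kind() {
  case "${1%%:*}" in
    /*) echo "bind (host directory)" ;;
    *)  echo "volume (managed by Docker)" ;;
  esac
}

mount_kind "html:/usr/share/nginx/html"       # volume (managed by Docker)
mount_kind "/tmp/mysql/data:/var/lib/mysql"   # bind (host directory)
```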

Requirements: Create and run a MySQL container, and mount the host directory directly to the container

The implementation idea is as follows:

1) Upload the mysql.tar file from the pre-course materials to the virtual machine and load it as an image with the load command

2) Create the directory /tmp/mysql/data

3) Create the directory /tmp/mysql/conf and upload the hmy.cnf file provided in the pre-course materials to /tmp/mysql/conf

4) Check the documentation on DockerHub, then create and run the MySQL container with these requirements:

① mount /tmp/mysql/data to the data storage directory in the mysql container

② mount /tmp/mysql/conf/hmy.cnf to the configuration file location in the mysql container

③ set the MySQL password
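One possible command satisfying requirements ① to ③ is sketched below as a dry run (the command is stored in a variable and printed rather than executed). The /var/lib/mysql data directory, /etc/mysql/conf.d config location, and MYSQL_ROOT_PASSWORD variable come from the official mysql image's documentation; the -p 3306:3306 mapping and the password 123 are assumptions added for completeness:

```shell
#!/bin/sh
# Dry-run sketch of the MySQL case. The backslash-newlines inside the
# quotes are line continuations, so $cmd holds a single command line.
# Run it for real with: eval "$cmd"
cmd="docker run --name mysql \
 -e MYSQL_ROOT_PASSWORD=123 \
 -p 3306:3306 \
 -v /tmp/mysql/conf/hmy.cnf:/etc/mysql/conf.d/hmy.cnf \
 -v /tmp/mysql/data:/var/lib/mysql \
 -d mysql:5.7.25"
echo "$cmd"
```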
2.3.7. Summary
In the docker run command, use the -v parameter to mount a file or directory into the container:

  • -v volume name:directory in the container
  • -v host file:file in the container
  • -v host directory:directory in the container


Data volume mounting vs. direct directory mounting:

  • Data volume mounting has low coupling, and Docker manages the directory for us; however, the directory is buried deep and hard to find
  • Directory mounting has higher coupling and we must manage the directory ourselves, but the directory is easy to find and inspect

Chapter 3 Dockerfile Custom Image

Common images can be found on DockerHub, but for projects we write ourselves, we must build the image ourselves.

To customize an image, you must first understand the structure of the image.

3.1. Image structure

An image is a package of an application and its required system function libraries, environment, configuration, and dependencies.

Let’s take MySQL as an example to look at the structure of the image:
 

Simply put, an image is formed by layering application files, configuration files, dependencies, and so on, on top of the system function libraries and operating environment, then adding a startup script and packaging the whole thing together.

When we build an image, we actually implement the above packaging process.

3.2.Dockerfile syntax

When building a custom image, you do not need to copy and package each file.

We only need to tell Docker what our image is made of: which base images it needs, what files to copy, what dependencies to install, and what the startup script is. Docker will then build the image for us.

The file that describes this information is the Dockerfile.

A Dockerfile is a text file containing a series of instructions that describe what operations to perform to build the image. Each instruction forms a layer (Layer).
 

For detailed syntax instructions, please refer to the official documentation: https://docs.docker.com/engine/reference/builder

3.3. Build a Java project

3.3.1. Build a Java project based on Ubuntu
Requirements: Build a new image based on the Ubuntu image and run a java project

Step 1: create a new empty folder docker-demo
 

Step 2: copy the docker-demo.jar file from the pre-course materials into the docker-demo directory

Step 3: copy jdk8.tar.gz from the pre-course materials into the docker-demo directory

Step 4: copy the Dockerfile provided in the pre-course materials into the docker-demo directory

The content is as follows:

 
 

    # Specify the base image
    FROM ubuntu:16.04
    # Configure environment variable: the JDK installation directory
    ENV JAVA_DIR=/usr/local
    # Copy the JDK and the Java project's package
    COPY ./jdk8.tar.gz $JAVA_DIR/
    COPY ./docker-demo.jar /tmp/app.jar
    # Install the JDK
    RUN cd $JAVA_DIR \
      && tar -xf ./jdk8.tar.gz \
      && mv ./jdk1.8.0_144 ./java8
    # Configure environment variables
    ENV JAVA_HOME=$JAVA_DIR/java8
    ENV PATH=$PATH:$JAVA_HOME/bin
    # Expose the port
    EXPOSE 8090
    # Entry point: the Java project's startup command
    ENTRYPOINT java -jar /tmp/app.jar

Step 5: upload the prepared docker-demo folder to any directory on the virtual machine, then enter the docker-demo directory

Step 6: Run the command:

 
 

    docker build -t javaweb:1.0 .

Finally visit http://192.168.150.101:8090/hello/count, changing the IP to your virtual machine's IP.
3.3.2. Build a Java project based on java8
Although we can start from the Ubuntu base image and add whatever packages we need, building this way is troublesome. So in most cases we can start from a base image that already has some software installed and modify it.

For example, to build a Java project image, you can build it based on the prepared JDK base image.

Requirement: build the Java project into an image based on the java:8-alpine image

The implementation idea is as follows:

① Create a new empty directory, then create a new file in the directory and name it Dockerfile

② Copy the docker-demo.jar provided in the pre-course materials to this directory

③ Write Dockerfile:

a) Based on java:8-alpine as the base image

b) Copy app.jar to the image

c) Exposed port

d) Write entry ENTRYPOINT

The content is as follows:

 
 

    FROM java:8-alpine
    COPY ./app.jar /tmp/app.jar
    EXPOSE 8090
    ENTRYPOINT java -jar /tmp/app.jar

④ Use the docker build command to build the image

⑤ Use docker run to create a container and run it

3.4. Summary


  • The essence of a Dockerfile is a file that describes the image building process through instructions.
  • The first line of Dockerfile must be FROM to build from a base image
  • The base image can be a base operating system, such as Ubuntu. It can also be a mirror made by others, for example: java:8-alpine
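The "first instruction must be FROM" rule can be spot-checked mechanically. This is a toy linter over a demo Dockerfile (docker build enforces the real rule, which also permits comments and ARG before FROM):

```shell
#!/bin/sh
# Toy check that a Dockerfile's first real instruction is FROM.
# (Real Docker also permits ARG and comments before FROM.)
cat > /tmp/Dockerfile.demo <<'EOF'
# demo
FROM java:8-alpine
COPY ./app.jar /tmp/app.jar
EXPOSE 8090
ENTRYPOINT java -jar /tmp/app.jar
EOF

# First line that is neither a comment nor blank:
first=$(grep -v '^[[:space:]]*#' /tmp/Dockerfile.demo | grep -v '^[[:space:]]*$' | head -n 1)
case "$first" in
  FROM*) ok=yes ;;
  *)     ok=no  ;;
esac
echo "$ok"   # yes
```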

Chapter 4 Docker Compose (not the focus; the focus can be learning K8S)

Docker Compose can help us quickly deploy distributed applications based on Compose files without having to manually create and run containers one by one!
 


4.1. First introduction to DockerCompose

A compose file is a text file that defines through instructions how each container in the cluster should run. The format is as follows:

 
 

    version: "3.8"
    services:
      mysql:
        image: mysql:5.7.25
        environment:
          MYSQL_ROOT_PASSWORD: 123
        volumes:
          - "/tmp/mysql/data:/var/lib/mysql"
          - "/tmp/mysql/conf/hmy.cnf:/etc/mysql/conf.d/hmy.cnf"
      web:
        build: .
        ports:
          - "8090:8090"

The Compose file above describes a project, which contains two containers:

  • mysql: a container built based on the mysql:5.7.25 image and mounted with two directories
  • web: a container whose image is built on the spot with docker build, with port 8090 mapped

For detailed syntax of DockerCompose, please refer to the official website: https://docs.docker.com/compose/compose-file/

In fact, a DockerCompose file can be seen as multiple docker run commands written into one file, just with slightly different syntax.
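Since a Compose file is plain YAML, the service names can be read off as the keys indented two spaces under services:. The sed sketch below does this for a file shaped like the example above; it is a text-matching toy only, and for real work `docker-compose config --services` or a proper YAML parser is the right tool:

```shell
#!/bin/sh
# List service names from a Compose file: they are the keys indented
# exactly two spaces under "services:". A sed sketch only.
cat > /tmp/compose.demo.yml <<'EOF'
version: "3.8"
services:
  mysql:
    image: mysql:5.7.25
  web:
    build: .
    ports:
      - "8090:8090"
EOF

services=$(sed -n 's/^  \([a-z][a-z0-9_-]*\):$/\1/p' /tmp/compose.demo.yml)
echo "$services"   # mysql and web, one per line
```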

4.2. Install DockerCompose

Refer to the pre-course materials, omitted for now

4.3. Deploy microservice cluster

Requirement: Deploy the previously learned cloud-demo microservice cluster using DockerCompose

Implementation ideas:

① Check the cloud-demo folder provided in the pre-class materials. The docker-compose file has been written in it.

② Modify your cloud-demo project and name the database and nacos addresses as the service names in docker-compose

③ Use the maven packaging tool to package each microservice in the project as app.jar

④ Copy the packaged app.jar to each corresponding subdirectory in cloud-demo

⑤ Upload cloud-demo to the virtual machine and deploy it with docker-compose up -d
4.3.1. The compose file
Check the cloud-demo folder provided in the pre-course materials: the docker-compose file is already written, and each microservice has its own independent directory:
 

The content is as follows:

 
 

    version: "3.2"
    services:
      nacos:
        image: nacos/nacos-server
        environment:
          MODE: standalone
        ports:
          - "8848:8848"
      mysql:
        image: mysql:5.7.25
        environment:
          MYSQL_ROOT_PASSWORD: 123
        volumes:
          - "$PWD/mysql/data:/var/lib/mysql"
          - "$PWD/mysql/conf:/etc/mysql/conf.d/"
      userservice:
        build: ./user-service
      orderservice:
        build: ./order-service
      gateway:
        build: ./gateway
        ports:
          - "10010:10010"

As you can see, it contains 5 services:
 

Check the mysql directory and you can see that the cloud_order and cloud_user tables have been prepared:

Check the microservice directories and you can see that each contains a Dockerfile:

The content is as follows:

 
 

    FROM java:8-alpine
    COPY ./app.jar /tmp/app.jar
    ENTRYPOINT java -jar /tmp/app.jar

4.3.2. Modify the microservice configuration
Because the microservices will be deployed as Docker containers, and containers reach each other by container name rather than IP address, we modify the mysql and nacos addresses in the order-service, user-service, and gateway services to use container names.

As follows:

 
 

    spring:
      datasource:
        url: jdbc:mysql://mysql:3306/cloud_order?useSSL=false
        username: root
        password: 123
        driver-class-name: com.mysql.jdbc.Driver
      application:
        name: orderservice
      cloud:
        nacos:
          server-addr: nacos:8848 # nacos service address

4.3.3. Packaging
Next we need to package each microservice. Because the jar name in the Dockerfile above is app.jar, every microservice must be packaged under that name.

This can be achieved by modifying the packaging name in pom.xml. Each microservice needs to be modified:

 
 

    <build>
      <!-- The final name of the service package -->
      <finalName>app</finalName>
      <plugins>
        <plugin>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
      </plugins>
    </build>

After packaging:
 

4.3.4. Copy the jar packages to the deployment directories
The compiled app.jar file must sit in the same directory as its Dockerfile. Note: each microservice's app.jar goes into the directory matching that service's name; don't mix them up.

user-service:
 

order-service:

gateway:

4.3.5. Deployment
Finally, we need to upload the entire cloud-demo folder to the virtual machine for DockerCompose deployment.

Upload to any directory:
 

Deployment:
Enter the cloud-demo directory, and then run the following command:

 
 

    docker-compose up -d

Chapter 5 Docker image registry

5.1. Build a private image registry

Refer to the pre-course material "CentOS7 Installation Docker.md", which will be omitted for now.

5.2. Push and pull images

To push an image to a private registry, you must tag it first. The steps are as follows:

① Retag the local image with the private registry's address as the name prefix: 192.168.150.101:8080/

 
 

    docker tag nginx:latest 192.168.150.101:8080/nginx:1.0

② Push the image

 
 

    docker push 192.168.150.101:8080/nginx:1.0

③ Pull the image

 
 

    docker pull 192.168.150.101:8080/nginx:1.0
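The retagging step is pure string prefixing, so the private name can be derived mechanically. A small helper, using the tutorial's example registry address (your own registry address would differ):

```shell
#!/bin/sh
# Derive the private-registry name for an image: prefix the registry
# address and attach the tag to push under.
registry="192.168.150.101:8080"

private_name() {  # usage: private_name <repo> <tag>
  echo "$registry/$1:$2"
}

target=$(private_name nginx 1.0)
echo "docker tag nginx:latest $target"
echo "docker push $target"
```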

Appendix

Markdown file and ppt tutorial file download:


Origin blog.csdn.net/liuwfeii/article/details/125765900