4. Docker container technology

Course content

  • DevOps and cloud native
  • Docker basic commands
  • Installing software with Docker
  • Docker project deployment

1. DevOps and cloud native

1. Pain points of microservices

Let’s take a look at our microservice architecture. Each component requires a server to deploy, and the total may require dozens or even hundreds of servers.

What problems will such a microservice project encounter in the deployment?

  • It requires many servers, and procuring, installing, and wiring them is troublesome.
  • Every service must be compiled, built, packaged, tested, and released, multiplying the operations workload.
  • Each of those steps (compiling, packaging, testing, releasing, going live) is manual and error-prone.
2. What is DevOps

Problem: a software iteration includes design, coding, compilation, building, testing, release, operations, and so on. The early development model was waterfall, which iterates too slowly: every stage consumes large amounts of manpower and time, so a single iteration often takes very long. Today's enterprises pursue agile development: rapid development, rapid iteration, and a software development life cycle that is as short as possible.

DevOps emphasizes how organizations can manage the software life cycle efficiently through automated tooling, collaboration, and communication, thereby delivering more stable software faster and more frequently.

DevOps is a methodology that covers the entire process of development, testing, and operations. It improves the quality of communication and collaboration between software development, testing, operations, QA, and other departments, and it emphasizes using automation to manage software changes and integration, making builds, tests, and releases faster and more reliable so that software is ultimately delivered on time.

DevOps is a set of solutions that takes a project from development to operations rather than a specific technology. It is realized by integrating a series of tools, for example: pulling code with Git, cleaning, compiling, packaging, and testing the project with Maven commands, building images with Docker commands, and so on.

Technologies used in DevOps include: GitHub, Git/SVN, Docker, Jenkins, Hudson, Ant/Maven/Gradle (compilation, packaging), Selenium (automated testing), QUnit, JMeter (performance testing), etc.


Two concepts worth knowing:

  • CI (Continuous Integration): code changes are integrated into the mainline frequently, and each integration is verified by an automated build and test, so the final product can reach the production environment and its users as soon as possible. Continuous delivery is the goal every enterprise pursues, and the "CD" in CI/CD usually refers to continuous delivery.
  • CD (Continuous Deployment): continuous deployment builds on continuous delivery by automating compilation, testing, packaging, and deployment all the way to the production environment.
3. What is agile development?

The traditional waterfall model requires all functions of a project version to be developed before the version iterates. This not only slows version iteration but also delays the detection of software quality problems and prevents continuous delivery.

The most important goal of agile development is to satisfy customers by delivering valuable software early and continuously. Achieving this requires more frequent version iterations in which compilation, packaging, testing, and operations are completed automatically, yielding a faster software development life cycle.

The key difference between Agile and DevOps is that Agile is a philosophy about how to develop and deliver software, while DevOps describes how to continuously deploy code through the use of modern tools and automated processes.

4. What is cloud native

The term cloud native was first proposed by Matt Stine of Pivotal in 2013. Cloud native is a technology and product system built on distributed deployment and unified operation and management, based on containers, microservices, DevOps, and related technologies. Its characteristic features are DevOps, continuous delivery, microservices, and containers.

Cloud native technology helps organizations build and run elastically scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Representative cloud-native technologies include containers, service meshes, microservices, immutable infrastructure, and declarative APIs. These techniques enable loosely coupled systems that are fault tolerant, easy to manage, and easy to observe. Combined with reliable automation, they let engineers make frequent, predictable, high-impact changes with minimal effort.

2. Getting started with Docker basics

1. Getting to know Docker

Virtualization technology
In computing, virtualization is a resource management technique that abstracts the computer's physical resources, such as servers, networks, memory, and storage, and presents them in a transformed way that breaks the rigid boundaries between physical structures, letting users apply these resources more flexibly than the original configuration allows. Virtualization is mainly used to repartition and reuse high-performance hardware with excess capacity and to consolidate older, low-capacity hardware.

Docker
Docker is an open-source application container engine that lets developers package their applications and dependencies into a portable image and publish it to any popular Linux or Windows machine, which can also be virtualized. Containers use a sandbox mechanism and have no interfaces to one another.

Docker's features:

  • Quick to get started: Easy to install and use, just use commands
  • Logical classification of responsibilities: programmers care about how to write programs in containers, and operation and maintenance care about how to manage docker containers.
  • Fast and efficient development life cycle: rapid deployment
  • Encourage the use of service-oriented architecture: born for microservices
  • Solve the problem of inconsistent environments: Docker can package the environment required by the project into the image, and the environments of multiple containers started by the same image will be consistent.
2. How Docker works
2.1. Docker images

Images are the cornerstone of Docker. Users run their containers from images, and images are the "build" part of the Docker life cycle. An image is a layered structure based on a union file system, built step by step through a series of instructions.
A simple analogy: installing an operating system requires an ISO image, installing software requires an installation package, and running a Docker container requires a corresponding image. For example, to start an Nginx container you download the Nginx image from an image registry and run it as a container.

2.2. Registry (image registry/repository)

Docker uses a registry to store images built by users. Registries come in two kinds, public and private. Docker runs a public registry called Docker Hub, where users can register an account to share and store their own images (note: pulling from Docker Hub can be very slow in some regions, so you may want to run a private registry instead).

2.3. Local image store

The local store where Docker keeps images on each host; remotely downloaded images are saved there. It is analogous to Maven's local repository.

2.4. Docker containers

Docker helps you build and deploy containers; you simply package your application or service into one. Containers are started from images, and one or more processes can run inside a container. What runs in the container is your project or software on top of a minimal base system (such as a slimmed-down CentOS), and the container must be started before that software can run.

2.5. Docker client and server

Docker is a client-server (C/S) architecture program. The Docker client only needs to make a request to the Docker server or daemon, and the server or daemon will do all the work and return the results. Docker provides a command line tool Docker and a complete set of RESTful APIs. You can run the Docker daemon and client on the same host, or you can connect from a local Docker client to a remote Docker daemon running on another host.

3. Docker installation

Docker officially recommends installing on Ubuntu, because Docker is developed on Ubuntu and Ubuntu is generally the first to receive Docker updates and patches. Many CentOS versions do not support updating to the latest patch packages; the steps below install the latest Docker CE on CentOS 7.

Step 1: Add Docker's yum repository

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Step 2: Install docker

sudo yum -y install docker-ce

Step 3: Start docker

sudo systemctl start docker

Step 4: Configure an image accelerator. By default, pulling images from overseas registries is slow; the configuration below uses Alibaba Cloud's mirror.

vi /etc/docker/daemon.json

Add the following content:

{
  "registry-mirrors": ["https://5pfmrxk8.mirror.aliyuncs.com"]
}
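After editing daemon.json it is worth confirming the file is valid JSON before restarting Docker, because a malformed file prevents the daemon from starting. A minimal sketch of such a check (the mirror URL is the one configured above; the /tmp path is illustrative, on a real host you would check /etc/docker/daemon.json):

```shell
# Write the mirror configuration and validate it as JSON before restarting
# Docker; a syntax error here would stop the daemon from coming back up.
cat <<'EOF' > /tmp/daemon.json
{
  "registry-mirrors": ["https://5pfmrxk8.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```

Once the check passes, apply the change with systemctl restart docker.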

The following commands manage the Docker service. systemctl is the system service manager, a combination of the older service and chkconfig commands.

  • Start docker: systemctl start docker
  • stop docker: systemctl stop docker
  • Restart docker: systemctl restart docker
  • Check docker status: systemctl status docker
  • Start on boot: systemctl enable docker
  • View docker summary information: docker info
  • View the docker help documentation: docker --help
4. Docker image operations

Images are the cornerstone of Docker. Users run their own containers based on images. Images are also the "build" part of the Docker life cycle. The image is a layered structure based on the joint file system, which is built step by step through a series of instructions.

4.1. List local images
docker images
  • REPOSITORY: the repository the image belongs to
  • TAG: the image tag (usually the version)
  • IMAGE ID: the image ID
  • CREATED: the date the image was created (not the date it was pulled)
  • SIZE: the image size

These images are stored in the /var/lib/docker directory of the Docker host.

4.2. Delete image

You can reference an image either by name:version or by ID:

docker rmi <image id | image name:version>
  • docker rmi <image id>: delete the specified image
  • docker rmi `docker images -q`: delete all images
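The backticks in the delete-all form are plain shell command substitution: `docker images -q` prints one image ID per line, and the shell splices that list into the arguments of docker rmi. A small sketch with a mocked ID list (no daemon needed; the IDs are made up for illustration):

```shell
# Mock `docker images -q`, which prints one image ID per line.
mock_image_ids() {
  printf '%s\n' f28cf1e2a4ba 605c77e624dd
}
# Command substitution splices the IDs into a single rmi invocation.
echo "docker rmi $(mock_image_ids | tr '\n' ' ')"
```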
4.3. Search remote images

If you need to find the required image from the network, you can search with the following command

docker search <image name>
  • NAME: repository name
  • DESCRIPTION: image description
  • STARS: number of stars, reflecting the image's popularity
  • OFFICIAL: whether the image is official
  • AUTOMATED: whether the image was built by Docker Hub's automated build process

Note: We can find the image and version through hub.docker.com

4.4. Pull the image

Pulling official images from Docker Hub (docker.io) can be slow in some regions. You can use a domestic mirror accelerator instead; the mirrored images should be identical to the official ones, only faster to download, which is why they are recommended.

docker pull <image name>:<version>
5. Docker container operations

Docker can help you build and deploy containers. You only need to package your applications or services into the container. Containers are started based on images, and one or more processes can run in the container.

5.1. Create a container

Description of commonly used parameters for creating containers:

  • Create container command: docker run
  • -i: keep STDIN open (interactive)
  • -t: allocate a pseudo terminal so you enter the container's command line after it starts. With both -i and -t you are logged in as soon as the container is created. Note: in an interactive container, executing "exit" stops the container, while "Ctrl+p, Ctrl+q" detaches without stopping it.
  • -d: run the container in the background as a daemon container (the container is created without logging you in; with only -i -t you are logged in on creation). Note: a daemon container is not stopped when an exec session inside it exits.
  • --name: assign a name to the container.
  • -p: port mapping; the former is the host port, the latter the container port. Repeat -p for multiple mappings.
  • -v: directory mapping; the former is the host directory, the latter the container directory. Repeat -v to map multiple directories or files. Tip: prefer directory mapping, so you can edit files on the host and have the changes shared into the container.
  • --restart: restart policy when the container exits abnormally: always always restarts, no never restarts, on-failure restarts only when the container exits with a non-zero status.
  • -m: memory limit for the container, e.g. -m 2048M caps the container's usable memory at 2 GB.

Interactive container example: An interactive container provides a pseudo terminal

docker run -i -t --name=<container name> -v=/host/dir:/container/dir <image>:<version> /bin/bash

Background container (daemon container)

 docker run -i -d --name=<container name> -v=/host/dir:/container/dir <image>:<version>

Note: for daemon containers use -d instead of -t, and do not append /bin/bash.
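The flags above combine into a single command line. The sketch below only assembles and prints such a command; the name, ports, and paths are illustrative placeholders, and on a real Docker host you would execute the printed command.

```shell
# Assemble a docker run command from the flags described above.
# All values here are placeholders for illustration.
NAME=myapp
IMAGE=nginx:latest
CMD="docker run -d --name=$NAME -p 8080:80 -v /data/html:/usr/share/nginx/html --restart=always -m 512M $IMAGE"
echo "$CMD"
```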

5.2. View containers

View all containers

docker ps -a

View exited containers

 docker ps -f status=exited

View the last running container

docker ps -l


5.3. Enter the container

Sometimes we need to enter the container to check the situation of the container

docker exec -i -t <container name> /bin/bash

Note: -i -t and the trailing /bin/bash are required; reference the container by name or ID.

5.4. Exit the container

Forceful exit: exit
Note: exit stops an interactive container but not a daemon container. Graceful detach: Ctrl+p, Ctrl+q

5.5. Closing the container

kill stops the container immediately, faster (and less gracefully) than stop:

 docker stop <container name|id>   or   docker kill <container name|id>
5.6. Container startup

Stopped containers can be restarted using start

 docker start <container name|id>
5.7. Deletion of containers

Please pay attention to distinguish the deletion of containers and images. Deleting containers is docker rm, and deleting images is docker rmi.

docker rm <container name|id>

Delete all containers: docker ps -a -q lists the IDs of all containers, which are then removed in one batch:

docker rm `docker ps -a -q`
5.8. Inspect a container
docker inspect <container name>

Check the IP: each container is automatically assigned an IP address, but it may change after a restart, so it is generally not relied on.

 docker inspect --format='{{.NetworkSettings.IPAddress}}' mycentos2
5.9. File copy

File sharing between the host and the container can be achieved through -v directory mapping, but sometimes flexible file copying is required and can be accomplished using docker cp, as follows

Copy into the container:
 docker cp <file or directory> <container name>:<path in container>
Copy out of the container:
 docker cp <container name>:<path in container> <destination on host>
6. Installing software with Docker
6.1. Create a MySQL container
docker run -i -d --name=mysql -p=3306:3306 -e MYSQL_ROOT_PASSWORD=itsource123456 mysql:5.7
  • -e MYSQL_ROOT_PASSWORD: specifies the password of the root user
6.2. Create Nginx container
docker run -i -d --name=nginx -p=80:80 -v=/usr/local/nginx/html:/usr/share/nginx/html nginx
  • -v=/usr/local/nginx/html:/usr/share/nginx/html: Directory mapping, we only need to put the html page in /usr/local/nginx/html and it will be automatically synchronized to the container
6.3. Create redis container
docker run -di --name=redis -p=6379:6379  redis --requirepass "123456" 
  • --requirepass "123456": specifies the Redis password

3. Building Docker images

Containers are started from images. If we want to install a certain software, we need to download the corresponding image. We can also build our own image. There are three ways to build an image:

  • Build an image from an existing container
  • Dockerfile build (script-based)
  • Maven plugin build (which ultimately also generates a Dockerfile)


1. Build from a container
docker commit <container name> <new image name>:<version>
2. Build from a Dockerfile
2.1. Understanding the Dockerfile

A Dockerfile is a text document containing all the commands a user could call on the command line to assemble an image. Docker builds images automatically by reading the instructions in a Dockerfile. The docker build command builds an image from a Dockerfile, and the -f flag lets you point to a Dockerfile anywhere in the file system. Common instructions include FROM, MAINTAINER, RUN, ADD, COPY, ENV, WORKDIR, EXPOSE, CMD, and ENTRYPOINT.
Let's take building a JDK 1.8 image as an example. Requirement: use a Dockerfile to build a JDK 8 image based on centos:7.

2.2. Build the image of JDK1.8

Step 1: Create a working directory:

mkdir /root/dockerfile
cd /root/dockerfile

Step 2: Upload jdk-8u171-linux-x64.tar.gz from Windows to the /root/dockerfile directory on Linux.

Step 3: Create a file named Dockerfile (vi Dockerfile) with the following content:

FROM centos:7
MAINTAINER itsource
WORKDIR /usr
RUN mkdir /usr/local/java
ADD jdk-8u171-linux-x64.tar.gz  /usr/local/java
ENV JAVA_HOME /usr/local/java/jdk1.8.0_171
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/bin/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
ENV PATH $JAVA_HOME/bin:$PATH
  • FROM: use centos:7 as the base image
  • RUN: create the /usr/local/java directory
  • ADD: copy the JDK archive into that directory; tar archives are extracted automatically
  • ENV: configure the Java environment variables

Step 4: Build the image. Docker looks in the current directory for a file named Dockerfile and executes its instructions one by one to build the image.

[root@VM-0-10-centos ~]# docker build -t="jdk:1.8" ./
  • docker build: build image
  • -t: Specify the name of the image
3. Packaging a project into an image
3.1. Enable the Docker remote port

Building an image with Maven's Docker plugin is still, in essence, building with a Dockerfile; the plugin simply generates the Dockerfile for us, packages the project jar with it, and produces a new image.
To achieve this, you need to enable Docker remote access so the image can be built from the development machine, and install Maven's Docker plugin.
With many microservices, manually writing a Dockerfile and deploying each one is tedious and error-prone, so here we learn automated image building, which is also the approach commonly used in enterprise development.

Step 1: Open the Docker remote port [Danger: this exposes the daemon without authentication]

[root@VM-0-10-centos ~]# vi /lib/systemd/system/docker.service

Step 2: In the ExecStart= line, append -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock, then reload the configuration and restart Docker:

[root@VM-0-10-centos ~]# systemctl daemon-reload 	# reload the Docker configuration
[root@VM-0-10-centos ~]# systemctl restart docker 	# restart Docker

Open the port in the firewall:

[root@VM-0-10-centos ~]# firewall-cmd --permanent --zone=public --add-port=2375/tcp
[root@VM-0-10-centos ~]# systemctl restart firewalld

Access test: open http://<server IP>:2375/version in a browser; the Docker API should return JSON.

3.2. Add the Docker plugin to the project

The Docker plugin generates the Dockerfile automatically, builds the image from it, and packages the microservice directly into an image that can be pushed to a private registry.

<build>
  <finalName>pethome</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <!-- Docker Maven plugin, see: https://github.com/spotify/docker-maven-plugin -->
    <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>docker-maven-plugin</artifactId>
      <version>0.4.13</version>
      <configuration>
        <!-- image name, e.g. pethome:1.0 -->
        <imageName>${project.artifactId}:${project.version}</imageName>
        <!-- base image -->
        <baseImage>jdk:1.8</baseImage>
        <entryPoint>["java", "-jar", "/${project.build.finalName}.jar"]</entryPoint>
        <resources>
          <resource>
            <targetPath>/</targetPath>
            <directory>${project.build.directory}</directory>
            <include>${project.build.finalName}.jar</include>
          </resource>
        </resources>
        <!-- remote Docker host -->
        <dockerHost>http://ip:2375</dockerHost>
      </configuration>
    </plugin>
  </plugins>
</build>
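For reference, the Dockerfile the plugin effectively generates from this configuration looks roughly like the sketch below. This is an assumption inferred from the baseImage, entryPoint, and resource settings above (the plugin typically writes the generated Dockerfile under target/docker):

```dockerfile
# Roughly what the spotify plugin generates from the pom configuration above
FROM jdk:1.8
ADD pethome.jar /
ENTRYPOINT ["java", "-jar", "/pethome.jar"]
```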
3.3. Build the image

Open the IDEA terminal, change into the directory containing pom.xml, and run:

mvn clean package -Dmaven.test.skip=true docker:build

Note: if the build fails, check the following:

  • Docker remote access is not enabled, or Docker was not restarted afterwards
  • Port 2375 is not open in the firewall
  • The plugin configuration is wrong; check it carefully

4. Docker container communication

Different containers have different applications installed, and applications need to communicate. For example, the container where the project is located needs to be connected to the Redis container. There are many ways to communicate between containers.

1. Use IP communication

By default, containers can reach each other using their container IP addresses, but a container's IP may change after it restarts. Check the IP as follows:

docker inspect <container> | grep IPAddress
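docker inspect emits a JSON document, and grep simply filters the lines mentioning IPAddress. With a mocked fragment of inspect output (an illustrative sample, no daemon needed), the filter behaves like this:

```shell
# A trimmed, illustrative fragment of `docker inspect` output; grep keeps
# every line mentioning IPAddress, including SecondaryIPAddresses.
cat <<'EOF' | grep IPAddress
    "NetworkSettings": {
        "SecondaryIPAddresses": null,
        "IPAddress": "172.17.0.2",
EOF
```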
2. Use port mapping

Port mapping exposes a container to the outside, and external clients reach it via the host IP and the mapped port. Some applications should not be exposed to the external network; Redis and MySQL, for example, should normally stay internal, although they are sometimes exposed temporarily for convenience, such as importing SQL.

3. Use link communication

When starting a container, use --link to assign an alias to a target container; the container can then reach the target by that alias. Format: --link <target container>:<alias>. For example, for a container to link to a container named mysql, configure it as follows:

Create a Mysql container with the name: mysql

docker run -id --name=mysql -p=3306:3306 -e MYSQL_ROOT_PASSWORD=itsource123456 mysql:5.7

Create the project container, named pethome:

  • link it to the mysql container, with the link alias mysql
  • link it to the redis container, with the link alias redis
docker run -di --name=pethome --link mysql:mysql --link redis:redis pethome:1.0-SNAPSHOT

Then you can connect like this in the configuration file in the project

url: jdbc:mysql://mysql:3306/drive-order?serverTimezone=Asia/Shanghai&characterEncoding=utf8
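In a Spring Boot project the link aliases replace hard-coded IPs throughout the configuration. A sketch of the relevant application.yml fragment (the database name is the one from the URL above; the passwords reuse those from the container examples earlier and are illustrative assumptions):

```yaml
spring:
  datasource:
    # "mysql" resolves to the linked MySQL container
    url: jdbc:mysql://mysql:3306/drive-order?serverTimezone=Asia/Shanghai&characterEncoding=utf8
    username: root
    password: itsource123456
  redis:
    # "redis" resolves to the linked Redis container
    host: redis
    port: 6379
    password: "123456"
```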
4. Create a bridge network

Another way is to create a bridge network, as follows:

docker network create -d bridge mybridge   # create your own bridge network

Join containers to the network:

docker run -d --name box5 --network mybridge busybox /bin/sh -c "while true;do sleep 3600;done"


docker run -d --name box6 --network mybridge busybox /bin/sh -c "while true;do sleep 3600;done"

With a user-created bridge network, the two containers can reach each other by container name automatically.


Origin blog.csdn.net/u014494148/article/details/133636791