Docker development practice notes (3)

1. Container Network Foundation

Since a container runs on a host machine, we need a way to make it reachable from the external network so that the services it provides can be used. When Docker starts, it creates a virtual network interface called docker0 on the host. On Linux you can view it with the ifconfig command; on Windows, use ipconfig.

1.1 Expose network ports

When running a network application in Docker, to allow external access you need to specify a port mapping with the -P or -p parameter. Exposing ports through this mapping is the basic way for containers to provide services to the outside world.

-P (uppercase): Docker assigns an unused port on the host from the ephemeral port range (older documentation cited 49000~49900, while current versions typically start at 32768) and maps it to the container's open network port (that is, the port declared with EXPOSE).

Here is an example of an official training program:

docker run -d -P training/webapp python app.py

The assigned port number can be viewed with the following command:
docker ps

In the PORTS column of the output, 32768 is the randomly assigned host port (it may differ on each run), and 5000 is the port exposed by the container. The service can then be accessed at http://localhost:32768.
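The host port can also be pulled out of the PORTS column programmatically. A minimal shell sketch using a sample mapping string (the value 0.0.0.0:32768->5000/tcp is illustrative; the real value changes per run):

```shell
# Sample value from the PORTS column of `docker ps` (illustrative):
ports="0.0.0.0:32768->5000/tcp"
# Strip everything up to the first colon, then everything from "->" onward:
host_port=${ports#*:}
host_port=${host_port%%-*}
echo "$host_port"   # prints the host port, here 32768
```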

-p (lowercase): maps a specified port on the host to an open port inside the container. The supported formats are as follows:

ip:hostPort:containerPort //binds the specified IP and port on the host to the container's open port

docker run -d -p 192.168.0.1:8000:5000 training/webapp python app.py

ip::containerPort //binds a random port on the specified IP to the container's open port

docker run -d -p 192.168.0.1::5000 training/webapp python app.py
//equivalent to omitting the host port, so the two colons end up adjacent

hostPort:containerPort //binds the specified port on all of the host's network interfaces (so the service can be reached via localhost, the LAN IP, the hostname, etc. plus the port).

docker run -d -p 8000:5000 training/webapp python app.py

All configuration information of the container can be viewed with the following command:

docker inspect <container ID or name>
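The inspect output is large; a Go template can extract a single field such as the mapped host port. A sketch, where "webapp" is a hypothetical container name (the command is only printed here so it can be copied and run on a host where the container exists):

```shell
# Go template that pulls the host port bound to container port 5000/tcp:
fmt='{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'
echo "docker inspect --format '$fmt' webapp"
```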

 

2. Data volume

Data in Docker can be stored on a medium similar to a virtual machine disk; in Docker this is called a data volume. Data volumes can store the data of Docker applications and can also be used to share data between containers.
A data volume is presented to a container as a directory. It can be shared among multiple containers, and modifications to it do not affect the image. Using a Docker data volume is similar to mounting a file system with mount.

A data volume is a special directory that can be used by one or more containers for the following purposes.

1) Bypass the "copy-on-write" system to get local-disk I/O performance. (For example, when a running container modifies the contents of a data volume, the change is made directly to the volume on the host, so writes run at local disk speed instead of first writing a copy inside the container and synchronizing the modified content afterwards.)
2) Bypass the "copy-on-write" system so that certain files do not need to be packed into the image by docker commit.
3) Data volumes can share and reuse data between containers.
4) Data volumes can share data between the host and containers (a single file, or sockets).
5) Changes to a data volume take effect directly.
6) Data volumes persist until no container uses them. Even if the original data volume container, or a data volume container in the middle of a chain, is deleted, the data is not lost as long as some other container still uses the volume.
 

2.1 Create and mount data volumes

There are two ways to create a data volume, as follows:

1. In the Dockerfile, use the VOLUME instruction, for example: VOLUME /var/lib/mysql

2. When using docker run on the command line, use the -v parameter to create a data volume and mount it into the container

docker run -d -P -v /webapp training/webapp python app.py

The above only declares a /webapp data volume without specifying a directory on the host; Docker automatically assigns the volume a uniquely named directory on the host.
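To find where Docker placed the anonymous volume, the mount list can be read from the inspect output. A sketch, where "webapp" is a hypothetical container name and the .Mounts field is what recent Docker versions report (the command is printed so it can be run where the container exists):

```shell
# Go template listing each mount's container path and its host source path:
fmt='{{range .Mounts}}{{.Destination}} => {{.Source}}{{end}}'
echo "docker inspect --format '$fmt' webapp"
```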


1) A data volume is a specially designated directory that bypasses the container's union file system (UFS) to provide persistence and data sharing for containers. Data volumes can be shared among multiple containers.
2) To create a data volume, just pass the -v parameter to docker run; multiple -v parameters create multiple data volumes. After a container with a data volume has been created,
   other containers can mount its volumes through the --volumes-from parameter, whether or not that container is running. You can also declare one or more data volumes with the VOLUME instruction in a Dockerfile.
3) If some data needs to be shared among multiple containers, or used by temporary containers, the best solution is to create a data volume container and have the temporary containers mount its volumes.
   This way, even if the original data volume container, or a data volume container in the middle of a chain, is deleted, the volume is not removed as long as other containers still use it.
4) Commands such as docker export, save, and cp cannot back up the contents of a data volume, because the volume exists outside the image. To back one up, create a new container that mounts the data volume container and a local directory at the same time,
   then copy the volume's contents into the mapped local directory with a backup command. As follows:
   # docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
5) A local host directory can also be mounted into a container as a data volume, again with the -v parameter of docker run, but the argument is no longer a single directory: it takes the form [host-dir]:[container-dir]:[rw|ro],
   where host-dir must be an absolute path. If host-dir is omitted, Docker creates a new data volume; if host-dir is given but points to a directory that does not exist, Docker creates that directory and uses it as the data source.

docker run -d -P --name webapp -v `pwd`:/webapp training/webapp python app.py
//use pwd to get the current absolute path

Note: If the /webapp directory already exists inside the container, its contents are shadowed by the mounted host directory. (But generally you should not write important data directly inside the container anyway.)

Note that a Dockerfile does not support mounting a local directory as a data volume; that is, the VOLUME instruction cannot bind a host directory. This is mainly because directory formats differ between operating systems, and to keep Dockerfiles portable this is not supported. Host directories can only be mounted with the -v parameter.

Access permissions can be appended after the mount directory; the default is rw. Use the following format:

docker run -d -P -v `pwd`:/webapp:ro training/webapp python app.py
docker run -d -P -v `pwd`:/webapp:rw training/webapp python app.py

In addition to mounting the host directory, you can also mount the host's files as data volumes:

docker run --rm -it -v d:/test.txt:/test.txt ubuntu:latest /bin/bash
//the mounted file must already exist; otherwise Docker creates a directory with the same name. For example, if the test.txt above does not exist, a directory named test.txt is created.

 

2.2 Data Volume Container

As the name suggests, a data volume container refers to a container dedicated to mounting data volumes for reference and use by other containers. It is mainly used when multiple containers need to get data from one place. In actual operation, the data container needs to be named. After the container name is determined, other containers that depend on it can refer to its data volume through --volumes-from.

First, create a data volume container named test_dbdata, and create a new data volume /dbdata for the container. The specific operations are:

docker run -d -v /dbdata --name test_dbdata training/postgres

Then create a container db1 that references the data volume of test_dbdata. The specific operations are:

docker run -d --volumes-from=test_dbdata --name db1 training/postgres

Container mount information can be viewed through docker inspect test_dbdata/db1.

From the inspect output it can be seen that their data volumes are the same. Note that once a data volume is declared, its life cycle is independent of the container that declared it: even when the declaring container is stopped, the data volume still exists, and it is only removed when no container references it and it is explicitly deleted. In addition, a container referencing a data volume container does not require that container to be running. A data volume container can be referenced by multiple containers:

docker run -d --volumes-from=test_dbdata --name db2 training/postgres

In addition, data volume containers can also cascade references. For example, create a new container db3, which can reference the data volume of db1.

docker run -d --volumes-from=db1 --name db3 training/postgres

Stopping or deleting a container does not delete the data volume itself, whether it is the container that declared the volume or a container that subsequently references it. To delete the data volume, you need to delete all containers that depend on it, adding the -v flag when deleting the last one. Here, if test_dbdata, db1, and db2 have already been removed, the data volume can be deleted by adding the -v parameter when deleting db3:

docker rm -v db3

2.3 Backup and recovery of data

Data volume containers can also be used to back up and restore data.

1. Backup and restore

Using the data volume container, we can back up the data of a data volume container.

First, create a data volume container. The related operations are:

docker run -d -v /dbdata --name dbdata training/postgres

In this way, the data is saved to the /dbdata directory of the container. Here, the /dbdata directory needs to be backed up locally. The relevant operations are:

docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

This creates a new container that references the dbdata data volume container and mounts the current host directory at /backup. Since the current directory and the container's /backup directory are the same directory, writing to /backup inside the container writes to the current directory on the host.

Recovery is simply the reverse: the data packed by tar cvf is extracted back into the data volume with tar xvf.
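A minimal restore sketch, assuming backup.tar was produced by the backup command above; dbdata2 is a hypothetical name for the new data volume container. The script is guarded so it is a no-op on machines without Docker:

```shell
if command -v docker >/dev/null 2>&1; then
  # Recreate the /dbdata volume, then unpack the archive into it.
  # ubuntu's default workdir is /, so the archive's dbdata/ entries
  # land back in /dbdata:
  docker run -d -v /dbdata --name dbdata2 training/postgres
  docker run --rm --volumes-from dbdata2 -v "$(pwd)":/backup \
    ubuntu tar xvf /backup/backup.tar
  status=ran
else
  status=skipped
fi
echo "$status"
```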

Note: On Windows, if you run the command from git-bash, the path returned by pwd may not be recognized; write the absolute path directly in the form /d/docker_data, where d is the D: drive. That is, d:/ is replaced by /d.
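The d:/ to /d substitution can also be scripted. A small sketch (the sed expression is an assumption that covers single-letter drives):

```shell
# Rewrite a leading drive letter "X:" as "/X" (git-bash style path):
winpath="d:/docker_data"
posixpath=$(printf '%s\n' "$winpath" | sed -E 's#^([A-Za-z]):#/\1#')
echo "$posixpath"   # /d/docker_data
```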

A very detailed article on data volumes and backup and recovery:

Docker container learning notes: Volume data volume usage

2.4 Container connection

Earlier we used -P or -p to expose container ports so the outside world could use the container. Containers can also provide services through another mechanism: container links. A link involves a source container and a target container: the source container is the party that provides the service, and once the target container is linked to it, it can use that service. Links depend on container names, so to use a link you must first name the source container and then connect with the --link parameter.

The format of the connection is: --link name:alias, where name is the name of the source container and alias is the alias for this connection. Here is an example of a connection:

docker run -d --name dbdata training/postgres
The command above starts a database container; use the following command to link to it:
docker run -d -P --name web --link dbdata:db training/webapp python app.py

Connection information can be viewed through docker inspect web.

In this way, the dbdata container serves the web container without exposing any port via -P or -p, which makes the source container dbdata more secure. So how does the web container use dbdata's service?

Docker provides the following two ways for the target container to expose the services provided by the connection:

  • Environment variables
  • The /etc/hosts file

They are described separately below.

1 Environment variables

When two containers are linked, Docker sets related environment variables in the target container so that the services provided by the source container can be used there. The variable names are derived from the alias in the --link parameter, in the form <ALIAS>_NAME. For example, if the web container links to the dbdata container with --link dbdata:webdb, the container gets an environment variable WEBDB_NAME=/web/webdb.

In general, you can use the env command to view a container's environment variables. For example:

docker run --rm --name web --link dbdata:webdb training/webapp env

The output contains the link variables; with --link dbdata:webdb you would expect names such as WEBDB_NAME and WEBDB_PORT_5432_TCP_ADDR (the exact list depends on the ports the source image EXPOSEs).

2 /etc/hosts file

View the /etc/hosts configuration file of the target container. The specific operations are as follows:

docker run -i -t --rm --name web2 --link dbdata:webdb training/webapp /bin/bash

In the file there is an entry mapping the link alias webdb to an IP address; that address is the dbdata container's IP, and any access the container makes to the webdb hostname resolves to it.
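What such an entry looks like can be sketched with a sample line (the address and container ID are illustrative, not real output):

```shell
# Illustrative /etc/hosts line added by --link:
line="172.17.0.5 webdb 6e5cbb43d9b7 dbdata"
# The first field is the source container's IP:
ip=$(printf '%s\n' "$line" | awk '{print $1}')
echo "$ip"   # 172.17.0.5
```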

2.5 Proxy connection

The container links discussed above all connect containers on the same host. To link containers across hosts, the ambassador pattern can currently be used; this approach is called a proxy connection.

There are many ways to implement the ambassador at this stage; some solutions are collected below (if you know a better approach, please leave a comment):

http://blog.csdn.net/shanyongxu/article/details/51398574

http://blog.csdn.net/magerguo/article/details/72123515

http://www.bijishequ.com/detail/445655?p=

http://blog.51cto.com/renyongfan/1812825

 
