Docker data management and network communication

I. Docker data management
In Docker, to make it easy to view the data generated inside a container, or to share data among multiple containers, the data of containers has to be managed. Docker manages container data mainly in two ways: data volumes and data volume containers.

1. Data volumes
A data volume is a special directory used by containers. A directory on the host can be mounted into a container as a data volume; modifications to the data volume take effect immediately and are not included when the image is updated, which makes it possible to move data between the host and the container. Using a data volume is similar to mounting a directory under Linux (note: it is the host directory that gets mounted into the container. For example, if the host directory /data is itself the mount point of /dev/sdb1, then when /data is mapped into the container as a data volume, the directory inside the container effectively uses the filesystem on /dev/sdb1).
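The general form is roughly the following (a minimal sketch; image_name, /host/dir and /container/dir are placeholders):

docker run -v /host/dir:/container/dir image_name        #/host/dir appears inside the container at /container/dir
docker run -v /host/dir:/container/dir:ro image_name     #appending :ro makes the volume read-only inside the container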

Example: mounting a host directory as a data volume

Use the -v option when running a container to create a data volume (the data volume is created only when the container is run) and mount a host directory onto it; this data volume can then be used to move data between the host and the container.

Note that the path of the local host directory must be an absolute path; if the path does not exist, Docker creates it automatically.

[root@localhost ~]# docker run -d -p 5000:5000 -v /data/registry/:/tmp/registry docker.io/registry
#This runs a private registry container; -p is the port-mapping option and is not explained here.
# -v is the directory mapping: the local /data/registry/ directory is mapped to /tmp/registry inside the container.
#The contents of /tmp/registry in the container are then the same as those of /data/registry/ on the host.
[root@localhost ~]# df -hT /data/registry/           #first check which filesystem the local /data/registry/ is mounted on
Filesystem       Type            Size  Used Avail Use% Mounted on
node4:dis-stripe fuse.glusterfs   80G  130M   80G    1% /data/registry
[root@localhost ~]# docker exec -it a6bf726c612b /bin/sh #enter the registry container; this image has no /bin/bash, so /bin/sh is used
/ # df -hT /tmp/registry/    #this directory is mounted on the same filesystem as on the host, so the mapping works
Filesystem           Type            Size      Used Available Use% Mounted on
node4:dis-stripe     fuse.glusterfs
                                    80.0G    129.4M     79.8G   0% /tmp/registry
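As a quick check (assuming the same registry container as above), a file written on the host side should be visible inside the container right away:

[root@localhost ~]# echo "host side test" > /data/registry/host-test.txt    #write a test file on the host
[root@localhost ~]# docker exec -it a6bf726c612b /bin/sh                    #enter the registry container again
/ # cat /tmp/registry/host-test.txt                                         #the file shows up through the data volume
host side test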

2. Data volume containers
If some data needs to be shared among containers, the easiest way is to use a data volume container. A data volume container is an ordinary container that exists specifically to provide data volumes for other containers to mount. To use one, first create a container to act as the data volume container, then mount its data volumes in other containers by passing --volumes-from when creating them.
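The general pattern is roughly the following (a minimal sketch; the image and container names are placeholders). --volumes-from can also be given more than once to pull in volumes from several data volume containers:

docker run -itd -v /shared --name datastore some_image /bin/bash             #data volume container providing /shared
docker run -itd --volumes-from datastore --name app1 some_image /bin/bash    #app1 mounts every volume defined in datastore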

Example: creating and using a data volume container

[root@localhost ~]# docker run -itd --name datasrv -v /data1 -v /data2  docker.io/sameersbn/bind /bin/bash
#Create and run a container named datasrv, with two data volumes: data1 and data2.
d9e578db8355da35637d2cf9b0a3406a647fe8e70b2df6172ab41818474aab08
[root@localhost ~]# docker exec -it datasrv /bin/bash     #enter the container that was just created
root@d9e578db8355:/# ls | grep data             #check that the corresponding data volumes exist
data1
data2
[root@localhost ~]# docker run -itd --volumes-from datasrv --name ftpsrv docker.io/fauria/vsftpd /bin/bash
#Run a container named ftpsrv and use --volumes-from to mount the data volumes of the datasrv container into this new ftpsrv container.
eb84fa6e85a51779b652e0058844987c5974cf2a66d1772bdc05bde30f8a254f
[root@localhost ~]# docker exec -it ftpsrv /bin/bash         #enter the newly created container
[root@eb84fa6e85a5 /]# ls | grep data          #check whether the new container can see the data volumes provided by datasrv
data1
data2
[root@eb84fa6e85a5 /]# echo " data volumes test" > /data1/test.txt       #write a file into the data1 directory from the ftpsrv container as a test
[root@eb84fa6e85a5 /]# exit          #leave the container
exit
[root@localhost ~]# docker exec -it datasrv /bin/bash     #enter the datasrv container that provides the data volumes
root@d9e578db8355:/# cat /data1/test.txt            #the file just created in the ftpsrv container is visible, OK
 data volumes test

Note that what matters most in a production environment is storage reliability and the ability to scale storage dynamically, so this aspect must be taken into account; this is where a filesystem such as GFS (GlusterFS) stands out when used behind data volumes. What I did above is only a simple configuration; in a production environment it has to be considered carefully. For example, when building the data volume container above, you could mount a GlusterFS filesystem on the local host and then, when creating the data volume container, map the directory where GlusterFS is mounted into the container as the data volume. Only then is it a properly qualified data volume container.
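A sketch of that idea, assuming a GlusterFS volume named dis-stripe served by node4 (as in the df output earlier); the mount point and container name are just examples:

[root@localhost ~]# mount -t glusterfs node4:dis-stripe /gfs                 #mount the GlusterFS volume on the host
[root@localhost ~]# docker run -itd --name gfs-datasrv -v /gfs/data1:/data1 docker.io/sameersbn/bind /bin/bash
#the data volume /data1 is now backed by GlusterFS; other containers attach it with --volumes-from gfs-datasrv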

II. Docker network communication
1. Port mapping
Docker provides the mechanisms of mapping container ports to the host and of container interconnection in order to provide network services for containers.

When a container is started, if the corresponding ports are not specified, the services inside the container cannot be reached from outside over the network. Docker's port mapping mechanism is what exposes services inside a container to external networks: in essence, a host port is mapped to a container port, so that accessing that host port from the outside reaches the service in the container.

To implement port mapping, use the -P (uppercase) option of the docker run command for random mapping: Docker usually maps an open port inside the container to a random host port in the range 49000-49900, though this is not absolute and there are exceptions where ports outside this range are used. Alternatively, use the -p (lowercase) option of docker run to map a specific port (this is the method commonly used).

Example: port mapping

[root@localhost ~]# docker run -d -P docker.io/sameersbn/bind      #map random ports
9b4b7c464900df3b766cbc9227b21a3cad7d2816452c180b08eac4f473f88835
[root@localhost ~]# docker run -itd -p 68:67 docker.io/networkboot/dhcpd /bin/bash
#map port 67 in the container to port 68 on the host
6f9f8125bcb22335dcdb768bbf378634752b5766504e0138333a6ef5c57b7047
[root@localhost ~]# docker ps -a     #check the result; everything looks fine
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
6f9f8125bcb2        docker.io/networkboot/dhcpd   "/entrypoint.sh /b..."   2 seconds ago       Up 1 second         0.0.0.0:68->67/tcp                                                       keen_brattain
9b4b7c464900        docker.io/sameersbn/bind      "/sbin/entrypoint...."   4 minutes ago       Up 4 minutes        0.0.0.0:32768->53/udp, 0.0.0.0:32769->53/tcp, 0.0.0.0:32768->10000/tcp   coc_gates
#Now accessing port 68 on the host is equivalent to accessing port 67 in the dhcpd container, and accessing port 32768 on the host is equivalent to accessing port 53 in the bind container.
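The mappings of a single container can also be listed with docker port (a quick check, using the container ID from the docker ps output above):

[root@localhost ~]# docker port 6f9f8125bcb2        #show the port mappings of the dhcpd container
67/tcp -> 0.0.0.0:68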

2. Container interconnection
Container interconnection establishes a dedicated network tunnel between containers so that they can communicate with each other by container name. Simply put, a tunnel is built between a source container and a receiving container, and the receiving container can see the information that the source container specifies.

When running the docker run command, use the --link option to set up interconnected communication between containers. The format is as follows:

--link name:alias    #name is the name of the container to link to, and alias is the alias for this link.

Container interconnection works through container names. The --name option gives a container a friendly name; this name must be unique. If a container with the same name has already been created and you want to use that name again, you first have to remove the earlier container of the same name with the docker rm command.
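For instance (assuming an old container named web1 is in the way):

[root@localhost ~]# docker rm -f web1        #remove the existing container named web1 (-f also removes it if it is still running) so the name can be reused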
Example: container interconnection

[root@localhost ~]# docker run -tid -P --name web1  docker.io/httpd /bin/bash    #run container web1
c88f7340f0c12b9f5228ec38793e24a6900084e58ea4690e8a847da2cdfe0b
[root@localhost ~]# docker run -tid -P --name web2 --link web1:web1 docker.io/httpd /bin/bash
#run container web2 and link it to container web1
c7debd7809257c6375412d54fe45893241d2973b7af1da75ba9f7eebcfd4d652
[root@localhost ~]# docker exec -it web2 /bin/bash     #enter the web2 container
root@c7debd780925:/usr/local/apache2# cd
root@c7debd780925:~# ping web1        #ping test to web1
bash: ping: command not found        #the ping command is not available, so install it
root@c7debd780925:~# apt-get update    #update the package lists
root@c7debd780925:~# apt install iputils-ping     #install the ping command
root@c7debd780925:~# apt install net-tools      #this installs the ifconfig command; it is optional, just noted here
root@c7debd780925:~# ping web1     #ping test to web1 again
PING web1 (172.17.0.2) 56(84) bytes of data.
64 bytes from web1 (172.17.0.2): icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from web1 (172.17.0.2): icmp_seq=2 ttl=64 time=0.114 ms
              ..............#part of the output omitted
#The ping succeeds, so the two containers are definitely interconnected.
#To create a new container web3 that is linked to both web1 and web2 at the same time, the command is as follows:
[root@localhost ~]# docker run -dit -P --name web3 --link web1:web1 --link web2:web2 docker.io/httpd /bin/bash
#When running the container, link it to web1 and web2.
#Enter web3:
[root@localhost ~]# docker exec -it web3 /bin/bash
root@433d5be6232c:/usr/local/apache2# cd
#Install the ping command:
root@433d5be6232c:~# apt-get update
root@433d5be6232c:~# apt install iputils-ping
#Ping tests to web1 and web2 respectively:
root@433d5be6232c:~# ping web1
PING web1 (172.17.0.2) 56(84) bytes of data.
64 bytes from web1 (172.17.0.2): icmp_seq=1 ttl=64 time=0.102 ms
64 bytes from web1 (172.17.0.2): icmp_seq=2 ttl=64 time=0.112 ms
              ..............#part of the output omitted
root@433d5be6232c:~# ping web2
PING web2 (172.17.0.3) 56(84) bytes of data.
64 bytes from web2 (172.17.0.3): icmp_seq=1 ttl=64 time=0.165 ms
64 bytes from web2 (172.17.0.3): icmp_seq=2 ttl=64 time=0.115 ms
              ..............#part of the output omitted
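What --link actually does is write host entries for the linked containers into the new container; a quick way to see this inside web3 (output abbreviated, the addresses match the ping tests above):

root@433d5be6232c:~# cat /etc/hosts        #the linked names resolve through entries Docker added to the hosts file
172.17.0.2      web1 c88f7340f0c1
172.17.0.3      web2 c7debd780925
172.17.0.4      433d5be6232c
#remaining standard entries omitted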
