Docker Network Management (Configuring Communication Between Containers)

Post outline:

  • 1. Bridge mode (communication between containers on the same Docker host)
  • 2. Deploying the consul service to enable cross-host container communication

Preface:

Once you start using Docker at scale, you will find that you need to know quite a lot about networking. Docker is currently the hottest lightweight container technology and has many praiseworthy features, such as its image management. However, Docker also has its imperfections, and networking is one of its relatively weak parts. We therefore need to understand Docker's networking in order to meet higher networking requirements. This post first introduces the four network modes that Docker ships with and then introduces some custom network modes.


When Docker is installed, it automatically creates three networks: bridge (containers connect to this network by default), none, and host.

  • host: the container does not get its own virtual network interface, IP address, and so on, but uses the host's IP and ports directly.
  • none: networking is disabled in the container; it is left with little more than a loopback interface.
  • bridge: each container gets its own configuration, such as an IP address, and is attached to a virtual bridge named docker0; NAT rules in the host's iptables let the containers communicate with the host through docker0.
[root@docker ~]# docker network ls    # run this command to list the networks Docker has created
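On a fresh installation the list typically looks like this (the network IDs will differ on your machine):

NETWORK ID          NAME                DRIVER              SCOPE
8d5b00cf07ab        bridge              bridge              local
17c053a80f5a        host                host                local
323935eaa5c3        none                null                local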

These network modes are explained in more detail as follows:

  • host: equivalent to VMware's bridged mode: the container is on the same network as the host, but it does not have a separate IP address. As is well known, Docker uses Linux namespace technology to isolate resources, such as the PID namespace for isolating processes, the Mount namespace for isolating file systems, and the Network namespace for isolating networking. A Network Namespace provides an independent network environment (network interfaces, routes, iptables rules, and so on) isolated from other Network Namespaces. A Docker container is normally assigned its own unique Network Namespace. If a container is started in host mode, however, it does not get a separate Network Namespace but shares one with the host: the container does not virtualize its own network card or configure its own IP, but uses the host's IP and ports. Running ifconfig inside a container started in host mode shows the host's interfaces. This mode is not flexible enough and is prone to port conflicts.
  • none: the container is placed in its own network stack, but no configuration is performed. This mode effectively turns off the container's networking, leaving it with little more than a loopback address. It is useful when the container does not need a network at all (for example, a batch job that only writes to a disk volume).
  • overlay: as the name suggests, an overlay. It does not replace the container's original network; rather, it adds another network interface on top of it and assigns that interface an IP address, so that all participating Docker containers can be joined to the same virtual LAN. It is suited to scenarios where containers must communicate across hosts.
  • bridge: equivalent to VMware's NAT mode. The container uses its own Network Namespace and is attached to the docker0 virtual bridge (this is the default mode). Docker configures the bridge and iptables NAT rules so that containers can communicate with the host. Bridge mode is Docker's default network setting: it assigns a Network Namespace, IP address, and so on to each container, and connects each container to the host's docker0 virtual bridge.

In production environments the most commonly used modes are bridge and overlay, and this post revolves around these two.
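As a quick illustration of host and none mode before diving in, a minimal sketch (assuming the busybox image is available locally):

[root@docker ~]# docker run --rm --network none busybox ip a    # none mode: only the loopback interface appears
[root@docker ~]# docker run --rm --network host busybox ip a    # host mode: the output lists the host's own interfaces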

1. Bridge mode

When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and Docker containers started on this host are attached to this virtual bridge. A virtual bridge works like a physical switch, so all containers on the host are attached through it to a single Layer 2 network. Docker typically uses the 172.17.0.0/16 segment and assigns an address on it to the docker0 bridge (visible with the ifconfig command on the host), and then assigns the containers IP addresses from the same segment.
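You can confirm the bridge and its segment on the host with a quick check (the exact addresses may differ on your machine):

[root@docker ~]# ip addr show docker0                            # the bridge holds an address on the default segment
[root@docker ~]# docker network inspect bridge | grep -i subnet  # typically reports 172.17.0.0/16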

The network topology in a stand-alone environment looks like this (host address 10.10.0.186/24):

[Figure: bridge network topology on a single host]

The process by which Docker completes this network configuration is roughly as follows:

  • A pair of virtual network devices, a veth pair, is created on the host. veth devices always come in pairs and form a data channel: data that enters one device comes out of the other, so a veth pair can be used to connect two network devices.
  • Docker places one end of the veth pair inside the newly created container and names it eth0. The other end stays on the host under a name like veth65f9 and is attached to the docker0 bridge; you can view this with the brctl show command (see the sketch after this list).
  • An IP address from the docker0 subnet is assigned to the container, and docker0's IP address is set as the container's default gateway.
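A minimal way to observe the veth pairs, assuming the bridge-utils package is installed so that brctl is available:

[root@docker ~]# brctl show docker0    # lists the host-side veth interfaces attached to the bridge
[root@docker ~]# ip link | grep veth   # the host end of each pair appears under a name like vethXXXX@ifN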

When all containers are created on the default docker0 network, then (leaving aside firewall rules, iptables, and similar settings) in theory all the containers can communicate with each other. But docker0 is a network that comes with the system: some features cannot be implemented on it, and it is not flexible enough.

In fact, we can also create custom networks and specify, for example, which network segment they belong to, something docker0 cannot do. But then, if individual containers are not created on the same network (such as docker0), how do they communicate?

Let's do some configuration and take a look at how bridge mode operates.

The effect to be achieved is as follows:

  • Create two containers, box1 and box2, on the docker0 network (the driver name Docker gives this network is bridge).
  • Create a custom network of type bridge named my_net1, and create two containers, box3 and box4, on it (if no segment is specified, it uses 172.18.0.0/16, one segment up from docker0).
  • Create a custom network of type bridge named my_net2 with the segment 172.20.18.0/24, and create two containers on it: box5 (IP 172.20.18.6) and box6 (IP 172.20.18.8).
  • Configure things so that box2 and box3 can communicate with each other, and box4 and box5 can communicate with each other.
[root@docker ~]# docker run -itd --name box1 --network bridge busybox    
# create container box1; the --network option can be omitted (bridge is the default), it is given here only to show the command
[root@docker ~]# docker run -itd --name box2 --network bridge busybox   # same as above, create container box2
[root@docker ~]# docker network create -d bridge my_net1    # create a bridge network named my_net1
[root@docker ~]# docker run -tid --name box3 --network my_net1 busybox    # create container box3 on my_net1
[root@docker ~]# docker run -tid --name box4 --network my_net1 busybox   # same as above, create box4
[root@docker ~]# docker network create -d bridge --subnet 172.20.18.0/24 my_net2   # create a bridge network my_net2 and specify its segment
[root@docker ~]# docker run -tid --name box5 --network my_net2 --ip 172.20.18.6 busybox   
# create container box5 on the my_net2 network and specify its IP address
[root@docker ~]# docker run -tid --name box6 --network my_net2 --ip 172.20.18.8 busybox    # same as above
[root@docker ~]# docker network connect my_net1 box2      # connect box2 to the my_net1 network
[root@docker ~]# docker exec box2 ping box3   # ping test: box2 can now reach box3.
# without connecting box2 to my_net1, the ping could never succeed.
PING box3 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.069 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.076 ms
[root@docker ~]# docker network connect my_net2 box4   # connect box4 to the my_net2 network
# as with the box2/box3 test: without connecting box4 to box5's network, the ping could not succeed.
[root@docker ~]# docker exec box5 ip a    # check box5's IP address
         .......................# part of the output omitted
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:14:12:06 brd ff:ff:ff:ff:ff:ff
    inet 172.20.18.6/24 brd 172.20.18.255 scope global eth0     # confirm its IP
       valid_lft forever preferred_lft forever
[root@docker ~]# docker exec box4 ping 172.20.18.6   # from box4, ping box5's IP; the ping succeeds
PING box5 (172.20.18.6): 56 data bytes
64 bytes from 172.20.18.6: seq=0 ttl=64 time=0.090 ms
64 bytes from 172.20.18.6: seq=1 ttl=64 time=0.130 ms

With the above configuration in place, the goal has been achieved. Note that the custom networks my_net1 and my_net2 that we created can be understood as switches, and executing the command docker network connect my_net1 box2 is equivalent to adding another network card to the box2 container and plugging it into the my_net1 switch: the container then has one more interface, with an IP address on my_net1. After this configuration, box2 can communicate not only with box3 but also with box4, because all of them are attached to the my_net1 "switch".
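To see which containers hang off a given "switch", you can inspect the network; a quick sketch:

[root@docker ~]# docker network inspect my_net1   # the Containers section should list box3, box4, and the newly connected box2
[root@docker ~]# docker exec box2 ip a            # box2 now shows a second interface with an address on 172.18.0.0/16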

Note:

  • Containers can communicate with each other by container name, but only on custom networks, such as my_net1 and my_net2 above;
  • If a network segment was specified when the custom network was created, containers using that network can also be given a specific IP address; if no segment was specified, a container IP cannot be specified.
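For example (a sketch; box7 is a made-up name and the exact error wording may vary by Docker version), trying to pin an IP on my_net1, which was created without --subnet, fails:

[root@docker ~]# docker run -tid --name box7 --network my_net1 --ip 172.18.0.10 busybox   # box7 is hypothetical
docker: Error response from daemon: user specified IP address is supported only when connecting to networks with user configured subnets.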


2. Deploying the consul service to enable cross-host container communication

consul roughly means "data center". It can be understood as a database, similar to non-relational databases such as Redis, that uses key-value pairs to store the IP address and port information of each container.

I do not know the consul service all that deeply myself; if you want to learn more about it, please refer to other documentation. If I get the chance, I will write up consul in detail.

consul is very powerful: it can run as a cluster and comes with health checking and other features.
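As a small taste of the key-value interface (a sketch assuming a consul agent is reachable on localhost:8500; the key name my/key is made up for illustration):

[root@docker ~]# curl -s -X PUT -d 'hello' http://127.0.0.1:8500/v1/kv/my/key   # store a value under a key
true
[root@docker ~]# curl -s http://127.0.0.1:8500/v1/kv/my/key?raw                 # read the raw value back
hello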

Now let's start configuring the consul service.

1. Prepare the environment as follows:

  • Three Docker servers; my Docker version here is 18.09.0;
  • The first Docker server's IP is 192.168.20.7, and it runs the consul service;
  • The latter two servers are used for testing and only need a working Docker environment.

If you need to install and deploy a Docker server, refer to this post: Docker installation and detailed configuration.

2. Configure the first Docker server as follows:

[root@docker ~]# docker pull progrium/consul          # pull the consul image
[root@docker ~]# docker run -d -p 8500:8500 -h consul --name consul --restart=always progrium/consul -server -bootstrap
# run the consul container; the service's default port is 8500. "-p" maps the container's port 8500 to port 8500 on the host
# "-h" sets consul's hostname; "--name consul" names the container; "--restart=always" makes it start along with the docker service;
# "-server -bootstrap": within a cluster, these two options make this node come up as the master
[root@docker ~]# netstat -anput | grep 8500   # confirm that port 8500 is listening
tcp6       0      0 :::8500                 :::*                    LISTEN
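A quick way to confirm the agent is actually answering (a sketch; a single bootstrapped node reports itself as the leader):

[root@docker ~]# curl -s http://127.0.0.1:8500/v1/status/leader
"172.17.0.2:8300"     # the exact address will vary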

OK, the single consul node is now complete; switch to the second Docker server.

3. Configure the second Docker server as follows:

[root@docker02 ~]# vim /usr/lib/systemd/system/docker.service  # edit Docker's main configuration file
         ..............# part of the content omitted; search for "Start" to locate the following line, and change it to:
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.20.7:8500 --cluster-advertise=ens33:2376
# the options explained:
# /var/run/docker.sock: a programming interface to Docker
# " -H tcp://0.0.0.0:2376 ": use the local TCP port 2376;
# " --cluster-store=consul://192.168.20.7:8500": the IP and port of the first Docker server, the one running the consul service;
# " --cluster-advertise=ens33:2376": collect network information through port 2376 on the local ens33 interface and store it in consul
# when the changes are done, save and quit.
[root@docker02 ~]# systemctl daemon-reload    # reload the configuration files
[root@docker02 ~]# systemctl restart docker    # restart the docker service
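Docker daemons of this era report the cluster store in docker info, which gives a quick way to verify that the change took effect (a sketch):

[root@docker02 ~]# docker info | grep -i cluster   # should show Cluster Store: consul://192.168.20.7:8500 and the advertised ens33 address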

4. Then, on the third Docker server, perform the same configuration as on the second Docker server, mainly pointing it at the consul service and the specified listening port. (Configure it yourself; I won't write it out here. Remember to restart the docker service after the changes are complete; a scripted sketch follows.)
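If you prefer to script the change, a hedged one-liner for docker03 (it assumes the stock ExecStart line shown above and the same interface name ens33):

[root@docker03 ~]# sed -i 's#^ExecStart=.*#ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.20.7:8500 --cluster-advertise=ens33:2376#' /usr/lib/systemd/system/docker.service
[root@docker03 ~]# systemctl daemon-reload && systemctl restart docker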

5. Now access the consul service's web page with a browser, as follows:

[Figure: the consul web UI at 192.168.20.7:8500]

We can see the IPs and other related information of the two Docker servers used for testing, as follows:
[Figure: the two test Docker servers' entries in the consul web UI]

6. Back on the second Docker server, create an overlay network:

[root@docker02 ~]# docker network create -d overlay my_olay         # create an overlay network named my_olay 
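You can check the scope and the subnet the overlay was given (the containers below receive 10.0.0.0/24 addresses from it); a quick sketch:

[root@docker02 ~]# docker network inspect my_olay | grep -iE 'scope|subnet'   # should report "Scope": "global" and the overlay subnet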

7. Switch to the third Docker server; you can see that the overlay network just created on the second Docker server is visible here too:

[root@docker03 ~]# docker network ls     # list docker03's networks: not only is the overlay network there,
# but its SCOPE is global
NETWORK ID          NAME                DRIVER              SCOPE
8d5b00cf07ab        bridge              bridge              local
17c053a80f5a        host                host                local
c428fc28bb11        my_olay             overlay             global
323935eaa5c3        none                null                local

Now run a container on the second Docker server based on the overlay network just created, and run another container on the third Docker server based on the same overlay network. Although the two containers are on different hosts, they can communicate, as follows:

################## configuration on the second Docker server ###########################
[root@docker02 ~]# docker run -tid --name web01 --network my_olay busybox    # run a container web01 on the my_olay network
[root@docker02 ~]# docker exec web01 ip a         # check its IP information: besides the loopback address, it has two IPs
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0     # this address is assigned by my_olay
       valid_lft forever preferred_lft forever
11: eth1@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1  
       valid_lft forever preferred_lft forever
################## configuration on the third Docker server ###########################
[root@docker03 ~]# docker run -tid --name web02 --network my_olay busybox     # run a container web02 on the my_olay network
[root@docker03 ~]# docker exec web02 ip a     # check web02's IP
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0            # this address is assigned by my_olay
       valid_lft forever preferred_lft forever
11: eth1@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
######### from the second Docker server, ping the container on the third Docker server ##########
[root@docker02 ~]# docker exec web01 ping web02      # confirm that the ping succeeds
PING web02 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=1.091 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=1.007 ms

-------- End of this article; thanks for reading --------

Origin: blog.51cto.com/14154700/2443760