Create a multi-host network

    Docker Engine can use the overlay network driver to create a network that spans multiple hosts, but this first requires either running Docker Engine in swarm mode or having a cluster of hosts backed by a key-value store.
    In swarm mode, the overlay network is made available only to the swarm nodes that need it for services. When you create a service that uses the overlay network, the manager nodes automatically extend the network to the nodes that run the service's tasks. The following example, run on a manager node, shows how to create an overlay network and use it for a service:
# Create an overlay network `my-multi-host-network`.
$ docker network create \
                 --driver overlay \
                 --subnet 10.0.9.0/24 \
                 my-multi-host-network
400g6bwzd68jizzdx5pgyoe95

# Create an nginx service and extend the my-multi-host-network to nodes where
# the service's tasks run.
$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
716thylsndqma81j6kkkb5aus
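
    As a quick check, you can list the service and inspect the network; the inspect output should report the overlay driver and the 10.0.9.0/24 subnet configured above:
# Optional check: list the service and inspect the overlay network.
$ docker service ls
$ docker network inspect my-multi-host-network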


    To use the Docker Engine with a key-value store instead, several conditions must be met:
    1. Access to a key-value store.
    2. A group of hosts that can connect to the key-value store.
    3. A correctly configured Engine daemon on each host in the group (a sketch of this configuration appears below).
    4. Unique hostnames across the group, because the key-value store uses hostnames to identify its members.
    Once these conditions are met, the key-value store can be set up; it holds network state information such as network definitions, discovery data, and IP addresses. Docker supports key-value stores such as Consul, Etcd, and ZooKeeper; this example uses Consul.
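    Condition 3 simply means that each Engine daemon must be told where the key-value store lives. In this example that is handled below through docker-machine's --engine-opt flags; on a manually managed host, the equivalent daemon options would look roughly like the following sketch (<consul-ip> is a placeholder for your store's address, and how daemon flags are passed depends on your Docker version and init system):
# Sketch only: point the Engine daemon at the key-value store.
$ dockerd --cluster-store=consul://<consul-ip>:8500 \
          --cluster-advertise=eth1:2376
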
    1. First create a VirtualBox machine named mh-keystore:
$ docker-machine create -d virtualbox mh-keystore

       When docker-machine creates the machine, it automatically provisions Docker Engine on that host. This means an instance of Consul can be started from the Consul image on Docker Hub without installing Consul manually (see step 3).
    2. Set the local environment to mh-keystore (equivalent to pointing the local Docker client at this machine):
$ eval "$(docker-machine env mh-keystore)"

    3. Run the progrium/consul container on the mh-keystore host:
$ docker run -d \
             -p "8500:8500" \
             -h "consul" \
             progrium/consul -server -bootstrap
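
       Before moving on, you can confirm that Consul is reachable, for example by querying its HTTP API for the current leader:
# Optional: Consul should answer with the address of its current leader.
$ curl "http://$(docker-machine ip mh-keystore):8500/v1/status/leader"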

    After setting up the key-value store, it is time to create the swarm. Here we use "docker-machine create" to create several hosts and make one of them the swarm master.
    1. Create a swarm manager:
$ docker-machine create \
     -d virtualbox \
     --swarm --swarm-master \
     --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
     --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
     --engine-opt="cluster-advertise=eth1:2376" \
     mhs-demo0

        On creation, we provide the engine daemon with the "cluster-store" option, which tells the engine the location of the key-value store for the overlay network. The "cluster-advertise" option is used to advertise the machine on the network.
    2. Create another machine and add it to the swarm:
$ docker-machine create \
     -d virtualbox \
     --swarm \
     --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
     --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
     --engine-opt="cluster-advertise=eth1:2376" \
     mhs-demo1

    3. View the running status of the created machine:
$ docker-machine ls
NAME         ACTIVE   DRIVER       STATE     URL                         SWARM
default      -        virtualbox   Running   tcp://192.168.99.100:2376
mh-keystore  *        virtualbox   Running   tcp://192.168.99.103:2376
mhs-demo0    -        virtualbox   Running   tcp://192.168.99.104:2376   mhs-demo0 (master)
mhs-demo1    -        virtualbox   Running   tcp://192.168.99.105:2376   mhs-demo0
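
    You can also ask the discovery backend which engines have registered with the swarm, by running the swarm image's "list" command against the same Consul address (an optional check):
# Optional: list the engines registered in the Consul-backed discovery.
$ docker run --rm swarm list consul://$(docker-machine ip mh-keystore):8500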

    Now that the cluster of hosts is ready, it's time to create a multi-host overlay network for containers running on those hosts.
    1. Switch the Docker environment to the swarm manager:
$ eval $(docker-machine env --swarm mhs-demo0)

       The "--swarm" flag is used here to limit Docker commands to swarm information only.
    2. View swarm information:
$ docker info
Containers: 3
Images: 2
Role: primary
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 2
mhs-demo0: 192.168.99.104:2376
└ Containers: 2
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
mhs-demo1: 192.168.99.105:2376
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
CPUs: 2
Total Memory: 2.043 GiB
Name: 30438ece0915

       The output describes the swarm as a whole: two nodes running a total of three containers and two images. These are the swarm agent and master containers that docker-machine started on the hosts.
    3. Create the overlay network (it can be created from any host in the swarm):
$ docker network create --driver overlay --subnet=10.0.9.0/24 my-net

    4. View the running network:
$ docker network ls
 NETWORK ID          NAME                DRIVER
 412c2496d0eb        mhs-demo1/host      host
 dd51763e6dd2        mhs-demo0/bridge    bridge
 6b07d0be843f        my-net              overlay
 b4234109bd9b        mhs-demo0/none      null
 1aeead6dd890        mhs-demo0/host      host
 d0bb78cbe7bd        mhs-demo1/bridge    bridge
 1c0eb8f69ebb        mhs-demo1/none      null

       Because you are connected to the swarm master, you can see the networks of every swarm agent: each engine's default networks plus the newly created overlay network. If you switch to an individual agent, you will only see that agent's default networks and the multi-host overlay network.
    At this point the multi-host network has been created. You can now start a container on any of these hosts and it will automatically become part of the network. Let's test it.
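       For example, inspecting the overlay network from the swarm master should report the overlay driver and the 10.0.9.0/24 subnet chosen above:
# Inspect the overlay network; expect "Driver": "overlay" and subnet 10.0.9.0/24.
$ docker network inspect my-net
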
    1. Switch the context to the swarm manager:
$ eval $(docker-machine env --swarm mhs-demo0)

    2. Start the Nginx service on the mhs-demo0 instance:
$ docker run -itd --name=web --network=my-net --env="constraint:node==mhs-demo0" nginx

    3. Run the BusyBox container on the mhs-demo1 instance and get the content of the Nginx server home page:
$ docker run -it --rm --network=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
# (output omitted)
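
    The same pattern works for other tools; for instance, a quick reachability check with ping, reusing the constraint syntax above to place the container on mhs-demo1:
# Optional: ping the web container by name across hosts.
$ docker run -it --rm --network=my-net --env="constraint:node==mhs-demo1" busybox ping -c 3 web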

    Note also that containers connected to a multi-host network are automatically attached to the docker_gwbridge network, which gives them external connectivity outside the cluster. This network can be seen by running "docker network ls" on either swarm agent:
$ eval $(docker-machine env mhs-demo0)
$ docker network ls
NETWORK ID          NAME                DRIVER
6b07d0be843f        my-net              overlay
d0bb78cbe7bd        bridge              bridge
1c0eb8f69ebb        none                null
412c2496d0eb        host                host
97102a22e8d2        docker_gwbridge     bridge
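
    Inspecting docker_gwbridge shows that it is an ordinary local bridge network; its subnet is the one you will see on the container's eth1 interface below (172.18.0.0/16 in this example):
# Optional: docker_gwbridge is a plain local bridge providing external access.
$ docker network inspect docker_gwbridge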

    If you now look at the network interfaces of the Nginx container, you will see the following:
$ docker exec web ip addr
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
 22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
 link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
 inet 10.0.9.3/24 scope global eth0
     valid_lft forever preferred_lft forever
 inet6 fe80::42:aff:fe00:903/64 scope link
     valid_lft forever preferred_lft forever
 24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
 link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
 inet 172.18.0.2/16 scope global eth1
     valid_lft forever preferred_lft forever
 inet6 fe80::42:acff:fe12:2/64 scope link
     valid_lft forever preferred_lft forever

    As you can see, eth0 is the container's interface on the my-net overlay network, while eth1 is its interface on the docker_gwbridge network.
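
    A quick way to confirm which interface carries external traffic is to print the container's routing table; the default route should point out through eth1 (the docker_gwbridge interface), while the 10.0.9.0/24 overlay subnet is reached via eth0:
# Check the container's routes: default via eth1, overlay subnet via eth0.
$ docker exec web ip route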
