Docker networking: bridge mode

This article covers some core Docker networking concepts so that you can take full advantage of these features when designing and deploying applications.


First, Docker's network subsystem uses pluggable drivers, and several drivers are available out of the box: bridge, host, overlay, macvlan, and none.

Next, let's look at bridge-mode networking.

Bridge is Docker's default network mode: if you do not specify a driver when creating a network, this is the type you get. In bridge mode, each container is assigned its own network namespace, IP address, and other network settings, and the container is attached to a Linux bridge (docker0).

Characteristics: by default, all containers on the same host share the same subnet (default: 172.17.0.0/16), can communicate with one another, and can reach the external network (provided the host itself can).
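You can confirm the default subnet on your own host with `docker network inspect bridge`, which prints the bridge's IPAM configuration as JSON. The sketch below applies a simple filter to a sample of that JSON (the values are illustrative), so it runs even without a Docker daemon; a real run would pipe the actual command's output into the same filter.

```shell
# Hypothetical, trimmed sample of `docker network inspect bridge` output.
sample_inspect='{"Name":"bridge","IPAM":{"Config":[{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]}}'

# Extract the Subnet field without needing jq.
subnet=$(printf '%s' "$sample_inspect" | sed -n 's/.*"Subnet":"\([^"]*\)".*/\1/p')
echo "default bridge subnet: $subnet"
```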

A bridge is a link-layer device that forwards traffic between network segments. A bridge can be implemented in software in the host kernel or as a hardware device.

Docker uses a software bridge that lets containers attached to the same bridge communicate with each other, while isolating them from containers that are not attached to it. Docker sets up the bridge rules automatically (via iptables), so containers on different bridges cannot communicate with each other directly.

A bridge network only applies to containers running under the same Docker daemon. For communication between containers running under different Docker daemons, you can either add routes on the hosts or use an overlay network.

When the Docker daemon starts, it automatically creates the default bridge network (docker0) and sets up iptables access rules for it. You can also create user-defined bridge networks, which are superior to the default bridge (docker0) in the following ways:

1. User-defined bridges provide better isolation and better interoperability between containerized applications.

Containers attached to the same user-defined bridge expose all ports to one another, while exposing nothing to the external network. This lets containerized applications communicate with each other easily while improving security.

Imagine a web application with a front end, a back end, and a database (user <-> web front end <-> back-end application <-> database). The external network only needs to reach the web front end (say, port 80), while only the back-end application needs to reach the database. With user-defined bridges, only the web port has to be published; the database publishes no ports at all, and the front end, back end, and database reach each other over the user-defined bridges.
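As a sketch of that layout (service and image names here are illustrative, not from the original article), a Compose file can put the front end and back end on one user-defined network and the back end and database on another, so only the front end publishes a port:

```yaml
# Hypothetical docker-compose.yml: only `web` is reachable from outside;
# `db` publishes no ports and is reachable only over the `backend` network.
services:
  web:
    image: nginx
    ports:
      - "80:80"            # the only port exposed to the external network
    networks: [frontend]
  app:
    image: my-backend      # illustrative image name
    networks: [frontend, backend]
  db:
    image: postgres
    networks: [backend]    # no published ports

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
```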

2. User-defined bridges provide automatic DNS resolution between containers.

Containers created on the default bridge can only reach each other by IP address, unless you use the --link option; but --link has to be created in both directions, which becomes complicated when more than two containers need to communicate. On a user-defined bridge network, containers can resolve each other by name or alias.

Recall that on a physical host or VM we usually configure our applications with host names or IPs in a hosts file. In a container on a custom network, we only need to use the container name and no longer have to worry about host names or IPs. Isn't that convenient?

3. User-defined bridges allow containers to be attached to and detached from different (user-defined) networks at any time.

During a container's lifetime you can dynamically switch its network attachments. For example, if you create two custom bridge networks, my-net01 and my-net02, containers can be moved between them on the fly. Even a container created on the default bridge can be switched to a custom bridge without deleting and recreating the container.

Note: in actual testing, the behavior here appears to differ slightly from what the official documentation describes.

4. Each user-defined network creates a configurable bridge.

If all containers use the default bridge network, its configuration can be modified, but every container then shares the same settings, such as MTU and iptables rules. In addition, reconfiguring the default bridge requires restarting the Docker daemon.

User-defined bridge networks, by contrast, are created and configured with docker network create. If different groups of applications have different network requirements, you can create a separate user-defined bridge for each group.

5. Containers on the default bridge can share environment variables.

Originally, the only way to share environment variables between two containers was --link. That kind of variable sharing is not possible on user-defined networks. However, there are now better ways to share environment variables:

(1) Multiple containers can mount a shared Docker volume (Volume) containing the shared information as files or directories.

(2) Multiple containers can be started together with docker-compose, and the Compose file can define the shared variables.

(3) You can use swarm services instead of standalone containers and share secrets and configs.
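As a minimal sketch of option (2) (service names, image names, and the variable are illustrative), a Compose file can define a value once with a YAML anchor and hand it to several containers:

```yaml
# Hypothetical docker-compose.yml fragment: both services receive the
# same APP_ENV value, defined once via a YAML anchor.
x-shared-env: &shared-env
  APP_ENV: production

services:
  app:
    image: my-backend      # illustrative
    environment: *shared-env
  worker:
    image: my-worker       # illustrative
    environment: *shared-env
```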

Managing custom bridge networks

1. Use docker network ls to view the networks available by default:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e22a6ab223fe        bridge              bridge              local
15b417347346        host                host                local
5926c0bd11d0        none                null                local

2. Use docker network create to create a custom bridge network:

# docker network create my-web-net01
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e22a6ab223fe        bridge              bridge              local
15b417347346        host                host                local
899362727b48        my-web-net01        bridge              local
5926c0bd11d0        none                null                local

As you can see, we did not need to pass --driver=bridge, because bridge is the default driver. Of course, you can also give a custom bridge its own subnet, IP range, gateway, and other settings, for example:

# docker network create --driver=bridge --subnet=172.23.10.0/24 my-web-net02

Or, refined a little further:

# docker network create \
    --driver=bridge \
    --subnet=172.24.0.0/16 \
    --ip-range=172.24.10.0/24 \
    --gateway=172.24.10.254 \
    my-web-net03
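The three values above have to be consistent: the --ip-range must lie inside the --subnet, and the --gateway here falls inside the ip-range. The sanity check below is plain shell arithmetic, so it runs without docker:

```shell
# Sanity-check the addressing above: --ip-range 172.24.10.0/24 carves 256
# addresses out of the /16 subnet, and --gateway 172.24.10.254 falls
# inside that range.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

range_start=$(ip_to_int 172.24.10.0)
range_size=$(( 1 << (32 - 24) ))   # a /24 holds 2^(32-24) = 256 addresses
gateway=$(ip_to_int 172.24.10.254)

echo "ip-range holds $range_size addresses"
if [ "$gateway" -ge "$range_start" ] && [ "$gateway" -lt $(( range_start + range_size )) ]; then
  echo "gateway lies inside the ip-range"
fi
```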

To delete a custom bridge network:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e22a6ab223fe        bridge              bridge              local
15b417347346        host                host                local
899362727b48        my-web-net01        bridge              local
49352768c5dd        my-web-net02        bridge              local
7e29b5afd1be        my-web-net03        bridge              local
5926c0bd11d0        none                null                local

# docker network rm my-web-net01

Or delete in bulk:

# docker network rm $(docker network ls -f name=my-web -q)    # list the networks whose names contain my-web, then delete them
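To see what that one-liner expands to: `docker network ls -q` prints one network ID per line, and command substitution splits those lines into separate arguments for `docker network rm`. The sketch below simulates this with the IDs from the listing above, without invoking docker:

```shell
# Simulated output of `docker network ls -f name=my-web -q`:
mock_ids='899362727b48
49352768c5dd
7e29b5afd1be'

# Word splitting turns the three lines into three arguments, exactly as
# $(docker network ls -f name=my-web -q) would inside the real command.
echo docker network rm $mock_ids
```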

Containers on a custom bridge can reach each other by container name

1. Create the custom network my-web-net01:

# docker network create my-web-net01

2. Create containers t01 and t02 on the my-web-net01 network:

# docker run -idt --network=my-web-net01 --name t01 busybox /bin/sh

# docker run -idt --network=my-web-net01 --name t02 busybox /bin/sh

3. From t01 and t02, ping the other container by name to verify connectivity:

# docker exec -it t01 ping t02

# docker exec -it t02 ping t01

Conclusion: on the same custom network, containers can reach each other by container name.

Containers on the default bridge cannot reach each other by container name

1. Create containers t03 and t04 on the default bridge:

# docker run -idt --name=t03 busybox /bin/sh

# docker run -idt --name=t04 busybox /bin/sh

2. From t03 and t04, ping the other container by name to verify connectivity:

# docker exec -it t03 ping t04

ping: bad address 't04'

# docker exec -it t04 ping t03

ping: bad address 't03'

3. From t03 and t04, ping the other container by IP address to verify connectivity:

# docker exec -it t03 ifconfig

# docker exec -it t04 ifconfig

# docker exec -it t03 ping 172.17.0.6

# docker exec -it t04 ping 172.17.0.4

Conclusion: on the default bridge, containers cannot reach each other by container name, but they can by IP address.

Containers on custom bridges can switch network attachments dynamically

1. Create the custom bridge my-web-net02:

# docker network create my-web-net02

2. Attach container t02, already on the my-web-net01 bridge, to the my-web-net02 network as well, then inspect t02's networking. As Figure 1.1 shows, t02 has gained an extra eth1 interface.

# docker network connect my-web-net02 t02


Figure 1.1

3. Disconnect t02 from the my-web-net01 network, then look at t02's network attachments again. As Figure 1.2 shows, only the eth1 interface remains.

# docker network disconnect my-web-net01 t02

Figure 1.2

4. Verify once more whether t01 can communicate with t02, as shown in Figure 1.3.


Figure 1.3

Conclusion: they can no longer communicate, not even by IP, because t01 is on my-web-net01 while t02 is now on my-web-net02.

5. Reattach t02 to the my-web-net01 network and verify connectivity, as shown in Figure 1.4.

# docker network connect my-web-net01 t02


Figure 1.4

Conclusion: t01 and t02 can communicate again.

Attaching a container from the default bridge to a custom bridge, and verifying connectivity

1. Attach t03 (on the default bridge) to the custom network my-web-net01, then verify t03's connectivity with t01 and with t04, as shown in Figure 1.5.



Figure 1.5

2. When a container is disconnected from a network and reconnected to it, its NIC name will change, as shown in Figure 1.6.


Figure 1.6

Summary

Whether a container is on the default bridge or a custom bridge, it supports dynamically switching network attachments. Note, however, that a container's NIC name changes when it switches networks (the name is restored after the container restarts), which will cause problems for applications bound to a specific NIC name, such as Alibaba's Tair cache.

Where there's a will, there's a way. Keep going!


Origin blog.51cto.com/firefly222/2462603