[Docker] Study Notes (Part 2)

When Docker is installed, it automatically creates three networks: bridge (newly created containers connect to this network by default), none, and host.

Network mode
Introduction
Host: the container does not virtualize its own network interface or configure its own IP; it uses the host's IP and ports directly.
Bridge: this mode allocates and sets an IP for each container, attaches the container to the docker0 virtual bridge, and lets it communicate with the host through the docker0 bridge and iptables NAT rules.
None: this mode disables the container's networking.
Container: the new container does not create its own network interface or configure its own IP; it shares the IP and port range of a specified existing container.

1. Default network

When you install Docker, it automatically creates three networks. You can list them with the docker network ls command:

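A typical listing looks something like this (the IDs shown here are illustrative and will differ on your host):

```shell
docker network ls
# NETWORK ID     NAME      DRIVER    SCOPE
# 9f0ad533d46b   bridge    bridge    local
# 0bd1b1f0e581   host      host      local
# 6e5cf5ae792a   none      null      local
```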

Docker builds these three networks itself; when you run a container, you can use the --network flag to specify which network the container should connect to.

The bridge network is represented by docker0 and exists in every Docker installation. Unless you specify otherwise with the docker run --network=XXX option, the Docker daemon connects containers to this network by default.


When creating a container with docker run, you can use the --net option to specify the container's network mode. Docker has the following four network modes:

  • host mode: specified with --net=host
  • none mode: specified with --net=none
  • bridge mode: specified with --net=bridge (the default)
  • container mode: specified with --net=container:NAME_or_ID

2. Host mode

It is comparable to bridged mode in VMware: the container is on the same network as the host machine, but it does not have a separate IP address.

As is well known, Docker uses Linux Namespaces for resource isolation: a PID Namespace isolates processes, a Mount Namespace isolates the file system, a Network Namespace isolates the network, and so on.

A Network Namespace provides an independent network environment: its network interfaces, routing tables, iptables rules, and so on are isolated from other Network Namespaces. A Docker container is normally assigned its own Network Namespace. If host mode is used when starting the container, however, the container does not get an independent Network Namespace but shares one with the host. The container does not virtualize its own network interface or configure its own IP; it uses the host's IP and ports directly.


As you can see, the container uses the host's network (here, my Tencent Cloud intranet IP), while other aspects of the container, such as the file system and process list, remain isolated from the host.
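A minimal sketch of host mode (assuming an nginx image is available; the container name nginx_host is illustrative):

```shell
# No -p mapping is needed: in host mode the container binds
# the host's port 80 directly
docker run -d --name nginx_host --net=host nginx

# The service answers on the host's own address
curl -s http://localhost:80 | head -n 5
```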

3. Container mode

After understanding host mode, this mode is easy to grasp. It specifies that the newly created container shares a Network Namespace with an existing container, rather than with the host. The new container does not create its own network interface or configure its own IP, but shares the IP, port range, and so on with the specified container. As before, everything other than the network, such as the file system and process list, remains isolated between the two containers. The processes of the two containers can communicate through the lo loopback interface.
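A minimal sketch of container mode (assuming a busybox image; the container name base is illustrative):

```shell
# Start a "base" container in the default bridge mode
docker run -d --name base busybox sleep 3600

# A second container sharing base's Network Namespace sees
# exactly the same interfaces and IP address as base
docker run --rm --net=container:base busybox ip addr
```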

4. None mode

This mode gives the container its own network stack but performs no configuration on it. In effect, it disables the container's networking, which is useful when the container does not need a network at all (for example, a batch job that only writes to disk volumes).
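A minimal sketch of none mode (assuming a busybox image): the container ends up with only the loopback interface.

```shell
# In none mode, ip addr inside the container shows only "lo";
# no eth0 is created
docker run --rm --net=none busybox ip addr
```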

5. Bridge mode

Comparable to NAT mode in VMware: the container uses an independent Network Namespace and is attached to the docker0 virtual bridge (this is the default mode). Containers communicate with the host through the docker0 bridge and iptables NAT rules. Bridge mode is Docker's default network setting: it allocates a Network Namespace, sets an IP, and so on for each container, and connects all Docker containers on a host to the same virtual bridge. This mode is discussed in detail below.

5.1 Topology of Bridge Mode

When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and containers started on this host are attached to it. The virtual bridge works like a physical switch, so all containers on the host are connected to a Layer 2 network through it. The next step is to assign an IP to each container. Docker picks an IP address and subnet different from the host's from the private ranges defined in RFC 1918 and assigns it to docker0; containers attached to docker0 then take an unused IP from that subnet. For example, Docker here uses the 172.18.0.0/16 segment and assigns 172.18.0.1/16 to the docker0 bridge (visible on the host with the ip addr command; docker0 can be thought of as the bridge's management interface, acting as a virtual network card on the host).

This is probably because my Tencent Cloud intranet uses 172.17.XX.XX, so docker0 defaulted to the 172.18 segment instead.

The network topology in a stand-alone environment: each container's eth0 connects through a veth pair to the docker0 bridge on the host, which in turn reaches the outside world through the host's physical interface.
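On the host, the bridge side of this topology can be checked directly (a quick sketch; addresses and interface names will vary):

```shell
# docker0 appears as an ordinary interface on the host and
# carries the gateway address of the container subnet
ip addr show docker0

# List which veth interfaces are attached to the bridge
ip link show master docker0
```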

5.2 Detailed Explanation of Bridge Network Mode

Docker completes the above network configuration process roughly like this:

  1. Create a veth pair (a pair of virtual network interfaces) on the host. Veth devices always come in pairs and form a data channel: data that enters one device comes out of the other. Veth pairs are therefore commonly used to connect two network devices.

  2. Docker places one end of the veth pair inside the newly created container and names it eth0. The other end stays on the host, named something like veth5f56268, and is added to the docker0 bridge, which can be verified with the brctl show command.

    [root@VM-0-4-centos ~]# brctl show
    bridge name	  bridge id	          STP enabled     interfaces
    docker0	      8000.0242652b8c9d   no              veth5f56268
    
  3. Assign an IP from the docker0 subnet to the container and set the IP address of docker0 as the default gateway of the container.

    Run the container:

    docker run --name=nginx_bridge --net=bridge -p 80:80 -d nginx
    

    View the container:

    docker ps
    


    docker inspect nginx_bridge
    


    docker network inspect bridge
    


6. --link

Consider a scenario: we have written a microservice. How can we reach another container by name, so that the project does not break when that container's IP changes?


We can use --link to solve this:

docker run -d -P --name tomcat03 --link tomcat02 tomcat


However, this connection is one-way:

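The one-way behavior comes from how --link works: it simply writes a hosts entry for the linked container into the new container's /etc/hosts, and nothing in the other direction (a sketch using the tomcat02/tomcat03 names from above):

```shell
# tomcat03 was started with --link tomcat02, so its hosts file
# contains an entry mapping the name tomcat02 to tomcat02's IP
docker exec tomcat03 cat /etc/hosts

# tomcat02 has no corresponding entry for tomcat03,
# which is why pinging in the reverse direction fails
docker exec tomcat02 cat /etc/hosts
```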

--link is a legacy feature; real development does not use it.

7. Custom network

It is recommended to use custom bridge networks to control which containers can communicate with each other; they also provide automatic DNS resolution of container names to IP addresses. Docker supplies default network drivers for creating these networks: you can create a new bridge, overlay, or macvlan network, or use a network plugin or remote network for complete customization and control.

You can create as many networks as you need and connect containers to zero or more of them at any time. You can also connect and disconnect running containers from networks without restarting them. When a container is connected to multiple networks, its external connectivity is provided by the first non-internal network, in lexical order.
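For example, attaching and detaching a running container needs no restart (a sketch; the container name web is hypothetical):

```shell
# Attach a running container named "web" to mynet on the fly
docker network connect mynet web

# Detach it again; the container keeps running throughout
docker network disconnect mynet web
```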

A bridge network is the most commonly used network type in Docker. A user-defined bridge network is similar to the default bridge network, but adds some new features and removes some legacy ones. The following example creates a bridge network and runs a few experiments with containers on it.

docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet


Our own network has been created:

docker network inspect mynet


Start the container with the created network:

docker run -d -P --name tomcat-net-01 --net mynet tomcat
docker run -d -P --name tomcat-net-02 --net mynet tomcat

View the created network again:

docker network inspect mynet


At this point the two containers can ping each other by name:

docker exec -it tomcat-net-01 ping tomcat-net-02
docker exec -it tomcat-net-02 ping tomcat-net-01

The two containers can ping each other!

8. Network connectivity

Containers on docker0 and containers on mynet cannot reach each other by default, so the networks need to be connected. Again using tomcat as an example:

docker run -d -P --name tomcat01 tomcat

Now connect tomcat01 to mynet

docker network connect mynet tomcat01

Now check out mynet

docker network inspect mynet


One container now has two IPs, similar to a cloud server with both a public IP and an intranet IP.

At this point, you can ping through again


But tomcat02, which was never connected to mynet, still cannot ping the mynet containers:



Origin blog.csdn.net/dreaming_coder/article/details/113881431