Docker quick start 4 - Docker network

On the host where the Docker engine is installed there is a virtual network device named docker0, with the IP address 172.17.0.1. You can think of it as a virtual switch (bridge). When you create a container (the default network mode is bridge), a virtual network link (a veth pair) is created at the same time, with one end placed inside the container and the other end attached to the docker0 virtual switch. The virtual network card inside the container is assigned an IP from the 172.17.0.0/16 network by default.
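These address relationships can be checked quickly with Python's stdlib `ipaddress` module (the addresses below are the Docker defaults mentioned above; `container_ip` is just a typical first-container address, not something read from a live system):

```python
import ipaddress

# Default docker0 bridge values from the text above.
bridge_net = ipaddress.ip_network("172.17.0.0/16")
gateway = ipaddress.ip_address("172.17.0.1")        # address of docker0 itself
container_ip = ipaddress.ip_address("172.17.0.2")   # a typical first container IP

print(gateway in bridge_net)       # the bridge address lies inside the subnet -> True
print(container_ip in bridge_net)  # container IPs come from the same subnet -> True
print(bridge_net.num_addresses)    # a /16 holds 65536 addresses
```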

root@node01:~# docker container ls
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS               NAMES
f705f6f4779a        busybox:latest       "sh"                     7 minutes ago       Up 7 minutes                            bbox01
83436ed405c7        busybox-httpd:v0.2   "/bin/httpd -f -h /d…"   45 minutes ago      Up 45 minutes                           httpd-01
# install the bridge management tools
root@node01:~# apt-get install bridge-utils
root@node01:~# brctl show  # show bridges
bridge name bridge id       STP enabled interfaces
docker0     8000.02425749873b   no      veth9cb81f9
                                                                    veth9f1b4f7

root@node01:~# ip link show
...
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:57:49:87:3b brd ff:ff:ff:ff:ff:ff
13: veth9f1b4f7@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 26:8d:9e:92:aa:a6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
21: veth9cb81f9@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 1a:94:6b:46:8a:8c brd ff:ff:ff:ff:ff:ff link-netnsid 1

When a container wants to access resources outside the host, address masquerading (SNAT) is performed. By default this is implemented with iptables:

root@node01:~# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 21 packets, 2248 bytes)
 pkts bytes target     prot opt in     out     source               destination
    4   256 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 18 packets, 2046 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 1545 packets, 116K bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 1545 packets, 116K bytes)
 pkts bytes target     prot opt in     out     source               destination
    3   202 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0

The key rule here is:

Chain POSTROUTING (policy ACCEPT 1545 packets, 116K bytes)
 pkts bytes target     prot opt in     out     source               destination
    3   202 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0

It means that traffic originating from any address in the 172.17.0.0/16 network, when leaving through any interface other than docker0 (that is, when accessing resources outside the host), is subject to MASQUERADE.
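The decision that rule encodes can be sketched as a small predicate (a simplified model of the POSTROUTING rule above, not real netfilter code; the interface and subnet values are taken from the output shown):

```python
import ipaddress

DOCKER_NET = ipaddress.ip_network("172.17.0.0/16")

def should_masquerade(src_ip: str, out_iface: str) -> bool:
    """Model of the POSTROUTING rule: SNAT traffic that originates in the
    container subnet and leaves through any interface other than docker0."""
    return ipaddress.ip_address(src_ip) in DOCKER_NET and out_iface != "docker0"

print(should_masquerade("172.17.0.2", "ens33"))      # container -> outside world: True
print(should_masquerade("172.17.0.2", "docker0"))    # container -> another container: False
print(should_masquerade("192.168.101.40", "ens33"))  # host's own traffic: False
```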

Docker's network models


The first type: Closed container. This kind of container has only a loopback address and cannot make network requests.

The second type: Bridged container, a bridged network; this is the default network mode when creating a container.

The third type: Joined container, a joined network; multiple containers share the UTS, IPC, and NET namespaces, so they have the same hostname and the same network devices.

The fourth type: Open container, an open network; the container shares the host's network namespace.
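The four models map onto the `--network` option of `docker container run` roughly as follows (a summary of the list above; `<name>` is a placeholder for another container's name):

```python
# How each network model is selected with `docker container run`.
NETWORK_MODELS = {
    "closed":  "--network none",              # loopback only
    "bridged": "--network bridge",            # the default
    "joined":  "--network container:<name>",  # share another container's netns
    "open":    "--network host",              # share the host's netns
}

for model, flag in NETWORK_MODELS.items():
    print(f"{model:8s} {flag}")
```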

Network namespace exploration

In order not to affect the environment on node01, open another host, node02. First create two network namespaces:

root@node02:~# ip netns add ns01
root@node02:~# ip netns add ns02
root@node02:~# ip netns list
ns02
ns01

Create a pair of virtual network devices (a veth pair):

root@node02:~# ip link add name veth1.1 type veth peer name veth1.2
root@node02:~# ip link show type veth
3: [email protected]: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 4a:1c:b7:38:0f:5e brd ff:ff:ff:ff:ff:ff
4: [email protected]: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 36:72:d3:88:4c:5d brd ff:ff:ff:ff:ff:ff

Move one end of the veth pair into the ns01 namespace:

root@node02:~# ip link set dev veth1.2 netns ns01
root@node02:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:aa:9b:4f brd ff:ff:ff:ff:ff:ff
4: veth1.1@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 36:72:d3:88:4c:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
# list the network devices in the ns01 namespace
root@node02:~# ip netns exec ns01 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1.2: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 4a:1c:b7:38:0f:5e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@node02:~# ip netns exec ns01 ip link set dev veth1.2 name eth0  # the device name can also be changed
root@node02:~# ip netns exec ns01 ifconfig -a
eth0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 4a:1c:b7:38:0f:5e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Now only veth1.1 remains on the host; veth1.2 has been moved into the ns01 namespace.

Configure IP addresses for the two virtual devices and bring them up:

root@node02:~# ifconfig veth1.1 10.0.0.1/24 up
root@node02:~# ip netns exec ns01 ifconfig eth0 10.0.0.2/24 up
root@node02:~# ip netns exec ns01 ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.2  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::481c:b7ff:fe38:f5e  prefixlen 64  scopeid 0x20<link>
        ether 4a:1c:b7:38:0f:5e  txqueuelen 1000  (Ethernet)
        RX packets 9  bytes 726 (726.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@node02:~# ifconfig veth1.1
veth1.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::3472:d3ff:fe88:4c5d  prefixlen 64  scopeid 0x20<link>
        ether 36:72:d3:88:4c:5d  txqueuelen 1000  (Ethernet)
        RX packets 10  bytes 796 (796.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 796 (796.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Test the connectivity of virtual network cards in different namespaces

root@node02:~# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.059 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.091 ms
64 bytes from 10.0.0.2: icmp_seq=5 ttl=64 time=0.058 ms
^C
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4031ms
rtt min/avg/max/mdev = 0.043/0.062/0.091/0.015 ms
root@node02:~# ip netns exec ns01 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.087 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.084 ms
^C
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3040ms
rtt min/avg/max/mdev = 0.020/0.059/0.087/0.029 ms

You can also move veth1.1 from the host into the ns02 namespace:

root@node02:~# ip link set dev veth1.1 netns ns02
root@node02:~# ip netns exec ns02 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1.1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 36:72:d3:88:4c:5d  txqueuelen 1000  (Ethernet)
        RX packets 23  bytes 1874 (1.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23  bytes 1874 (1.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# the IP address is lost after the move and must be configured again
root@node02:~# ip netns exec ns02 ifconfig veth1.1 10.0.0.3/24 up
root@node02:~# ip netns exec ns02 ifconfig
veth1.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.3  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::3472:d3ff:fe88:4c5d  prefixlen 64  scopeid 0x20<link>
        ether 36:72:d3:88:4c:5d  txqueuelen 1000  (Ethernet)
        RX packets 25  bytes 2054 (2.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 42  bytes 3048 (3.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@node02:~# ip netns exec ns02 ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.060 ms
^C
--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.060/0.084/0.132/0.034 ms

Service exposure

First, let's review some options used when running a container:

root@node01:~# docker container run \
--name bbox-03 \
-i \
-t \
--network bridge \
--hostname bbox03.learn.io \
--add-host b.163.com:1.1.1.1 \
--add-host c.163.com:2.2.2.2 \
--dns 114.114.114.114 \
--dns 8.8.8.8 \
--rm \
busybox:latest
--network   specify the network model the container uses: none, host, or bridge; the default is bridge
--hostname  specify the container's hostname; if not given, the container ID is used
--add-host  add a resolution record to the container's /etc/hosts; may be used multiple times
--dns       set a DNS server for the container; may be used multiple times
--rm        automatically remove the container when it exits

There are four forms of service exposure:

docker container run -p <containerPort> maps the container port to a dynamic port on all addresses of the host

docker container run -p <hostPort>:<containerPort> maps the container port to the specified port on all addresses of the host

docker container run -p <ip>::<containerPort> maps the container port to a dynamic port on the specified host IP

docker container run -p <ip>:<hostPort>:<containerPort> maps the container port to the specified port on the specified host IP

To expose multiple ports, -p can be used multiple times.
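The four forms can be told apart by the number of colon-separated fields; a minimal parser sketch (the function and field names are my own, not part of Docker):

```python
def parse_publish(spec: str):
    """Split a -p specification into (host_ip, host_port, container_port).
    An empty host field means 'all addresses' / 'a dynamic port'."""
    parts = spec.split(":")
    if len(parts) == 1:                        # -p <containerPort>
        return ("", "", parts[0])
    if len(parts) == 2:                        # -p <hostPort>:<containerPort>
        return ("", parts[0], parts[1])
    if len(parts) == 3:                        # -p <ip>:<hostPort>:<containerPort>
        return (parts[0], parts[1], parts[2])  # hostPort may be empty: <ip>::<port>
    raise ValueError(f"bad -p spec: {spec}")

print(parse_publish("80"))                     # ('', '', '80')
print(parse_publish("8080:80"))                # ('', '8080', '80')
print(parse_publish("192.168.101.40::80"))     # ('192.168.101.40', '', '80')
print(parse_publish("192.168.101.40:8080:80")) # ('192.168.101.40', '8080', '80')
```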

root@node01:~# docker container run -i -t --name httpd-01 --rm -p 80 busybox-httpd:v0.2

root@node01:~# docker container ls
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                   NAMES
3708cbbc6a99        busybox-httpd:v0.2   "/bin/httpd -f -h /d…"   10 seconds ago      Up 9 seconds        0.0.0.0:32768->80/tcp   httpd-01
root@node01:~# docker port httpd-01  # show the port mappings
80/tcp -> 0.0.0.0:32768

With -p 80:80:

root@node01:~# docker port httpd-01
80/tcp -> 0.0.0.0:80

With -p 192.168.101.40::80:

root@node01:~# docker port httpd-01
80/tcp -> 192.168.101.40:32768

With -p 192.168.101.40:8080:80:

root@node01:~# docker port httpd-01
80/tcp -> 192.168.101.40:8080

Joined containers and the host network

Multiple Docker containers can share a network namespace, that is, multiple containers can share the same network devices.

First run a container based on the busybox:latest image:

root@node01:~# docker container run -i -t --rm --hostname b1 --name bbox-01 busybox:latest
/ # hostname
b1
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1116 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Run another container from a second terminal with the --network container:bbox-01 option:

root@node01:~# docker container run -i -t --rm --hostname b2 --name bbox-02 --network container:bbox-01  busybox:latest
docker: Error response from daemon: conflicting options: hostname and the network mode.
See 'docker run --help'.
root@node01:~# docker container run -i -t --rm  --name bbox-02 --network container:bbox-01  busybox:latest
/ # hostname
b1
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1116 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The two containers bbox-01 and bbox-02 have identical network addresses. Note also that --network container:bbox-01 conflicts with the --hostname option, and the two containers end up with the same hostname: the containers share both the network namespace and the UTS (hostname) namespace.

To further verify that the two containers share the network namespace, start an httpd service in the container running in the first terminal:

/ # echo "Hello Word." > /tmp/index.html
/ # httpd -h /tmp
/ # netstat -tan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN

Then check the listening sockets from the container in the second terminal:

 # netstat -tanl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN
/ # wget -O - -q http://localhost
Hello Word.
/ #

Port 80 is listening there as well.

Since two containers can share a network namespace, a container can likewise share the host's network:

root@node01:~# docker container run -i -t --rm --name bbox-04 --network host busybox:latest
/ # hostname
node01
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:57:49:87:3B
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:57ff:fe49:873b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:927 (927.0 B)  TX bytes:3376 (3.2 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:96:48:2C
          inet addr:192.168.101.40  Bcast:192.168.101.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe96:482c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:34294 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15471 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:22539440 (21.4 MiB)  TX bytes:1727705 (1.6 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:290 errors:0 dropped:0 overruns:0 frame:0
          TX packets:290 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28034 (27.3 KiB)  TX bytes:28034 (27.3 KiB)

The hostname and network devices seen are all the host's. This way, a service running in the container and listening on a port can be reached from outside via the host's network address. The advantage is that the program is packaged in the container while the network is the host's: if the host fails, or you need to deploy the program on several machines, you simply copy the image to other hosts running the Docker engine and run it, which makes deployment simple.

Customizing docker0 and the daemon's listening sockets

Defining docker0 attributes

By default, the docker0 virtual device has the address 172.17.0.1, and containers are assigned addresses from the 172.17.0.0/16 subnet. A container's default nameserver is the one used by the host, and its default gateway points to docker0's IP address. All of this can be customized:

# customize the docker0 bridge's network attributes in /etc/docker/daemon.json
{
    "bip": "10.1.0.1/16",
    "fixed-cidr": "10.1.0.0/16",
    "fixed-cidr-v6": "",
    "mtu": 1500,
    "default-gateway": "",
    "default-gateway-v6": "",
    "dns": ["",""]
}

The most important key is bip (bridge IP); most of the other values can be computed from it. To change docker0's network address and the IP range assigned to containers, just modify bip and restart the Docker daemon.
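For example, both docker0's address and the container subnet fall out of the `bip` value from the sample file above (a quick check with the stdlib `ipaddress` module):

```python
import ipaddress

bip = "10.1.0.1/16"                 # the bip value from the daemon.json above
iface = ipaddress.ip_interface(bip)

print(iface.ip)       # docker0's address, and the containers' default gateway
print(iface.network)  # subnet that container IPs are allocated from
```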

Making dockerd listen on a network socket

Method one

The dockerd daemon follows a C/S model. By default it listens on a Unix socket at /var/run/docker.sock. To also use a TCP socket, add the hosts key to /etc/docker/daemon.json:

"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
root@node01:~# vim /etc/docker/daemon.json
{
        "registry-mirrors": [
                "https://1nj0zren.mirror.aliyuncs.com",
                "https://docker.mirrors.ustc.edu.cn",
                "http://registry.docker-cn.com"
        ],
        "insecure-registries": [
                "docker.mirrors.ustc.edu.cn"
        ],
        "debug": true,
        "experimental": true,
        "hosts": ["unix:///var/run/docker.sock","tcp://0.0.0.0:2375"]
}
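Because dockerd refuses to start on malformed JSON (a missing colon or comma is enough), it is worth validating the file after editing. A quick check with the stdlib `json` module (the content string below mirrors the hosts example above; in practice you would read /etc/docker/daemon.json instead):

```python
import json

# A trimmed-down daemon.json as a string, standing in for the real file.
content = """
{
    "debug": true,
    "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
"""
cfg = json.loads(content)  # raises json.JSONDecodeError on a syntax error
print(cfg["hosts"])
```

A one-line equivalent from the shell is `python3 -m json.tool /etc/docker/daemon.json`, which prints the parsed file or an error message.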

Stop dockerd:

root@node01:/lib/systemd/system# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket

There is a warning message, and a subsequent attempt to start fails:

root@node01:/lib/systemd/system# systemctl start docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

Modify the /lib/systemd/system/docker.service file:

[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
change to
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
root@node01:/lib/systemd/system# systemctl daemon-reload  # reload required after changing docker.service
root@node01:/lib/systemd/system# systemctl start docker
root@node01:/lib/systemd/system# ss -tanl
State              Recv-Q              Send-Q                            Local Address:Port                            Peer Address:Port
LISTEN             0                   128                               127.0.0.53%lo:53                                   0.0.0.0:*
LISTEN             0                   128                                     0.0.0.0:22                                   0.0.0.0:*
LISTEN             0                   128                                           *:2375                                       *:*
LISTEN             0                   128                                        [::]:22                                      [::]:*

Port 2375 is now listening. Stopping docker may still print a warning, whose impact is unclear:

root@node01:/lib/systemd/system# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
root@node01:/lib/systemd/system# systemctl start docker
root@node01:/lib/systemd/system# ss -tanl | grep 2375
LISTEN   0         128                       *:2375                   *:*

Call the docker command on node02 to operate resources on node01:

root@node02:~# docker -H 192.168.101.40:2375 image ls
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
busybox-httpd            v0.2                985f056d206d        12 hours ago        1.22MB
zhaochj/httpd            v0.1                985f056d206d        12 hours ago        1.22MB
busybox-httpd            v0.1                806601ab5565        12 hours ago        1.22MB
nginx                    stable-alpine       8c1bfa967ebf        7 days ago          21.5MB
busybox                  latest              c7c37e472d31        2 weeks ago         1.22MB
quay.io/coreos/flannel   v0.12.0-amd64       4e9f801d2217        4 months ago        52.8MB

Method two

For more information, please refer to: https://docs.docker.com/engine/reference/commandline/dockerd/

Modify the /lib/systemd/system/docker.service file directly; there is no need to modify /etc/docker/daemon.json:

[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
change to
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
root@node01:/lib/systemd/system# systemctl daemon-reload
root@node01:/lib/systemd/system# systemctl stop docker
root@node01:/lib/systemd/system# systemctl start docker
root@node01:/lib/systemd/system# ss -tanl | grep 2375
LISTEN   0         128                       *:2375                   *:*

Docker considers listening on an unprotected network socket potentially risky and insecure, so enabling it is not recommended.

Origin blog.51cto.com/zhaochj/2536320