Docker Container Network Configuration

1. Network namespace creation in the Linux kernel

1.1 The ip netns command

Operations on network namespaces are performed with the ip netns command. ip netns comes from the iproute package, which is usually installed by default; if it is not, install it yourself.

Note: the ip netns command requires sudo privileges to modify the network configuration.
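
If the command is missing, the iproute package can usually be installed with the distribution's package manager. A sketch for a CentOS-style system (an assumption based on the prompts in this article; use apt on Debian/Ubuntu):

[root@localhost ~]# yum install -y iproute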

All related operations on network namespaces are done through ip netns subcommands; you can view the command's help information via ip netns help:

[root@localhost ~]# ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id

By default, a Linux system has no extra network namespaces, so the ip netns list command returns nothing.

1.2 Creating network namespaces

Create a namespace called ns0 with the following command:

[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns list
ns0

If a namespace with the same name already exists, the command reports the error Cannot create namespace file "/var/run/netns/ns0": File exists.

[root@localhost ~]# ls /var/run/netns/
ns0
[root@localhost ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
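
To remove a namespace, use the ip netns delete subcommand listed in the help output above; a quick sketch (after deleting, the name can be reused):

[root@localhost ~]# ip netns delete ns0
[root@localhost ~]# ip netns add ns0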

Each network namespace has its own network interfaces, routing table, ARP table, iptables rules, and other network-related resources.

1.3 Operating on network namespaces

The exec subcommand of ip netns executes a command inside the specified network namespace.

View the interface information of the newly created network namespace:

[root@localhost ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

You can see that the newly created network namespace comes with a lo loopback interface, which is down at this point. If you try to ping the lo loopback address now, you will be told the network is unreachable:

[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable

Bring up the lo loopback interface with the following command:

[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.031 ms

1.4 Transferring devices between namespaces

We can move a network device (such as a veth) between different network namespaces. Because a device can only belong to one network namespace at a time, it is no longer visible in the original namespace after the transfer (this is demonstrated in section 1.7 below).

veth devices are transferable, while many other device types (e.g., lo, vxlan, ppp, bridge) cannot be moved between namespaces.

1.5 veth pair

veth is short for virtual Ethernet. A veth pair is a pair of connected ports: every packet that enters one end comes out the other end, and vice versa.
veth pairs were introduced to enable direct communication between different network namespaces; you can use one to connect two network namespaces directly.

1.6 Creating a veth pair

[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:88:34:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.249.131/24 brd 192.168.249.255 scope global noprefixroute dynamic ens33
       valid_lft 1122sec preferred_lft 1122sec
    inet6 fe80::ad8c:10d3:b579:8614/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:38:4e:31:46 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ca:23:e1:4b:92:a3 brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f6:bd:9e:56:78:91 brd ff:ff:ff:ff:ff:ff

As you can see, the system now has an extra veth pair connecting the two virtual interfaces veth0 and veth1, both of which are in the "down" state at this point.
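
The bare ip link add type veth form lets the kernel pick the interface names (veth0 and veth1 here). The two ends can also be named explicitly at creation time; a sketch with hypothetical names:

[root@localhost ~]# ip link add veth-a type veth peer name veth-b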

1.7 Communicating between network namespaces

Here we use a veth pair to enable communication between two different network namespaces. We already created a network namespace named ns0 above; now create another network namespace named ns1:

[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0

Then add veth0 to ns0 and veth1 to ns1:

[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1

Next, configure an IP address on each end of the veth pair and bring the interfaces up:

[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 10.0.0.1/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip link set lo up
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1

Check the status of the veth pair:

[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ca:23:e1:4b:92:a3 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.0.0.1/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c823:e1ff:fe4b:92a3/64 scope link 
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f6:bd:9e:56:78:91 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::f4bd:9eff:fe56:7891/64 scope link 
       valid_lft forever preferred_lft forever

From the output above we can see that the veth pair is up and each veth device has been assigned its IP address. Now try to reach the address in ns0 from within ns1:

[root@localhost ~]# ip netns exec ns1 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.037 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.042 ms

As you can see, the veth pair successfully enables network communication between the two different network namespaces.

1.8 Renaming a veth device

[root@localhost ~]# ip netns exec ns0 ip link set veth0 down
[root@localhost ~]# ip netns exec ns0 ip link set dev veth0 name eth0
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether ca:23:e1:4b:92:a3 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.0.0.1/24 scope global eth0
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns0 ip link set eth0 up
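
Note that the interface is brought down first, because an active (up) interface cannot be renamed. To confirm the rename did not break connectivity, you can repeat the earlier ping from ns1 (assuming the addresses from section 1.7 are still configured):

[root@localhost ~]# ip netns exec ns1 ping -c 1 10.0.0.1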

2. Typical network configuration modes

2.1 Bridge Mode Configuration

[root@localhost ~]# docker run -it --name t1 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit
[root@localhost ~]# docker container ls -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox 
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit
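
Bridge is Docker's default network mode, which is why the two runs above produce the same result with and without --network bridge. To see the bridge network's subnet and gateway, you can inspect it:

[root@localhost ~]# docker network inspect bridge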

2.2 None Mode Configuration

[root@localhost ~]# docker run -it --name t1 --network none --rm busybox 
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ # exit

2.3 Container Mode Configuration

Start the first container

[root@localhost ~]# docker run -it --name b1 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Start a second container

[root@localhost ~]# docker run -it --name b2 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

You can see that the IP address of the container b2 is 172.17.0.3, different from the IP address of the first container, so the two are not sharing a network. If we instead start the second container in container mode, b2 can share the IP of the b1 container, i.e., the two share an IP but do not share a filesystem.

[root@localhost ~]# docker run -it --name b2 --rm --network container:b1 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Now create a directory in the b1 container:

/ # mkdir /tmp/data
/ # ls /tmp/
data

Check the /tmp directory in the b2 container: the directory is not there, because the filesystems are isolated; only the network is shared.

Deploy a site on the b2 container

/ # ls /tmp/
/ # echo 'hello world' > /tmp/index.html
/ # ls /tmp/
index.html
/ # httpd -h /tmp
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 :::80                   :::*                    LISTEN   

In the b1 container, access this site via the loopback address:

/ # wget -O - -q 127.0.0.1:80
hello world

Thus, in container mode, the relationship between the two containers is equivalent to that of two different processes on the same host.

2.4 Host Mode Configuration

Start a container directly in host mode:

[root@localhost ~]# docker run -it --name b2 --rm --network host busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:88:34:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.249.131/24 brd 192.168.249.255 scope global dynamic ens33
       valid_lft 1261sec preferred_lft 1261sec
    inet6 fe80::ad8c:10d3:b579:8614/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:38:4e:31:46 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:38ff:fe4e:3146/64 scope link 
       valid_lft forever preferred_lft forever

At this point, if we start an HTTP site in this container, we can access the container's site in a browser directly via the host machine's IP.
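
A minimal sketch of that, reusing the busybox httpd from section 2.3: serve a page from inside the host-mode container, then fetch it from another terminal via the host's own IP (192.168.249.131 in this article, assuming port 80 on the host is free):

/ # echo 'hello from host mode' > /tmp/index.html
/ # httpd -h /tmp

[root@localhost ~]# curl http://192.168.249.131
hello from host mode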

3. Common container operations

3.1 Viewing the container hostname

[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # hostname
80cbf4b742fa

3.2 Injecting a hostname when the container starts

[root@localhost ~]# docker run -it --name t1 --network bridge --hostname liping --rm busybox
/ # hostname
liping
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.2	liping     # injecting a hostname automatically creates a hostname-to-IP mapping
/ # cat /etc/resolv.conf 
# Generated by NetworkManager
search localdomain
nameserver 192.168.249.2    # DNS is also automatically set to the host's DNS
/ # ping www.baidu.com
PING www.baidu.com (14.215.177.38): 56 data bytes
64 bytes from 14.215.177.38: seq=0 ttl=127 time=70.037 ms
64 bytes from 14.215.177.38: seq=1 ttl=127 time=42.033 ms

3.3 Manually specifying the DNS to be used by the container

[root@localhost ~]#  docker run -it --name t1 --network bridge --hostname liping --dns 114.114.114.114 --rm busybox
/ # cat /etc/resolv.conf
search localdomain
nameserver 114.114.114.114
/ # nslookup -type=a www.baidu.com
Server:		114.114.114.114
Address:	114.114.114.114:53

Non-authoritative answer:
www.baidu.com	canonical name = www.a.shifen.com
Name:	www.a.shifen.com
Address: 14.215.177.38
Name:	www.a.shifen.com
Address: 14.215.177.39

3.4 Manually injecting a hostname-to-IP mapping into the /etc/hosts file

[root@localhost ~]# docker run -it --name t1 --network bridge --hostname liping --add-host www.a.com:1.1.1.1 --rm busybox
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
1.1.1.1	www.a.com
172.17.0.2	liping

3.5 Exposing container ports

docker run has a -p option that maps an application port inside the container to a port on the host machine, so that external hosts can reach the application inside the container through the host's port.

The -p option can be used multiple times, and the container port it exposes must actually be listening.

The -p option has the following formats:

  • -p <containerPort>
    • Maps the specified container port to a dynamic port on all host addresses
  • -p <hostPort>:<containerPort>
    • Maps the specified container port to the specified host port
  • -p <ip>::<containerPort>
    • Maps the specified container port to a dynamic port on the specified host IP
  • -p <ip>:<hostPort>:<containerPort>
    • Maps the specified container port to the specified port on the specified host IP

A dynamic port is simply a random port; use the docker port command to see the actual mapping result.

[root@localhost ~]# docker run --name web --rm -p 80 nginx

The command above stays in the foreground. Open a new terminal and check which host port the container's port 80 has been mapped to:

[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:32768

As you can see, port 80 of the container is exposed as port 32768 on the host machine. Now access this port from the host to see whether we can reach the site inside the container:

[root@localhost ~]# curl http://127.0.0.1:32768
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The iptables firewall rules for the port mapping are generated automatically when the container is created and are deleted automatically when the container is removed.
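
You can observe these rules while the container is running; a sketch (the exact output depends on your environment):

[root@localhost ~]# iptables -t nat -vnL DOCKER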

Map the container port to a random port of a specified IP:

[root@localhost ~]# docker run --name web --rm -p 192.168.249.131::80 nginx

Check the port mapping from another terminal:

[root@localhost ~]# docker port web
80/tcp -> 192.168.249.131:32768

Map the container port to a specified host port:

[root@localhost ~]# docker run --name web --rm -p 80:80 nginx

Check the port mapping from another terminal:

[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:80
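
The fourth form from the list in section 3.5, binding both a specific host IP and a specific port, is a straightforward combination; a sketch using this article's host address (the docker port output shown is what this mapping should produce):

[root@localhost ~]# docker run --name web --rm -p 192.168.249.131:8080:80 nginx

[root@localhost ~]# docker port web
80/tcp -> 192.168.249.131:8080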

3.6 Customizing the attributes of the docker0 bridge

To customize the attributes of the docker0 bridge, modify the configuration file /etc/docker/daemon.json:

{
    "bip": "192.168.249.125/24",
    "fixed-cidr": "192.168.249.125/25",
    "fixed-cidr-v6": "2001:db8::/64",
    "mtu": 1500,
    "default-gateway": "10.20.1.1",
    "default-gateway-v6": "2001:db8:abcd::89",
    "dns": ["10.20.1.2","10.20.1.3"]
}

The core option is bip, short for bridge IP, which specifies the IP address of the docker0 bridge itself; the other options can be derived from this address.
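
After editing /etc/docker/daemon.json, restart the Docker service and check that docker0 picked up the new address; a sketch:

[root@localhost ~]# systemctl restart docker
[root@localhost ~]# ip addr show docker0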

3.7 Docker remote connections

The dockerd daemon uses a client/server architecture. By default it listens only on a Unix socket (/var/run/docker.sock); if you want to use TCP as well, you need to modify the configuration file /etc/docker/daemon.json, add the following line, and then restart the docker service:

"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]

On the client side, pass the -H (or --host) option directly to docker to specify the host whose containers you want to control:

docker -H 192.168.249.131:2375 ps
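
Instead of passing -H on every invocation, the docker client also honors the DOCKER_HOST environment variable; a sketch:

[root@localhost ~]# export DOCKER_HOST="tcp://192.168.249.131:2375"
[root@localhost ~]# docker ps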

3.8 Creating a custom Docker bridge

Create an additional custom bridge, different from docker0

[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
2166306bfc82        bridge              bridge              local
6dc26f122a07        host                host                local
16b57e624a73        none                null                local
[root@localhost ~]# docker network create -d bridge --subnet "192.168.2.0/24" --gateway "192.168.2.1" br0
237a1cafe6e03a8f6d6b1eb4f7b26edc8928540b97bf6c03bebfb47b3a3edfa7
[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
237a1cafe6e0        br0                 bridge              local
2166306bfc82        bridge              bridge              local
6dc26f122a07        host                host                local
16b57e624a73        none                null                local

Use the newly created custom bridge to create a container:

[root@localhost ~]# docker run -it --name b1 --network br0 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
35: eth0@if36: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever

Create another container, this time using the default bridge:

[root@localhost ~]# docker run --name b2 -it busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
37: eth0@if38: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Think about it: can b1 and b2 communicate with each other at this point? If not, how could this communication be achieved?
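
They cannot: b1 is on br0 (192.168.2.0/24) and b2 is on docker0 (172.17.0.0/16), two separate bridges with no route between them. One possible fix (a sketch, not the only answer) is to attach b2 to br0 as well with docker network connect, giving it a second interface on the 192.168.2.0/24 subnet:

[root@localhost ~]# docker network connect br0 b2
[root@localhost ~]# docker exec b2 ping -c 1 192.168.2.2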

Source: www.cnblogs.com/liping0826/p/12641296.html