Playing with Linux network namespaces

Use the ip netns command to manage network namespaces

Create a network namespace called nstest.

[root@localhost ~]# ip netns add nstest

List the network namespaces that already exist on the system.

[root@localhost ~]# ip netns list
nstest

Delete a network namespace

[root@localhost ~]# ip netns delete nstest

Execute a command inside a network namespace.

[root@localhost ~]# ip netns exec nstest ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Command syntax:
ip netns exec [network namespace name] [command]

Use the ip command to configure network interfaces in a network namespace.

A network namespace created with ip netns add has a completely separate network stack, which you can configure as needed: add interfaces, assign IP addresses, set up routing rules, and so on. When a namespace is created this way, a loopback device (lo) is created in it by default. The device is not brought up automatically, so it is best to start it first.

[root@localhost ~]# ip netns exec nstest ip link set dev lo up

Create a veth pair, veth-a and veth-b, on the host.

[root@localhost ~]# ip link add veth-a type veth peer name veth-b

Move veth-b into the nstest network namespace; veth-a stays in the host.

[root@localhost ~]# ip link set veth-b netns nstest

Now the nstest network namespace has two interfaces, lo and veth-b. Verify this:

[root@localhost ~]# ip netns exec nstest ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: veth-b@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 4e:3c:eb:70:76:76 brd ff:ff:ff:ff:ff:ff link-netnsid 0

You can now assign IP addresses to the interfaces and bring them up.

Configure an IP on veth-a in the host and bring it up:

[root@localhost ~]# ip addr add 10.0.0.1/24 dev veth-a
[root@localhost ~]# ip link set dev veth-a up

Configure an IP on veth-b inside nstest and bring it up:

[root@localhost ~]# ip netns exec nstest ip addr add 10.0.0.2/24 dev veth-b
[root@localhost ~]# ip netns exec nstest ip link set dev veth-b up

After the two interfaces have been assigned IPs, a route is generated in each network namespace; view it with the ip route or route -n command.

[root@localhost ~]# ip route
default via 192.168.1.1 dev ens33 proto static metric 100 
10.0.0.0/24 dev veth-a proto kernel scope link src 10.0.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.220 metric 100 
In the nstest network namespace:
[root@localhost ~]# ip netns exec nstest ip route
10.0.0.0/24 dev veth-b proto kernel scope link src 10.0.0.2 

These two routes mean that IP packets whose destination address is in the 10.0.0.0/24 network are sent out through veth-a and veth-b respectively. The nstest network namespace now has its own interface, IP address, routing table, and so on; it behaves like a small virtual machine. Test its connectivity to confirm the configuration is correct.

Ping veth-b in the nstest network namespace from the host's veth-a:

[root@localhost ~]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.116 ms
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.116/0.272/0.429/0.157 ms

Ping the host's veth-a from veth-b in the nstest network namespace:

[root@localhost ~]# ip netns exec nstest ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.053 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.051 ms
^C
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.051/0.070/0.108/0.024 ms
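Note that the next experiment reuses the interface names veth-a and veth-b, so it is best to tear down the first experiment before continuing (the original post does not show this step). Deleting the nstest namespace also destroys the veth endpoint inside it, and removing one end of a veth pair removes the other end automatically, so a single delete cleans up everything:

```shell
# Clean up the first experiment: deleting nstest destroys veth-b inside it,
# and destroying one end of a veth pair removes its peer (veth-a) as well.
# Requires root; skip gracefully if the namespace is not there.
if ip netns list 2>/dev/null | grep -qw nstest; then
    ip netns delete nstest
else
    echo "nstest not present (or not root), nothing to do" >&2
fi
```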

Connecting two network namespaces

Create two network namespaces, ns1 and ns2.

[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns add ns2

Create a veth pair, veth-a and veth-b.

[root@localhost ~]# ip link add veth-a type veth peer name veth-b

Put one end of the pair in each namespace.

[root@localhost ~]# ip link set veth-a netns ns1
[root@localhost ~]# ip link set veth-b netns ns2

Bring up both interfaces.

[root@localhost ~]# ip netns exec ns1 ip link set dev veth-a up
[root@localhost ~]# ip netns exec ns2 ip link set dev veth-b up

Assign IP addresses.

[root@localhost ~]# ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth-a
[root@localhost ~]# ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth-b

Verify connectivity

[root@localhost ~]# ip netns exec ns1 ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.135 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.115 ms
^C
--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.048/0.194/0.480/0.168 ms
[root@localhost ~]# ip netns exec ns2 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.108 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.112 ms
^C
--- 10.0.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.097/0.105/0.112/0.013 ms

Connecting two network namespaces with a veth pair is like connecting two machines directly with a network cable.

If more network namespaces need to be connected, a virtual bridge has to be introduced, which is exactly how Docker puts containers on the same network.
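The bridge setup can be sketched as a script. This is a minimal example, not from the original post: the names br0, br-ns1..br-ns3, veth1..veth3 and the 10.0.1.0/24 subnet are all illustrative. Each namespace gets one end of a veth pair and the other end is enslaved to the bridge, which plays the same role docker0 plays for containers:

```shell
# Sketch: connect three network namespaces through a Linux bridge.
# Creating the bridge needs root and CAP_NET_ADMIN, so bail out politely
# if the first command fails.
if ! ip link add br0 type bridge 2>/dev/null; then
    echo "needs root + CAP_NET_ADMIN, skipping" >&2
else
    ip link set br0 up
    for i in 1 2 3; do
        ip netns add br-ns$i                   # one namespace per "machine"
        ip link add veth$i type veth peer name veth$i-br
        ip link set veth$i netns br-ns$i       # one end goes into the namespace
        ip link set veth$i-br master br0       # the other end plugs into the bridge
        ip link set veth$i-br up
        ip netns exec br-ns$i ip addr add 10.0.1.$i/24 dev veth$i
        ip netns exec br-ns$i ip link set veth$i up
    done
    ip netns exec br-ns1 ping -c 2 10.0.1.3    # any namespace can reach any other
fi
```

Because every veth peer hangs off the same bridge, all the namespaces share one L2 segment, so no extra routes are needed beyond the per-subnet route each interface generates.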

Source: www.cnblogs.com/liujunjun/p/12112455.html