This post introduces the principal components underlying Neutron's implementation.

1 Underlying Linux virtual network devices

1.1 tun / tap

tun and tap are virtual network devices in the Linux kernel; they are implemented by the kernel module tun.

tap emulates an Ethernet device (a NIC) and works at the data link layer; tun emulates a network-layer (point-to-point) device and works at the IP layer. Through the tun/tap driver, packets that the TCP/IP stack has processed can be handed over to the process using the tun/tap device.

It can be understood like this: a user-space program writing to a tun/tap device is like sending packets out of a physical port, and when the tun/tap device delivers (injects) packets into the OS protocol stack (here, the virtual machine's OS stack), it looks to that stack as if the packets had arrived on a physical port.

 

Adding a tun/tap virtual network device to the Linux kernel installs the driver and associates it with a character device, /dev/net/tun. This character device is the interface through which kernel space and user space (where, for example, the qemu process runs) exchange data. A user-space application such as qemu interacts with the tun/tap driver in the kernel through the character device. When the kernel sends a packet to the virtual network device, the packet is stored in a queue associated with the device until a user-space program reads it through the file descriptor obtained by opening the character device; the packet is then copied into a user-space buffer. The net effect is that the packet is delivered directly to user space (the host's user space, not the virtual machine's).
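As a hedged sketch of the user-space side of this exchange: a program attaches to a tap device by opening /dev/net/tun and issuing the TUNSETIFF ioctl. The constants below come from &lt;linux/if_tun.h&gt;; the device name tap0 is illustrative, and actually attaching requires root.

```python
import fcntl, os, struct

# Constants from <linux/if_tun.h>
TUNSETIFF = 0x400454CA
IFF_TUN = 0x0001    # layer-3 (IP) device
IFF_TAP = 0x0002    # layer-2 (Ethernet) device
IFF_NO_PI = 0x1000  # no extra packet-info header before each frame

def tun_ifreq(name: str, flags: int = IFF_TAP | IFF_NO_PI) -> bytes:
    # struct ifreq as TUNSETIFF expects it: 16-byte name, then 2-byte flags
    return struct.pack("16sH", name.encode(), flags)

def open_tap(name: str = "tap0") -> int:
    # Open the character device and attach it to (or create) the tap device.
    fd = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, tun_ifreq(name))
    # From here on, os.read(fd, n) returns Ethernet frames the kernel queued
    # on tap0, and os.write(fd, frame) injects frames into the kernel stack.
    return fd

if __name__ == "__main__" and os.geteuid() == 0:
    try:
        print("attached, fd =", open_tap())
    except OSError as exc:
        print("could not attach:", exc)
```

This is exactly the read/write loop qemu performs on the tap device backing a VM's NIC.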

 

The tun/tap driver consists of two parts: a character-device driver (which moves data between user space and kernel space on the host) and a NIC driver (also in the kernel). The NIC-driver part receives packets from the TCP/IP protocol stack and, in the other direction, hands received packets to the stack for processing. The character-device part carries packets between the kernel and user space, simulating transmission and reception over a physical link. The tun/tap driver neatly combines these two drivers.

 

tap is generally used as a virtual machine's virtual NIC; tun is generally used for VPN tunnels.

tap/tun connects the network and the protocol stack. You can think of a tun/tap device as a data pipe implemented by its NIC driver: one end connects to the host protocol stack and the other to a user program. The user-level program can be seen as another host on the network, attached through the tap virtual NIC. The way a process exchanges data with the protocol stack is the same as for a physical NIC, except that a physical NIC's far end is the physical network, while a tap/tun virtual NIC's far end is usually user space.

 

The difference between TUN and TAP devices lies in the protocol-stack layer at which they work. TAP is equivalent to an Ethernet device: the packets a user-level program reads and writes on a tap device are layer-2 Ethernet frames, which is why tap devices are most commonly used as virtual machine NICs. TUN emulates a network-layer device and operates on layer-3 packets such as IP packets; OpenVPN, for example, uses a TUN device to establish a VPN tunnel between client and server.

 

An example of a virtual machine exchanging data with the external network through its virtual NIC:

(1) The virtual machine sends data out through its NIC eth0. From the host's point of view, the user-level qemu-kvm process (P2) writes the data to the character device /dev/net/tun through file descriptor 26: write(fd, ...) -> N

(2) File descriptor 26 is associated with the virtual NIC tap0, so the host receives the data on the tap0 card: N -> E

(3) tap0 is attached to the bridge br0, so a bridging decision is needed to determine how to forward the packet: E -> D

(4) P2 is communicating with another host on the external network, so the bridge forwards the data to em2.100, and it finally leaves through the physical NIC em2: D -> C -> B -> A

 

As this walk-through shows, data sent by the VM's virtual NIC tap0 is injected directly into the host's link-layer bridging logic and forwarded to the external network; the packet never traverses the upper layers of the host's protocol stack. Consequently, iptables on the host, which works at the IP layer of the kernel stack, does not filter the virtual machine's packets.

 

An example with a tun device:

A tun device connects to the IP layer of the host kernel's protocol stack.

(1) An openvpn client is used to access a web service (the client, outside the network, is not shown in the figure)

(2) The client starts the openvpn client process and connects to the openvpn server

(3) The server pushes routing entries into the client machine's routing table, and a virtual NIC tun1 is generated (a tun device; both the openvpn client process and the openvpn server register tun virtual NICs, the server's being tun0)

(4) The client accesses the web service through a browser

(5) The packets the browser generates are routed at the IP layer of the protocol stack, and the routing decision sends them out through the virtual NIC tun1

(6) The other end of the virtual NIC tun1 is connected to the user-level openvpn client process

(7) The openvpn client process receives the original request packet

(8) The openvpn client encapsulates the original request packet and sends it as a VPN packet over UDP to port 9112 on the openvpn server: A -> T -> K -> R -> P1

(9) The openvpn server process receives the VPN packet, decapsulates it, and writes the data to /dev/net/tun through file descriptor 6: P1 -> write(fd, ...) -> N

(10) File descriptor 6 is associated with the virtual NIC tun0, so the host receives the packet on tun0: N -> H -> I

(11) The host makes a routing decision and, based on the packet's destination IP (the IP of the web site the user is accessing), sends it out through the corresponding NIC: I -> K -> T -> M -> A

 

In effect, the server process decapsulates the packets it receives and acts as a middleman: by the time the web server gets the request, it has already been unwrapped. This is why the technique is often used to give branch offices access to services on a company's headquarters network. The added routing entries create a dedicated channel and steer the direction of the data flow, SSL encrypts the payload for secure transmission, and the encapsulated packets themselves travel over ordinary UDP or TCP.

 

1.2 Linux Bridge

A bridge is a virtual switch working at layer 2 in the Linux kernel; it simulates the functions of a physical switch. Several network devices can be added to a bridge as its interfaces; a device added to a bridge is set to receive all layer-2 data frames and hand them to the bridge. Internally the bridge maintains a MAC table, based on which it forwards each frame to another interface, sends it to the upper-layer protocol, broadcasts it, or discards it.

A NIC added to a bridge works at the data link layer: its own IP configuration stops working and it becomes invisible to the routing system, but an IP address can be configured on the bridge itself. Only when the bridge has an IP set does it act as a routing-interface device, participate in IP-layer routing, and become able to deliver packets to the upper protocol stack.

 

An illustration of how the bridge processes packets: the data flow when the external network (A) sends data to the virtual machine qemu-kvm (P2):

(1) The packet first enters the physical NIC em2 (B); em2 then hands it to its vlan sub-device em2.100

(2) The bridge check (L) finds that the sub-device em2.100 is a bridge-port device, so the packet is not sent up to the protocol stack (T) but enters the bridge's processing logic instead; the packet thus flows from em2.100 (C) into br0

(3) The bridging decision (D) finds that the packet should be sent out through the interface tap0 (E); at this point the packet leaves the host's network protocol stack (G) and is delivered to user space through the character device /dev/net/tun (N), which the user-space qemu-kvm process has opened

(4) The qemu-kvm process reads the packet passed up from the kernel with a read system call on the character device

(5) The data flow is A -> B -> C -> E -> G -> N -> P2

If instead the packet enters through the host's em1 NIC (M), the bridge check (L) finds that em1 is not a bridge port, so the packet is sent (T) to the IP layer of the protocol stack, where a routing decision determines where the packet goes (A -> M -> T -> K)

 

Step (3) above involves a bridging decision, which chooses among several actions based on the packet's MAC address:

(1) If the packet's destination MAC is the bridge's own MAC (when br0 has an IP address configured), the host regards it, at the MAC layer, as a packet addressed to itself and passes it to the upper protocol stack (D -> J)

(2) Broadcast packets (destination address is the broadcast address) are forwarded to all interfaces on the bridge (br0, tap0, tap1, ...)

(3) Unicast, with the MAC present in the port-mapping table: look up the table and forward directly to the corresponding interface (e.g. D -> E)

(4) Unicast, with the MAC absent from the port-mapping table: flood to all interfaces attached to the bridge (br0, tap0, tap1, ...)

(5) If the packet's destination is not an interface attached to the bridge (one way this happens is a destination that nobody answered for during broadcast flooding), the bridge does not handle it and passes it to the upper protocol stack (D -> J)
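The decision steps above can be sketched as a small function. This is an illustration, not Neutron or kernel code: the MAC table maps a destination MAC to a bridge port, all names are made up, and case (5) is not modeled.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def bridging_decision(dst_mac, bridge_mac, mac_table, ports):
    # Sketch of the bridging decision: decide where a frame goes
    # based on its destination MAC.
    if dst_mac == bridge_mac:
        return ("to-upper-stack",)                # case (1): D -> J
    if dst_mac == BROADCAST:
        return ("forward", ports)                 # case (2): all interfaces
    if dst_mac in mac_table:
        return ("forward", [mac_table[dst_mac]])  # case (3): e.g. D -> E
    return ("forward", ports)                     # case (4): flood

ports = ["tap0", "tap1"]
table = {"52:54:00:aa:bb:cc": "tap0"}
print(bridging_decision("52:54:00:aa:bb:cc", "00:11:22:33:44:55", table, ports))
# ('forward', ['tap0'])
```

The real bridge also ages entries out of the MAC table and learns source MACs as frames arrive, which is omitted here.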

 

1.3 Veth-pair

First, the concept of a Linux network namespace. The Linux kernel provides six types of namespaces: pid, net (the network namespace), mnt, uts, ipc, and user. A network namespace is simply one kind of Linux namespace; each network namespace has its own independent network resources, such as network interfaces, routing tables, and iptables rules.

A namespace packages a global system resource into an abstraction and binds it to the processes inside the namespace, providing resource isolation. (Namespaces, together with cgroups (a resource-metering and limiting mechanism, roughly comparable to quota management), are among the core technologies behind the recent trend toward software containerization, e.g. Docker.)

 

A network namespace can be created with the ip netns command:

ip netns add ns1

veth devices always come in pairs: data sent into one end always emerges from the other end in the form of received data. veth works at the data link layer (L2), and a veth pair does not tamper with packet contents while forwarding.

The working principle of veth is: data written to one end changes direction and re-enters the kernel network subsystem; once the injection is complete, the other end can read that data, somewhat like a pipe.

A veth pair is often used to connect two network namespaces.
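As a sketch of how that connection is made, the following emits the ip commands one would run (as root) to join two namespaces with a veth pair; the namespace names, device names, and addresses are all illustrative.

```python
def veth_pair_cmds(ns_a="ns1", ns_b="ns2"):
    # Commands (run as root) to create two namespaces joined by a veth pair.
    return [
        f"ip netns add {ns_a}",
        f"ip netns add {ns_b}",
        "ip link add veth0 type veth peer name veth1",
        f"ip link set veth0 netns {ns_a}",        # move one end into ns1
        f"ip link set veth1 netns {ns_b}",        # and the other into ns2
        f"ip netns exec {ns_a} ip addr add 10.0.0.1/24 dev veth0",
        f"ip netns exec {ns_b} ip addr add 10.0.0.2/24 dev veth1",
        f"ip netns exec {ns_a} ip link set veth0 up",
        f"ip netns exec {ns_b} ip link set veth1 up",
    ]

for cmd in veth_pair_cmds():
    print(cmd)
```

After running these, a ping from ns1 to 10.0.0.2 should travel through the veth pair into ns2.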

 

Questions:

(1) How does OpenStack assign the MAC and IP?

OpenStack's port concept is defined and recorded in the database; a port can be seen as a port on a virtual switch. It defines a MAC address (the MAC shown for the virtual machine on that port, not the address shown on the host's tap device, which is created and attached only when the VM starts and is deleted when it shuts down) and an IP address. When an instance's VIF (virtual NIC) is bound to the port, the port's MAC and IP addresses are assigned to the VIF, and the kernel creates the tap device according to the information the port provides.

If, looking at the bridge afterwards, there is one more tap device than the one created above, that extra tap belongs to the dhcp service (compute nodes do not run the dhcp service, so running brctl show on a compute node will not show a dhcp tap device).

 

2 Introduction to Neutron network models

There are five network model modes: local, flat, VLAN, vxlan, and gre.

 

2.1 Local network mode

In this model the network is not attached to any physical NIC of the host. Each local network maintains a bridge, and the virtual NICs of VMs on the same local network are attached to that bridge, so only VMs on the same local network on the same host can communicate with one another; everything else is isolated from them.

 

2.2 Flat network mode

A flat network is an untagged network. It requires a physical NIC of the host to be attached directly to the bridge, which means each flat network needs a physical NIC of its own.

 

After creating a network, OpenStack usually also needs a subnet created for it, i.e. the network's IP pool and gateway are specified, along with whether to enable the dhcp service, etc. If the dhcp service is enabled, a tap interface is also attached to the bridge to carry dhcp traffic. For example, if the dhcp option was checked for the flat network just created, brctl will show that the bridge of that flat network has, besides the physical NIC, another tap attached: the dhcp service's tap.

 

Compute nodes have no dhcp service, so no extra device is attached to a compute node's bridge. A VM on a compute node can nevertheless obtain its IP and MAC from the dhcp service on the control node, because they are on the same network: a dhcp request broadcast sent from the compute node is also received on the control node's bridge, where the dhcp service's device picks it up and handles it.

With one VM on the control node and one on the compute node attached to the same flat network:

 

You can see that the two VMs communicate over the link between the physical NICs. Suppose the compute node's VM pings the control node's VM. The packet finds no destination port on the compute node's bridge, so it is sent out through eth1. Assuming the compute node's eth1 and the control node's eth1 are on the same switch (if not, the packet is routed and forwarded by the gateway), the packet is broadcast; the control node's eth1 receives it, finds that the destination IP is on its own bridge, and delivers the packet to the corresponding bridge interface, i.e. the tap of the control node's VM.

 

Questions:

(1) The flat network's physical NIC has no IP configured, so how does traffic reach the external network?

The physical NIC is attached to a bridge port; the port it connects through behaves like a switch trunk port, and packets bound for the external network leave through this physical port. This is still layer-2 frame forwarding, so no IP needs to be configured.

 

2.3 VLAN network mode

A vlan network is a tagged network and is the most widely used network type in practice.

See the figure below:

 

We created two vlans, 100 and 101, each attached to a bridge port on one end and to the physical NIC eth1 on the other. Taking eth1.100 as an example, packets from VM0 and VM1 carry tag 100 when they leave through eth1, and likewise VM2's packets carry tag 101; the tags are applied as the packets pass through eth1.100 (or eth1.101). Because the physical port eth1 carries two vlan segments, the switch port it connects to should be configured as a trunk port, not an access port.

eth1.100 and eth1.101 can be created with vconfig (or, on modern systems, with ip link add link eth1 name eth1.100 type vlan id 100).

Now look at the situation with two physical hosts:

 

The bridge names on the compute node and the control node are identical, indicating the same private network. Because of vlan isolation, however, the vlan100 and vlan101 VMs cannot communicate with each other; that requires layer-3 forwarding, which can be provided by a virtual router.

 

Questions:

(1) On a machine whose network is the .8 subnet with gateway .8.1, why does vlan 8 have to be configured on the switch before the machine can reach the external network?

I think this is a convention configured on the switch. Say your packet leaves through some port (suppose all ports start out in vlan 1); the switch checks whether your IP matches the subnet it has configured for vlan 1, and if not it simply drops the packet, so you cannot reach the external network.

For example, a switch with two vlans:

int vlan 10
ip add 10.1.1.2 255.255.255.0
ip default-gateway 10.1.1.1

int vlan 20
ip add 20.1.1.2 255.255.255.0
ip default-gateway 20.1.1.1

Here is a screenshot of an actual configuration:

 

You can see that each vlan has a corresponding network address configured, and a gateway as well. Setting a gateway here does not mean the switch must be configured as that gateway; the switch merely needs, say, an extra routing entry such as ip route 192.168.8.1 192.168.1.1, so that outbound traffic is routed to 192.168.1.1 for handling (put simply, the gateway setting just tells the switch how to forward the packet out).

So there is no absolute relationship between vlan 8 and whether the IP is on the .8 subnet; everything depends on the switch's configuration. The switch decides whether the setting inside vlan 8 (for example ip addr 192.168.8.2 255.255.255.0) is on the same subnet as your packet's IP. If vlan 8 is configured with ip addr 192.168.9.2 255.255.255.0, then a host with IP 192.168.8.100/24 cannot get out.

 

(2) Why does one network stop working when two NICs are configured on the same subnet?

What we observe is that the later-configured one can be pinged; the reason lies in the routing table.

Inspect the routing table with the route command and you will see entries like these:

192.168.8.0     0.0.0.0     255.255.255.0   U     0      0     0   eth2

192.168.8.0     0.0.0.0     255.255.255.0   U     0      0     0   eth1

When an external host pings the IP on eth1, the reply to an incoming ping packet always leaves through eth2 (although we want it to leave through eth1), but eth2's IP is not the address that was pinged.

 

One fix is to change the routing entries, for example adding a host route dedicated to eth1:

192.168.8.100     0.0.0.0     255.255.255.255   UH    0      0     0   eth1

Alternatively, give each NIC its own routing table and select it with ip rule, for example:

ip route add 192.168.8.0/24 dev eth1 src 192.168.8.100 table 100
ip rule add from 192.168.8.100 table 100

(3) Where exactly is the tag recorded?

It is recorded in the Ethernet frame; it is a data-link-layer concept.

Ethernet frame format:

|-----------------------------------------------------------------------------| 

| DMAC(6bytes) | SMAC(6bytes) | Ether-Type(2bytes) | DATA | 

|-----------------------------------------------------------------------------| 

 

802.1Q VLAN defines the encapsulation format of tagged data frames: a 4-byte VLAN field is inserted into the Ethernet frame header.

Ethernet frame format with a VLAN tag:

|-----------------------------------------------------------------------------|

| DMAC(6bytes) | SMAC(6bytes) | Ether-Type(0x8100) | VLAN(4bytes) | DATA |

|-----------------------------------------------------------------------------|

 

VLAN TAG format:

|---------------------------------------------------------------------------------| 

| PRI(3bits) | CFI(1bit) | TAG(12bits) | Ether-Type(2bytes) | DATA | 

|---------------------------------------------------------------------------------| 

PRI: the frame priority, commonly known as 802.1p.

CFI: the Canonical Format Indicator; 0 means canonical format, used for 802.3 or Ethernet II.

TAG: what we usually call the VLAN ID.

Ether-Type: identifies the type of the data that follows.
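The tag layout above can be checked in code. A small sketch that parses the PRI, CFI, and VLAN ID fields out of a raw 802.1Q frame (the sample frame is constructed here purely for illustration):

```python
import struct

# Parse the first VLAN tag (if any) out of a raw Ethernet frame.
# Returns (pri, cfi, vid, inner_ethertype) or None for untagged frames.
def parse_dot1q(frame: bytes):
    # DMAC(6) + SMAC(6), then the Ether-Type/TPID at offset 12
    tpid = struct.unpack_from("!H", frame, 12)[0]
    if tpid != 0x8100:          # not an 802.1Q-tagged frame
        return None
    tci, inner = struct.unpack_from("!HH", frame, 14)
    pri = tci >> 13             # PRI, 3 bits
    cfi = (tci >> 12) & 0x1     # CFI, 1 bit
    vid = tci & 0x0FFF          # TAG (VLAN ID), 12 bits
    return pri, cfi, vid, inner

# Build a minimal tagged frame for demonstration.
frame = (bytes.fromhex("ffffffffffff") +      # DMAC (broadcast)
         bytes.fromhex("001122334455") +      # SMAC
         struct.pack("!H", 0x8100) +          # TPID
         struct.pack("!H", (3 << 13) | 100) + # PRI=3, CFI=0, VID=100
         struct.pack("!H", 0x0800))           # inner Ether-Type = IPv4
print(parse_dot1q(frame))  # (3, 0, 100, 2048)
```

This is how eth1.100 style sub-devices decide, from the 12-bit VID, which vlan a frame belongs to.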

 

2.4 Vxlan network mode

vxlan is an overlay network; an overlay network is a network built on top of another network.

Currently linux-bridge supports only vxlan, while Open vSwitch supports both vxlan and gre networks.

Vxlan stands for Virtual eXtensible Local Area Network; like vlan, it provides a layer-2 Ethernet service.

 

Advantages of Vxlan over Vlan:

(1) More layer-2 segments: VLAN uses a 12-bit vlan id (at most 4094), while Vxlan uses a 24-bit vxlan id (the vnid, allowing 16777216)

(2) Better use of existing network paths: to avoid loops, VLAN relies on the Spanning Tree Protocol, which leaves half of the network paths unusable, whereas vxlan packets travel over layer 3 in UDP and can use all paths

(3) No exhaustion of physical switches' MAC tables: thanks to the tunneling mechanism, switches do not need to record virtual machines' MACs in their MAC tables

 

The Vxlan packet format prepends an 8-byte Vxlan Header (the vnid occupies 24 bits) to the link-layer frame:

 

By riding on UDP, Vxlan establishes a layer-2 tunnel across a layer-3 network.

Vxlan uses VTEPs to encapsulate and decapsulate Vxlan packets. Each VTEP has an IP interface with a configured IP address; the VTEP uses this IP to encapsulate link-layer frames and to transmit and receive Vxlan packets.
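The 8-byte header mentioned above can be illustrated in code; a sketch that builds and parses it, assuming the standard layout of flags(1) + reserved(3) + vni(3) + reserved(1):

```python
import struct

VXLAN_FLAGS = 0x08  # "I" flag set: a valid VNI is present

def vxlan_header(vni: int) -> bytes:
    # 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)
    return struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00\x00\x00",
                       vni.to_bytes(3, "big"), 0)

def parse_vni(header: bytes) -> int:
    assert header[0] & VXLAN_FLAGS, "VNI-present flag not set"
    return int.from_bytes(header[4:7], "big")

hdr = vxlan_header(34)           # e.g. the vxlan-34 interface mentioned below
print(len(hdr), parse_vni(hdr))  # 8 34
```

On the wire this header sits inside a UDP payload, so the full packet is outer MAC + outer IP + UDP + VXLAN header + original Ethernet frame.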

 

The VTEP IPs of the two ends serve as the source and destination IPs.

A VTEP can be implemented in hardware or in software. Software implementations include:

(1) Linux with the Vxlan kernel module

(2) Open vSwitch

 

How it is implemented:

(1) The Vxlan kernel module creates a UDP socket listening on port 8472

(2) When a Vxlan packet is received on the socket, it is decapsulated and, according to its vxlan id, forwarded to the corresponding vxlan interface, e.g. vxlan-34 (also a device attached to the bridge), which hands the packet to the bridge; the bridge then delivers it to the virtual machine

(3) When a packet to be sent out is received on the socket, it is encapsulated as a UDP packet and sent out through the NIC

 

Vxlan packet forwarding flow:

 

Host-A and Host-B can be viewed as the combination of each host's virtual machines and its vxlan-id device; VTEP-1 and VTEP-2 are the Vxlan kernel modules. The vxlan-id device is attached to the bridge on one end and talks to the kernel module's VTEP on the other.

(1) When virtual machine A sends data to virtual machine B, the data passes through VTEP-1, which looks up MAC-B in its mapping table and finds the VTEP-2 device; it encapsulates the data with that device's IP and MAC as the destination IP and MAC, and adds the Vxlan header.

(2) After the data is routed to VTEP-2, it is decapsulated: the outer MAC header, outer IP header, UDP header, and VXLAN header are stripped in turn. VTEP-2 then delivers the packet to virtual machine B according to the destination MAC address (in fact it is first forwarded to the bridge, which forwards it to the corresponding virtual machine).

 

Communication between the same vxlan id across two hosts:

 

2.5 Meaning of the configuration-file parameters

Configuration file: /etc/neutron/plugins/ml2/ml2_conf.ini

tenant_network_types: the network types tenants are allowed to create, e.g. vxlan

mechanism_drivers: the supported mechanism drivers; besides linuxbridge this can also include openvswitch. l2population improves the efficiency of the vxlan network mode: it lets each host's VTEP record which host's VTEP each virtual machine MAC sits behind, so no ARP broadcast is needed to locate a VM's MAC.

type_drivers: the supported network types, used to load the corresponding network-type driver code when the service starts.

Configuration file: /etc/neutron/plugins/ml2/linuxbridge_agent.ini

enable_vxlan: whether to enable vxlan mode

local_ip: the IP of the VTEP device

l2_population: whether to enable the l2population optimization

 

Here eth2 is specified as the NIC used for the provider network:
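Putting the options above together, a minimal sketch of the two files might look like this (all values are illustrative, not taken from a real deployment):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth2

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.11
l2_population = true
```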

 

3 How Neutron's dhcp service works

Neutron's dhcp service is implemented by the dhcp agent, which is deployed on the network node. It uses the open-source dnsmasq to provide the dhcp functionality; dnsmasq is open-source software offering DHCP and DNS services.

 

dnsmasq and an OpenStack network have a one-to-one relationship: one dnsmasq instance can serve multiple subnets within one network.

Looking at the dnsmasq service on our system:

 

The --dhcp-hostsfile option in the red box points to the file recording the allocated VM IPs and their corresponding MAC addresses; its contents are as follows:

 

The --interface option in the red box names the interface that provides the DHCP service and listens for requests from virtual machines.

We can see its interface name is ns-de2b2a68-ac, yet ip addr does not appear to show any such device. That is because the device lives in another namespace, not the host's namespace (the root namespace). The bridge is in the root namespace while the dhcp device is in a different one, so how are they connected? The answer is a veth pair: yet another pair of interfaces, one in the dhcp namespace and one in the root namespace, linking the two together.

All namespaces can be listed with the ip netns list command (the one in the red box is the namespace holding our dhcp interface):

 

List the network devices in that namespace:

ip netns exec qdhcp-dddab6ab-4799-4ba6-bb44-a94b531de34e ip addr:

 

How, then, do we find which device is the other veth end of ns-de2b2a68-ac? Like this:

In the red box above, the device name is followed by @if28; 28 is the interface index of its veth peer (the leading "2:" is this device's own interface index). We can find the peer with the following command:

ip link show | grep 28

 

The corresponding device name is therefore tapde2b2a68-ac.
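The own-index/name/@if&lt;peer-index&gt; convention can be parsed mechanically; a small sketch using a canned ip link line for illustration:

```python
import re

# Parse one "ip link show" line such as
#   "2: ns-de2b2a68-ac@if28: <BROADCAST,MULTICAST,UP> mtu 1500 ..."
# into (own ifindex, device name, peer ifindex or None).
LINK_RE = re.compile(r"^(\d+):\s+([^@:]+)(?:@if(\d+))?:")

def parse_link(line: str):
    m = LINK_RE.match(line)
    if not m:
        return None
    idx, name, peer = m.groups()
    return int(idx), name, int(peer) if peer else None

line = "2: ns-de2b2a68-ac@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"
print(parse_link(line))  # (2, 'ns-de2b2a68-ac', 28)
```

Feeding the root namespace's ip link output through such a parser lets you match every veth end to its peer's index automatically.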

And the other end of that device can be seen to be attached to the bridge:

 

Question: why go to the trouble of an extra namespace?

The answer is that this allows subnets of different networks to overlap: each dhcp service's namespace has an independent network stack that is unaffected by the others, because each dhcp service has its own route table, firewall rules, network interface devices, and so on.

 

An example of how a virtual machine obtains its IP and MAC:

(1) When a NIC is allocated for a virtual machine, a port is created for it; the port contains the NIC's MAC and IP (recorded in the database), and this information is also synchronized into dnsmasq's hosts file

(2) When the virtual machine boots, it sends a DHCPDISCOVER broadcast; tapde2b2a68-ac on the bridge receives it and passes it through the veth pair to the ns-de2b2a68-ac device in the dhcp namespace

(3) The dhcp service, listening on ns-de2b2a68-ac, sees the request packet, looks up its hosts file, and returns the corresponding network information, such as the IP and MAC, in a DHCPOFFER message

(4) The virtual machine receives the DHCPOFFER and replies with a DHCPREQUEST indicating that it accepts the offer

(5) On receiving the DHCPREQUEST, dnsmasq sends a DHCPACK confirmation, and the allocation is complete

 

Topology diagram:

 

4 How the virtual router works

We know that two different subnets cannot be connected at layer 2; layer 3 must be involved, i.e. packets must be routed, a function performed by a router device. This can be done with a physical router or a virtual one.

Virtual routing is implemented by the L3 agent service, which works by defining forwarding rules with iptables.

The L3 agent service runs on the network node. Similar in spirit to the dhcp service, a namespace is created for each router so that subnets of different networks can overlap, and veth pairs are created to connect the namespaces. Since a router generally connects two subnets, two gateway devices are created inside the router's namespace, and hence two veth pairs. Routing rules are then set up in the router's namespace so that packets reaching the two gateway devices can be routed to each other, and the traffic flows.

 

For example:

The two subnets are 172.16.100.0/24 and 172.16.101.0/24, with gateways 172.16.100.1 and 172.16.101.1.

Look at the devices inside the router:

ip netns exec qrouter-1f9c082f-9d97-471f-a41b-afe90fd62c0d ip a

 

You can see that both subnets' gateway IPs live in this network namespace.

Look at the routing rules inside the router:

ip netns exec qrouter-1f9c082f-9d97-471f-a41b-afe90fd62c0d route -n

With the two routing rules marked by the red lines above, packets can be routed back and forth between the two subnets.

 

Topology diagram:

 

Since floating IPs are also tied to the router that a private network's subnet connects to, let us extend this with how floating IPs work:

A floating IP actually resides in the router's namespace. Floating IPs correspond one-to-one to private-network IPs. Suppose virtual machine A's private IP is 172.16.100.12 and the floating IP assigned to it is 192.168.100.236; then NAT translation rules for these two IPs are set up in the router's namespace, as follows:

ip netns exec qrouter-1f9c082f-9d97-471f-a41b-afe90fd62c0d iptables -t nat -S

 

We can see that SNAT and DNAT rules are configured here; their effect is:

(1) When the router receives a packet from the external network whose destination address is the floating IP 192.168.100.236, it rewrites the destination address to the virtual machine's IP 172.16.100.12, so packets from outside can reach the VM

(2) When the virtual machine sends data to the external network, the source address 172.16.100.12 is rewritten to the floating IP 192.168.100.236

So the floating IP is never attached inside the virtual machine; it exists only as NAT rules in the virtual router's namespace.
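A sketch of the kind of rules involved; the chains and flags here are simplified for illustration and are not the exact rules Neutron generates (Neutron installs them in its own chains, such as neutron-l3-agent-PREROUTING, inside the qrouter namespace):

```python
# Build the DNAT/SNAT iptables rules that implement a floating IP.
# Chain names and rule shape are illustrative, not Neutron's literal output.
def floating_ip_rules(floating_ip: str, fixed_ip: str) -> list[str]:
    return [
        # inbound: rewrite the floating IP to the VM's private IP
        f"iptables -t nat -A PREROUTING -d {floating_ip}/32 "
        f"-j DNAT --to-destination {fixed_ip}",
        # outbound: rewrite the VM's private IP to the floating IP
        f"iptables -t nat -A POSTROUTING -s {fixed_ip}/32 "
        f"-j SNAT --to-source {floating_ip}",
    ]

for rule in floating_ip_rules("192.168.100.236", "172.16.100.12"):
    print(rule)
```

Running the pair of emitted commands (as root, inside a router namespace) reproduces the one-to-one floating-to-fixed mapping described above.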

 

Questions:

(1) Why is the gateway IP configured for a network unusable before a router is attached?

Because that metadata is only recorded in the database and has not actually been configured on any router; once a router is created, the gateway address is configured on the router's interface.

 

Origin www.cnblogs.com/luohaixian/p/12343851.html