Multi-NIC bonding technology explained in plain language

In storage systems, NIC bonding is often needed to raise the throughput available to downstream devices. For example, if the back-end storage can deliver a peak throughput of 300 MB/s, a single 1 Gbps NIC (roughly 125 MB/s) is nowhere near enough; three 1 Gbps NICs transmitting concurrently are required to reach that peak. So how is multi-NIC bonding actually implemented, from an engineering point of view? This article takes an in-depth look at NIC bonding as a way to increase throughput.

NIC bonding can increase network throughput, and it can also improve network availability. The high-availability use of bonding is not discussed here. From a software perspective, bonding only requires adding a Bond driver: this virtual NIC driver hides the multiple physical NICs, so the TCP/IP layer sees only a single Bond interface. The Bond driver load-balances network traffic by steering transmit requests to different NICs, improving overall network performance. The layered software architecture of NIC bonding is shown below:
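To make that layering concrete, here is a minimal user-space sketch of the idea: upper layers hand a frame to a single Bond device, which forwards it to one of its slave NICs in turn. This is not kernel code, and all struct and function names are invented for illustration.

```c
#include <stdio.h>

#define MAX_SLAVES 4

/* Hypothetical stand-in for a real NIC driver object. */
struct nic {
    const char *name;
};

/* The bond device hides its slaves behind one transmit entry point. */
struct bond_dev {
    struct nic slaves[MAX_SLAVES];
    int num_slaves;
    int next;               /* round-robin cursor */
};

/* Upper layers (TCP/IP) only ever call this; they never see the slaves. */
static void bond_xmit(struct bond_dev *bond, const char *frame)
{
    struct nic *slave = &bond->slaves[bond->next];
    bond->next = (bond->next + 1) % bond->num_slaves;
    printf("frame \"%s\" sent via %s\n", frame, slave->name);
}

int main(void)
{
    struct bond_dev bond = {
        .slaves = { { "eth0" }, { "eth1" }, { "eth2" } },
        .num_slaves = 3,
        .next = 0,
    };
    bond_xmit(&bond, "payload-1");
    bond_xmit(&bond, "payload-2");
    bond_xmit(&bond, "payload-3");
    bond_xmit(&bond, "payload-4");   /* wraps back to eth0 */
    return 0;
}
```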

Think about it: what is the technical difficulty in bonding multiple NICs? A network switch forwards frames to ports based on physical (MAC) addresses. An ordinary switch that does not support the IEEE 802.3ad protocol can only connect NICs with different physical addresses; otherwise it will not work correctly. Each NIC carries an IP address, and each IP address is bound to exactly one MAC address through the ARP protocol. From this analysis, without cooperation from the switch it is hard to realize the ideal multi-NIC bonding communication model one might imagine:
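The root of the difficulty is that a standard ARP cache binds each IP address to exactly one MAC address, as the toy lookup below illustrates (the structure and addresses are made up for illustration only):

```c
#include <stdio.h>
#include <string.h>

/* Toy ARP cache entry: one IP maps to exactly one MAC. */
struct arp_entry {
    const char *ip;
    const char *mac;
};

static struct arp_entry arp_cache[] = {
    { "192.168.1.10", "MAC-A" },   /* the server IP resolves to a single MAC */
    { "192.168.1.21", "MAC-X" },
};

static const char *arp_lookup(const char *ip)
{
    for (size_t i = 0; i < sizeof(arp_cache) / sizeof(arp_cache[0]); i++)
        if (strcmp(arp_cache[i].ip, ip) == 0)
            return arp_cache[i].mac;
    return NULL;
}

int main(void)
{
    /* Every sender that resolves the server IP gets the same, single MAC,
     * so all of its traffic lands on one physical NIC. */
    printf("192.168.1.10 -> %s\n", arp_lookup("192.168.1.10"));
    return 0;
}
```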

If the ARP protocol supported mapping one IP address to multiple MAC addresses, the communication model shown in the figure above could be implemented in the Bond driver layer. In a real IP network, achieving this binding effect requires switch support for the 802.3ad protocol: the switch distributes traffic across the bonded NICs in round-robin fashion, something a pure software approach cannot do on its own. Multi-NIC bonding based on the 802.3ad protocol is shown below:

With 802.3ad support, the MAC addresses of all NICs on the server side are configured to be identical, for example MAC-A, and the corresponding ports are aggregated on the switch side. After receiving data frames, the switch polls these ports in turn to deliver the frames to the Bond driver; when the Bond driver wants to send data, it likewise hands frames to the different NICs in round-robin fashion. This approach is transparent to the clients and fairly simple in overall architecture; its only drawback is that it requires switch support. In my opinion, for storage systems this is currently the best way to bond NICs.
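The switch side can be modeled roughly as follows, under the assumptions in the text: all server NICs share one MAC (MAC-A), the switch maps that MAC to an aggregated group of ports, and frames destined for MAC-A are spread across the group. Real 802.3ad switches typically hash on addresses rather than doing strict round-robin, so treat this only as a sketch of the idea, with invented names throughout.

```c
#include <stdio.h>
#include <string.h>

#define LAG_PORTS 3

/* Toy link-aggregation group: one learned MAC maps to several ports. */
struct lag {
    const char *mac;          /* the shared server MAC, e.g. "MAC-A" */
    int ports[LAG_PORTS];     /* switch ports wired to the bonded NICs */
    int next;                 /* round-robin cursor, as described in the text */
};

static int lag_pick_port(struct lag *g)
{
    int port = g->ports[g->next];
    g->next = (g->next + 1) % LAG_PORTS;
    return port;
}

static void switch_forward(struct lag *g, const char *dst_mac, int frame_id)
{
    if (strcmp(dst_mac, g->mac) == 0)
        printf("frame %d for %s -> port %d\n", frame_id, dst_mac, lag_pick_port(g));
}

int main(void)
{
    struct lag g = { "MAC-A", { 1, 2, 3 }, 0 };
    for (int i = 0; i < 5; i++)
        switch_forward(&g, "MAC-A", i);   /* ports 1, 2, 3, 1, 2 */
    return 0;
}
```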
Besides the switch-assisted solution, we can of course also use a pure software approach, although this kind of bonding is limited in some applications. A relatively simple idea is to bond only the transmit path (Linux bonding mode 5, balance-tlb). This bonding model is shown in the figure below:

In this model, a client sends an ARP request to the server, and the server replies with the MAC address of the Bond adapter. The Bond adapter driver picks the MAC address of one slave NIC as its own. For example, when Client-B sends an ARP request to the server, the server answers with the MAC-C address. As a result, all data frames sent by the clients are received by the NIC that owns MAC-C. In other words, every client learns that the server's MAC address is MAC-C; clients never see the MAC addresses of the server's other NICs. When the server sends data frames, the source address in each frame is the MAC address configured on the Bond adapter, and the destination address is the MAC address of the client's NIC. On transmit, the Bond driver distributes outgoing frames evenly across all NICs according to a certain algorithm; since the NIC drivers do not modify the contents of the Ethernet frames, the Bond driver can in this way fully utilize the physical bandwidth of all NICs. The method is simple to implement: the Bond driver does not need to intercept or modify any transmitted data, it only needs to choose a NIC for each transmission. Because the clients know the physical address of only one NIC, concurrent reception on multiple NICs is impossible; only concurrent transmission can be achieved. For applications that care only about transmit throughput, this solution is still very effective.
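A minimal sketch of this transmit-only balancing: every client sees one MAC, but each outgoing frame is handed to whichever slave NIC currently has the least queued bytes. The load-tracking scheme here is invented for illustration; the real balance-tlb mode weighs slave speed and utilization inside the kernel.

```c
#include <stdio.h>

#define NUM_SLAVES 3

/* Per-slave transmit load, in bytes queued (illustrative only). */
struct slave {
    const char *name;
    long tx_bytes;
};

static struct slave slaves[NUM_SLAVES] = {
    { "eth0", 0 }, { "eth1", 0 }, { "eth2", 0 },
};

/* Pick the least-loaded slave; the frame keeps the bond MAC as source. */
static struct slave *tlb_pick_slave(void)
{
    struct slave *best = &slaves[0];
    for (int i = 1; i < NUM_SLAVES; i++)
        if (slaves[i].tx_bytes < best->tx_bytes)
            best = &slaves[i];
    return best;
}

static void tlb_xmit(long frame_len)
{
    struct slave *s = tlb_pick_slave();
    s->tx_bytes += frame_len;
    printf("%ld-byte frame sent via %s\n", frame_len, s->name);
}

int main(void)
{
    tlb_xmit(1500);
    tlb_xmit(1500);
    tlb_xmit(500);
    tlb_xmit(1500);   /* goes to the slave that has sent only 500 bytes */
    return 0;
}
```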
In storage applications, bidirectional throughput must be considered, and the solution above has a receive bottleneck. So how can the server achieve efficient bidirectional data transfer? The key is to let the clients learn the MAC addresses of different NICs; in the solution above, all clients know only a single MAC address. To achieve bidirectional balancing, the Bond driver must intercept ARP packets and data frames and rewrite their MAC addresses. The data transfer model of this bonding mode (Linux bonding mode 6, balance-alb) is shown in the figure below:

When a client sends an ARP request to the server, the Bond driver intercepts the ARP reply returned to the client, picks the MAC address of one slave NIC, and rewrites the ARP reply with that MAC address. In this way the server assigns one NIC to each client, and different clients obtain the MAC addresses of different NICs, so clients end up sending data to the server through different NICs. With multiple clients, the server can therefore receive data concurrently on multiple NICs.
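The receive-balancing step can be sketched as follows: when an ARP reply is about to go back to a client, the Bond layer picks a slave for that client (round-robin here) and writes that slave's MAC into the reply, so each client keeps talking to "its" NIC. The table and names below are invented; the real balance-alb mode keeps a comparable client-to-slave table inside the bonding driver.

```c
#include <stdio.h>
#include <string.h>

#define NUM_SLAVES 3
#define MAX_CLIENTS 8

static const char *slave_mac[NUM_SLAVES] = { "MAC-B", "MAC-C", "MAC-D" };

/* Remember which slave MAC each client was told about. */
struct rx_binding {
    const char *client_ip;
    const char *mac;
};

static struct rx_binding table[MAX_CLIENTS];
static int num_bindings;
static int next_slave;

/* Called when an ARP reply to client_ip is intercepted: return the MAC
 * that should be written into the reply instead of the bond MAC. */
static const char *alb_arp_reply_mac(const char *client_ip)
{
    for (int i = 0; i < num_bindings; i++)
        if (strcmp(table[i].client_ip, client_ip) == 0)
            return table[i].mac;                 /* keep the old assignment */

    const char *mac = slave_mac[next_slave];
    next_slave = (next_slave + 1) % NUM_SLAVES;
    table[num_bindings++] = (struct rx_binding){ client_ip, mac };
    return mac;
}

int main(void)
{
    printf("Client-A learns %s\n", alb_arp_reply_mac("10.0.0.1"));
    printf("Client-B learns %s\n", alb_arp_reply_mac("10.0.0.2"));
    printf("Client-C learns %s\n", alb_arp_reply_mac("10.0.0.3"));
    printf("Client-A again  %s\n", alb_arp_reply_mac("10.0.0.1"));
    return 0;
}
```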

When the server transmits, the Bond driver needs to intercept the outgoing frames and rewrite the source MAC address in the Ethernet header (the bond_alb_xmit function implements the Bond transmit path), then choose a NIC via its balancing algorithm to send the data, achieving concurrent transmission (outgoing frames are not pinned to a single NIC; the model in the figure above is just one particular case). The biggest difference from the previous scheme is that here the Bond driver must intercept and process both ARP packets and ordinary data frames. From the clients' point of view, each client's data transfer can be bound to a separate NIC.
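On the transmit side, the essential trick is rewriting the 6-byte source MAC field in the Ethernet header before handing the frame to the chosen slave. The header layout below is standard Ethernet framing; everything else (the sample addresses, the chosen slave MAC) is made up for the sketch and is not taken from bond_alb_xmit itself.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Minimal Ethernet header: destination MAC, source MAC, EtherType. */
struct eth_hdr {
    uint8_t  dst[6];
    uint8_t  src[6];
    uint16_t ethertype;
};

/* Rewrite the source MAC to that of the slave chosen for this frame. */
static void rewrite_src_mac(struct eth_hdr *hdr, const uint8_t slave_mac[6])
{
    memcpy(hdr->src, slave_mac, 6);
}

int main(void)
{
    struct eth_hdr hdr = {
        .dst = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x10 },   /* client NIC */
        .src = { 0x02, 0x00, 0x00, 0x00, 0x00, 0xaa },   /* bond MAC   */
        .ethertype = 0x0800,                              /* IPv4       */
    };
    const uint8_t slave_mac[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0xcc };

    rewrite_src_mac(&hdr, slave_mac);
    printf("new source MAC ends in %02x\n", hdr.src[5]);  /* prints cc */
    return 0;
}
```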

This has been a brief summary of multi-NIC bonding techniques; they are well worth taking the time to digest.

 
