[Repost] What RDMA hardware is supported in Red Hat Enterprise Linux?

Original link: https://access.redhat.com/solutions/22188

What RDMA hardware is supported in Red Hat Enterprise Linux?

SOLUTION VERIFIED - Updated June 2, 2016 at 21:58

Environment

  • RHEL 4.8 or later
  • RHEL 5.4 or later
  • RHEL 6.0 or later
  • RHEL 7.0 or later

Issue

  • What InfiniBand/iWARP/RoCE/IBoE/OPA hardware is supported in Red Hat Enterprise Linux (RHEL)?

Resolution

InfiniBand, iWARP, RoCE, and OPA are all different types of RDMA-capable hardware, and as such are collectively referred to simply as RDMA hardware.

InfiniBand hardware implements the InfiniBand link layer and provides access to the InfiniBand Verbs API as the means of communication. The InfiniBand Verbs API requires a lossless link layer and the InfiniBand link layer was designed to provide that guarantee.
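As a hedged illustration (not part of the original solution) of what "access to the Verbs API" means in practice, the short C sketch below uses libibverbs, the user space side of the Verbs stack, to list whatever RDMA devices the kernel drivers have registered. The file and program names are made up for the example.

    /* Minimal sketch (not from the original article): enumerate the RDMA
     * devices registered with the kernel through the Verbs API (libibverbs).
     * Build (assuming libibverbs-devel is installed):
     *   gcc list_rdma.c -o list_rdma -libverbs
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0, i;
        struct ibv_device **devs = ibv_get_device_list(&num);

        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (i = 0; i < num; i++) {
            /* The GUID is returned in network byte order; it is printed raw here. */
            printf("%-12s GUID 0x%016llx\n",
                   ibv_get_device_name(devs[i]),
                   (unsigned long long)ibv_get_device_guid(devs[i]));
        }

        ibv_free_device_list(devs);
        return 0;
    }

On a host with, for example, a ConnectX-3 adapter this would typically list a device named something like mlx4_0.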

iWARP hardware is regular Ethernet hardware at its core. iWARP hardware uses TCP to provide a lossless transport over which a subset of the InfiniBand Verbs API is implemented. This makes iWARP routable over long distances, but also makes it susceptible to unpredictable latency spikes if the underlying Ethernet transport suffers any lost packets and the TCP retransmit mechanism is forced to come into action.

RDMA over Converged Ethernet (RoCE) (also known as InfiniBand over Ethernet (IBoE) in certain upstream kernels) is regular Ethernet hardware at its core as well. However, it uses Ethernet extensions that are part of the Data Center Bridging (DCB) extensions, specifically Per Priority Flow Control (PFC), or global pause frame support (lower performance than PFC, but easier to set up and configure both on the hosts and the switches) to provide lossless communication. This means that RoCE is not susceptible to the same TCP retransmit issues as iWARP, but it also means that it can only be routed as far as the PFC-capable switches are deployed and, as a general rule, cannot be routed over wide area links.

Omni-Path Architecture (OPA) is a cluster fabric from Intel that implements an InfiniBand Verbs-like API in hardware, hooks into the main InfiniBand stack in the kernel, and emulates an InfiniBand Verbs device to the kernel. In most respects OPA can be treated as identical to InfiniBand, but the link layers are different and OPA and IB networks cannot be interconnected.
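Since the practical difference between these hardware families shows up as the link layer a device presents, a short sketch (again, not part of the original solution) can make that visible from user space: recent libibverbs releases expose a link_layer field in the port attributes, letting an application tell a native InfiniBand device (or OPA, which emulates InfiniBand Verbs) from an Ethernet-based (iWARP/RoCE/IBoE) one. The sketch assumes port 1 is the port of interest.

    /* Sketch: print the link layer reported by port 1 of each RDMA device.
     * Assumes a libibverbs new enough to expose ibv_port_attr.link_layer.
     * Build: gcc link_layer.c -o link_layer -libverbs
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0, i;
        struct ibv_device **devs = ibv_get_device_list(&num);

        if (!devs)
            return 1;

        for (i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            struct ibv_port_attr port;

            if (!ctx)
                continue;
            if (ibv_query_port(ctx, 1, &port) == 0) {
                /* Older kernels report IBV_LINK_LAYER_UNSPECIFIED, which in
                 * practice meant a native InfiniBand link. */
                const char *ll = (port.link_layer == IBV_LINK_LAYER_ETHERNET)
                                     ? "Ethernet (iWARP/RoCE/IBoE)"
                                     : "InfiniBand (or OPA emulating it)";
                printf("%s: %s\n", ibv_get_device_name(devs[i]), ll);
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }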

All RDMA hardware requires both a kernel driver and a user space driver in order to operate. The following list gives first the kernel driver name(s) and then the user space library name that enables the specific hardware in question (a sketch after the list shows how this pairing looks on a running system). Not all of these drivers are present in every version of Red Hat Enterprise Linux that supports RDMA, as some of them are newer and do not exist in older releases:

Kernel module(s), user space package(s), and the hardware driven:

  • ib_mthca (kernel), libmthca (user space): InfiniBand only - All Mellanox-based hardware prior to the original ConnectX DDR line of hardware.
  • mlx4_core, mlx4_ib, mlx4_en (kernel), libmlx4 (user space): InfiniBand and RoCE - All Mellanox ConnectX-2, ConnectX-3, and ConnectX-3 Pro InfiniBand, Ethernet, and combination adapters. This includes ConnectX adapters rebranded by various hardware vendors (HP, Dell, Cisco, and Topspin, to name a few).
  • mlx5_core, mlx5_ib, mlx5_en (kernel), libmlx5 (user space): InfiniBand and RoCE - All Mellanox-branded and hardware vendor rebranded Connect-IB, ConnectX-4, and ConnectX-4 Lx hardware (only supported in RHEL 6.6 and RHEL 7.0 or later).
  • be2net, ocrdma (kernel), libocrdma (user space): RoCE/IBoE - Emulex RoCE adapters (only supported in RHEL 6.6 and RHEL 7.1 or later).
  • ib_ipath, ib_qib (kernel), libipathverbs, infinipath-psm (user space): InfiniBand only - Intel (formerly QLogic) InfiniPath adapters (x86_64 architecture only).
  • hfi1 (kernel), libhfi1verbs, libpsm2 (user space): Omni-Path Architecture only - Intel's first OPA adapter (x86_64 architecture only).
  • cxgb3, iw_cxgb3 (kernel), libcxgb3 (user space): iWARP only - Chelsio iWARP adapters based on T3 hardware.
  • cxgb4, iw_cxgb4 (kernel), libcxgb4 (user space): iWARP only - Chelsio iWARP adapters based on T4 and T5 hardware.
  • iw_nes (kernel), libnes (user space): iWARP only - NetEffect iWARP adapters.
  • i40e, i40iw (kernel), libi40iw (user space): iWARP only - Intel X722 adapter iWARP support (very new, only in RHEL 7.3 or later).
  • ib_ehca (kernel), libehca (user space): InfiniBand only - IBM Galaxy and Galaxy2 InfiniBand adapters (IBM PowerPC architecture only). Note: support for this hardware was disabled in RHEL 4.9; users with this hardware will need to stay on the RHEL 4.8 kernel for it to keep working. There is no such issue in RHEL 5 or later; all of those releases have support for this hardware enabled.
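As a hedged sketch (not from the original solution) of how the kernel/user space pairing in the list above looks on a running system, the program below resolves the sysfs driver symlink behind each Verbs device, which names the kernel module bound to the underlying device. It assumes the conventional /sys/class/infiniband layout.

    /* Sketch: report which kernel driver is bound to the device behind each
     * Verbs device, assuming the conventional /sys/class/infiniband layout.
     * Build: gcc rdma_driver.c -o rdma_driver -libverbs
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <limits.h>
    #include <libgen.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0, i;
        struct ibv_device **devs = ibv_get_device_list(&num);

        if (!devs)
            return 1;

        for (i = 0; i < num; i++) {
            char link[PATH_MAX], target[PATH_MAX];
            ssize_t n;

            snprintf(link, sizeof(link), "/sys/class/infiniband/%s/device/driver",
                     ibv_get_device_name(devs[i]));
            n = readlink(link, target, sizeof(target) - 1);
            if (n < 0)
                continue;
            target[n] = '\0';
            /* basename() of the symlink target is the driver name, e.g. mlx4_core. */
            printf("%s is driven by %s\n",
                   ibv_get_device_name(devs[i]), basename(target));
        }

        ibv_free_device_list(devs);
        return 0;
    }

On a ConnectX-3 host, for example, this would typically report mlx4_core, matching the mlx4 entry in the list above.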

NOTE: these user space libraries are very closely tied to the kernel hardware driver module, and as such they must be used with the kernel series they were built for. This means that if you are running a RHEL 5.1 system with a RHEL 5.1 kernel, and the RHEL 5.2 libcxgb3 has a bug fix you need, upgrading the libcxgb3 library to the RHEL 5.2 version without also upgrading to the RHEL 5.2 kernel is *not* supported and not guaranteed to work at all. The kernel portion of the InfiniBand stack and the entire user space portion of the InfiniBand stack are subject to change from point release to point release and are not guaranteed to work properly when components from different releases are mixed and matched.

Reposted from blog.csdn.net/msdnchina/article/details/83540262