l3fwd test on the X710-DA4

The X710-DA4 has four 10GE ports.

Two DELL 730 servers (PCIe 3.0 capable), each fitted with one X710-DA4 NIC.

One machine generates packets; the other runs l3fwd.

ubuntu-1 (pktgen-dpdk, Ubuntu 17.10)

68:05:CA:32:02:F0

68:05:CA:32:02:F1

68:05:CA:32:02:F2

68:05:CA:32:02:F3

ubuntu-2 (l3fwd, DPDK 17.11, Ubuntu 17.10)

68:05:CA:32:03:18

68:05:CA:32:03:19

68:05:CA:32:03:1A

68:05:CA:32:03:1B

Port-to-port connections:

68:05:CA:32:02:F0----68:05:CA:32:03:18

68:05:CA:32:02:F1----68:05:CA:32:03:19

68:05:CA:32:02:F2----68:05:CA:32:03:1A

68:05:CA:32:02:F3----68:05:CA:32:03:1B

Driver and firmware versions

root@ubuntu-2:~# ethtool  -i enp5s0f0
driver: i40e
version: 2.4.6
firmware-version: 6.01 0x80003494 1.1747.0
expansion-rom-version:
bus-info: 0000:05:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

Find the CPU socket (NUMA node) the NIC is attached to (or use lstopo-no-graphics; see the end of this post):

root@ubuntu-2:~# cat /sys/class/net/enp5s0f0/device/numa_node
0
root@ubuntu-2:~# cat /sys/class/net/enp5s0f1/device/numa_node
0
root@ubuntu-2:~# cat /sys/class/net/enp5s0f2/device/numa_node
0
root@ubuntu-2:~# cat /sys/class/net/enp5s0f3/device/numa_node
0
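The same query for all four ports can be done in one loop (a minimal sketch reusing the interface names above):

for p in enp5s0f0 enp5s0f1 enp5s0f2 enp5s0f3; do
    echo "$p -> NUMA node $(cat /sys/class/net/$p/device/numa_node)"
done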

Check how physical cores and logical CPUs (lcores) map to CPU sockets:
[screenshot: core / lcore / socket mapping]
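If the screenshot is unavailable, the same mapping can be pulled with lscpu (a minimal sketch; assumes util-linux's lscpu is installed):

lscpu -p=CPU,CORE,SOCKET | grep -v '^#'   # prints lcpu,core,socket triples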

root@ubuntu-2:~# cat start-l3fwd.sh
export DPDK_DIR=/root/dpdk/dpdk-17.11
export DPDK_TARGET=x86_64-native-linuxapp-gcc
export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
# mount hugepages and load the igb_uio kernel module
mkdir -p /dev/hugepages
mount -t hugetlbfs hugetlbfs /dev/hugepages
modprobe uio
insmod $DPDK_BUILD/kmod/igb_uio.ko
export RTE_SDK=$DPDK_DIR
export RTE_TARGET=$DPDK_TARGET
cd /root/dpdk/

X710-DA4: pinning the lcores to the socket the NIC is attached to gives high performance; pinning them to the other socket gives low performance. The two l3fwd command lines below were compared: first lcores 3,5,7,9,15,17,19,21 (low performance), then lcores 2,4,6,8,14,16,18,20 (high performance).

$DPDK_DIR/usertools/dpdk-devbind.py -b igb_uio 0000:05:00.0 0000:05:00.1 0000:05:00.2 0000:05:00.3

./l3fwd -l 3,5,7,9,15,17,19,21 -n 4 --proc-type auto --socket-mem 2048,2048  --huge-dir /dev/hugepages -- -p 0xf  -L --config="(0,0,3),(0,1,15),(1,0,5),(1,1,17),(2,0,7),(2,1,19),(3,0,9),(3,1,21)"

#./l3fwd -l 2,4,6,8,14,16,18,20 -n 4 --proc-type auto --socket-mem 2048,2048  --huge-dir /dev/hugepages -- -p 0xf  -L --config="(0,0,2),(0,1,14),(1,0,4),(1,1,16),(2,0,6),(2,1,18),(3,0,8),(3,1,20)"
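Each --config tuple above is (port, rx-queue, lcore), so the second (commented-out) command keeps every RX lcore on the NIC's socket. A minimal sketch, not part of the original script, for listing which lcores are local to the NIC (assuming the first X710 port is at 0000:05:00.0):

node=$(cat /sys/bus/pci/devices/0000:05:00.0/numa_node)   # NUMA node of the NIC
cat /sys/devices/system/node/node${node}/cpulist          # lcores local to that node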

Run l3fwd, paying attention to the routes and destination MACs it installs:

root@ubuntu-2:~# sh start-l3fwd.sh   
EAL: Detected 24 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:  probe driver: 8086:1572 net_i40e
PMD: Global register is changed during enable FDIR flexible payload
PMD: Global register is changed during support QinQ parser
PMD: Global register is changed during configure hash input set
PMD: Global register is changed during configure fdir mask
PMD: Global register is changed during configure hash mask
PMD: Global register is changed during support QinQ cloud filter
PMD: Global register is changed during support TPID configuration
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:  probe driver: 8086:1572 net_i40e
PMD: Global register is changed during enable FDIR flexible payload
PMD: Global register is changed during support QinQ parser
PMD: Global register is changed during configure hash input set
PMD: Global register is changed during configure fdir mask
PMD: Global register is changed during configure hash mask
PMD: Global register is changed during support QinQ cloud filter
PMD: Global register is changed during support TPID configuration
EAL: PCI device 0000:05:00.2 on NUMA socket 0
EAL:  probe driver: 8086:1572 net_i40e
PMD: Global register is changed during enable FDIR flexible payload
PMD: Global register is changed during support QinQ parser
PMD: Global register is changed during configure hash input set
PMD: Global register is changed during configure fdir mask
PMD: Global register is changed during configure hash mask
PMD: Global register is changed during support QinQ cloud filter
PMD: Global register is changed during support TPID configuration
EAL: PCI device 0000:05:00.3 on NUMA socket 0
EAL:  probe driver: 8086:1572 net_i40e
PMD: Global register is changed during enable FDIR flexible payload
PMD: Global register is changed during support QinQ parser
PMD: Global register is changed during configure hash input set
PMD: Global register is changed during configure fdir mask
PMD: Global register is changed during configure hash mask
PMD: Global register is changed during support QinQ cloud filter
PMD: Global register is changed during support TPID configuration
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL:  probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:07:00.1 on NUMA socket 0
EAL:  probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL:  probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:84:00.1 on NUMA socket 1
EAL:  probe driver: 8086:10fb net_ixgbe
L3FWD: Longest-prefix match selected
Initializing port 0 ... Creating queues: nb_rxq=2 nb_txq=8...Address:68:05:CA:32:03:18, Destination:02:00:00:00:00:00, Allocated mbuf pool on socket 1
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route 0x03010100 / 24 (2)
LPM: Adding route 0x04010100 / 24 (3)
LPM: Adding route IPV6 / 48 (0)
LPM: Adding route IPV6 / 48 (1)
LPM: Adding route IPV6 / 48 (2)
LPM: Adding route IPV6 / 48 (3)
txq=3,0,1 txq=5,1,1 txq=7,2,1 txq=9,3,1 txq=15,4,1 txq=17,5,1 txq=19,6,1 txq=21,7,1
Initializing port 1 ... Creating queues: nb_rxq=2 nb_txq=8...Address:68:05:CA:32:03:19, Destination:02:00:00:00:00:01, txq=3,0,1 txq=5,1,1 txq=7,2,1 txq=9,3,1 txq=15,4,1 txq=17,5,1 txq=19,6,1 txq=21,7,1
Initializing port 2 ... Creating queues: nb_rxq=2 nb_txq=8... Address:68:05:CA:32:03:1A, Destination:02:00:00:00:00:02, txq=3,0,1 txq=5,1,1 txq=7,2,1 txq=9,3,1 txq=15,4,1 txq=17,5,1 txq=19,6,1 txq=21,7,1
Initializing port 3 ... Creating queues: nb_rxq=2 nb_txq=8...Address:68:05:CA:32:03:1B, Destination:02:00:00:00:00:03, txq=3,0,1 txq=5,1,1 txq=7,2,1 txq=9,3,1 txq=15,4,1 txq=17,5,1 txq=19,6,1 txq=21,7,1
Initializing rx queues on lcore 3 ... rxq=0,0,1
Initializing rx queues on lcore 5 ... rxq=1,0,1
Initializing rx queues on lcore 7 ... rxq=2,0,1
Initializing rx queues on lcore 9 ... rxq=3,0,1
Initializing rx queues on lcore 15 ... rxq=0,1,1
Initializing rx queues on lcore 17 ... rxq=1,1,1
Initializing rx queues on lcore 19 ... rxq=2,1,1
Initializing rx queues on lcore 21 ... rxq=3,1,1
Checking link statusdone
Port0 Link Up. Speed 10000 Mbps -full-duplex
Port1 Link Up. Speed 10000 Mbps -full-duplex
Port2 Link Up. Speed 10000 Mbps -full-duplex
Port3 Link Up. Speed 10000 Mbps -full-duplex
L3FWD: entering main loop on lcore 5
L3FWD:  -- lcoreid=5 portid=1 rxqueueid=0
L3FWD: entering main loop on lcore 7
L3FWD:  -- lcoreid=7 portid=2 rxqueueid=0
L3FWD: entering main loop on lcore 15
L3FWD: entering main loop on lcore 21
L3FWD:  -- lcoreid=21 portid=3 rxqueueid=1
L3FWD: entering main loop on lcore 9
L3FWD:  -- lcoreid=9 portid=3 rxqueueid=0
L3FWD: entering main loop on lcore 19
L3FWD:  -- lcoreid=19 portid=2 rxqueueid=1
L3FWD:  -- lcoreid=15 portid=0 rxqueueid=1
L3FWD: entering main loop on lcore 17
L3FWD:  -- lcoreid=17 portid=1 rxqueueid=1
L3FWD: entering main loop on lcore 3
L3FWD:  -- lcoreid=3 portid=0 rxqueueid=0

LPM: Adding route 0x01010100 / 24 (0) 
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route 0x03010100 / 24 (2)
LPM: Adding route 0x04010100 / 24 (3)

LPM longest-prefix-match mode is in use:
Packets to the 1.1.1.0/24 network leave via port 0 with destination MAC 02:00:00:00:00:00
Packets to the 2.1.1.0/24 network leave via port 1 with destination MAC 02:00:00:00:00:01
Packets to the 3.1.1.0/24 network leave via port 2 with destination MAC 02:00:00:00:00:02
Packets to the 4.1.1.0/24 network leave via port 3 with destination MAC 02:00:00:00:00:03

Packet generator side

root@ubuntu-1:~# cat start-pktgen-dpdk-x710.sh
export DPDK_DIR=/root/dpdk/dpdk-17.11
export DPDK_TARGET=x86_64-native-linuxapp-gcc
export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
mkdir -p /dev/hugepages
mount -t hugetlbfs hugetlbfs /dev/hugepages
modprobe uio
insmod $DPDK_BUILD/kmod/igb_uio.ko
export RTE_SDK=$DPDK_DIR
export RTE_TARGET=$DPDK_TARGET
cd /root/pktgen-3.4.9/
$DPDK_DIR/usertools/dpdk-devbind.py -b igb_uio 0000:05:00.0 0000:05:00.1 0000:05:00.2 0000:05:00.3
./pktgen  -l 0,2-9  -n 4 --proc-type auto --socket-mem 2048,2048  --huge-dir /dev/hugepages  -- -P -T -m '[2:3].0,[4:5].1,[6:7].2,[8:9].3' 
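Each entry in pktgen's -m option has the form [rx-core:tx-core].port, so [2:3].0 puts RX for port 0 on lcore 2 and TX on lcore 3. Before launching, it is worth confirming the binding took effect (a quick check, not part of the original script):

$DPDK_DIR/usertools/dpdk-devbind.py --status   # the four 0000:05:00.x ports should be listed under the DPDK-compatible driver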

Prepare to send traffic:

root@ubuntu-1:~# sh start-pktgen-dpdk-x710.sh

[screenshot: pktgen startup screen]
Traffic configuration

set 0 src ip 1.1.1.241/24
set 0 src mac 02:00:00:00:00:00
set 0 dst ip  2.1.1.242
set 0 dst mac 68:05:CA:32:03:18
set 1 src ip  2.1.1.242/24
set 1 src mac 02:00:00:00:00:01
set 1 dst ip  1.1.1.241
set 1 dst mac 68:05:CA:32:03:19
set 2 src ip  3.1.1.241/24
set 2 src mac 02:00:00:00:00:02
set 2 dst ip  4.1.1.242
set 2 dst mac 68:05:CA:32:03:1A
set 3 src ip  4.1.1.242/24
set 3 src mac 02:00:00:00:00:03
set 3 dst ip  3.1.1.241
set 3 dst mac 68:05:CA:32:03:1B

# Start transmitting on all four ports (traffic flows in both directions on every link)

start 0
start 1
start 2
start 3
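These set/start commands can also be placed in a plain text file and loaded at startup via pktgen's -f option (assuming this pktgen build supports command files; the file name x710.pkt is hypothetical):

./pktgen -l 0,2-9 -n 4 --proc-type auto --socket-mem 2048,2048 --huge-dir /dev/hugepages -- -P -T -m '[2:3].0,[4:5].1,[6:7].2,[8:9].3' -f x710.pkt   # x710.pkt holds the set/start lines above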

For the routing-table design, refer to Intel's l3fwd sample documentation.

Low performance (lcores 3,5,7,9,15,17,19,21): 10.55 Mpps
[screenshot: pktgen rates for the low-performance run]
High performance (lcores 2,4,6,8,14,16,18,20): 37.3 Mpps
[screenshot: pktgen rates for the high-performance run]

Conclusion:

For l3fwd forwarding, the lcores must be pinned to the socket the NIC is attached to; pinning them to the remote socket causes a sharp drop in throughput (10.55 Mpps vs 37.3 Mpps here).

lstopo shows which PCI devices hang off which socket:

apt-get install hwloc
root@ubuntu-1:~# lstopo-no-graphics
Machine (63GB total)
  NUMANode L#0 (P#0 31GB)
    Package L#0 + L3 L#0 (15MB)
      L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
        PU L#0 (P#0)
        PU L#1 (P#12)
      L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
        PU L#2 (P#2)
        PU L#3 (P#14)
      L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
        PU L#4 (P#4)
        PU L#5 (P#16)
      L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
        PU L#6 (P#6)
        PU L#7 (P#18)
      L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
        PU L#8 (P#8)
        PU L#9 (P#20)
      L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
        PU L#10 (P#10)
        PU L#11 (P#22)
    HostBridge L#0
      PCIBridge
        PCI 1000:005d
          Block(Disk) L#0 "sda"
          Block(Disk) L#1 "sdb"
      PCIBridge
        PCI 8086:1572
          Net L#2 "enp5s0f0"
        PCI 8086:1572
          Net L#3 "enp5s0f1"
        PCI 8086:1572
          Net L#4 "enp5s0f2"
        PCI 8086:1572
          Net L#5 "enp5s0f3"
      PCIBridge
        PCI 14e4:168a
          Net L#6 "eno1"
        PCI 14e4:168a
          Net L#7 "eno2"
        PCI 14e4:168a
          Net L#8 "eno3"
        PCI 14e4:168a
          Net L#9 "eno4"
      PCIBridge
        PCI 8086:1572
          Net L#10 "enp7s0f0"
        PCI 8086:1572
          Net L#11 "enp7s0f1"
      PCI 8086:8d62
      PCIBridge
        PCIBridge
          PCIBridge
            PCIBridge
              PCI 102b:0534
                GPU L#12 "card0"
                GPU L#13 "controlD64"
      PCI 8086:8d02

  NUMANode L#1 (P#1 31GB)
    Package L#1 + L3 L#1 (15MB)
      L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
        PU L#12 (P#1)
        PU L#13 (P#13)
      L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
        PU L#14 (P#3)
        PU L#15 (P#15)
      L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8
        PU L#16 (P#5)
        PU L#17 (P#17)
      L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9
        PU L#18 (P#7)
        PU L#19 (P#19)
      L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10
        PU L#20 (P#9)
        PU L#21 (P#21)
      L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11
        PU L#22 (P#11)
        PU L#23 (P#23)
    HostBridge L#9
      PCIBridge
        PCI 8086:10fb
          Net L#14 "enp132s0f0"
        PCI 8086:10fb
          Net L#15 "enp132s0f1"
        4 x { PCI 8086:10ed }

Source: blog.csdn.net/qq_45632433/article/details/103042937