Setting up a DPDK environment and running the helloworld test

Setting up a DPDK test environment on a clean system:

The tests run inside a VMware virtual machine.

VM guest: Ubuntu 16.04

DPDK: dpdk-19.08.2.tar (downloaded from the official site)

1: Preparation before installing DPDK.

1: On the fresh VM: switch the apt sources, update, install gcc and g++, and make sure Python is installed.
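A hedged sketch of that prep (the mirror/source changes are left out; the package names are the usual Ubuntu 16.04 ones, shown as comments since they need root):

```shell
# One-time prep on the fresh VM:
#   sudo apt-get update
#   sudo apt-get install -y gcc g++ python
# dpdk-setup.sh relies on a compiler and python on PATH; a quick check:
for tool in gcc g++ python; do
    command -v "$tool" >/dev/null 2>&1 && echo "$tool: ok" || echo "$tool: missing"
done
```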

2: Add a network adapter, add processors (memory may need to be increased too), and set up multiple NICs.

Here I used two adapters:

The first adapter in bridged mode, serving as the DPDK multi-NIC test interface.

The second adapter in NAT mode, used for an Xshell session to run commands conveniently.

3: Reboot and check the network status, then add configuration so every NIC's IP comes up (add the stanzas in /etc/network/interfaces, as shown under step 4).

4: With the VM powered off, edit the VM's .vmx configuration file so it supports a multiqueue NIC.

If the file has no such field, simply add it. Typically, change the adapter type from e1000 to vmxnet3:

ethernet0.virtualDev = "vmxnet3"
ethernet0.wakeOnPcktRcv = "TRUE"

After rebooting, check again. You may find the NIC names have changed after this edit; if so, redo the earlier interface configuration with the new names:

hlp@ubuntu:~$ ifconfig
ens38     Link encap:Ethernet  HWaddr 00:0c:29:1d:08:5f  
          inet addr:192.168.11.158  Bcast:192.168.11.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe1d:85f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5752 errors:35 dropped:35 overruns:0 frame:0
          TX packets:2164 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:8571580 (8.5 MB)  TX bytes:125904 (125.9 KB)
          Interrupt:16 Base address:0x2000 

ens160    Link encap:Ethernet  HWaddr 00:0c:29:1d:08:55  
          inet addr:192.168.50.62  Bcast:192.168.50.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe1d:855/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:657 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:54652 (54.6 KB)  TX bytes:1554 (1.5 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:160 errors:0 dropped:0 overruns:0 frame:0
          TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:11840 (11.8 KB)  TX bytes:11840 (11.8 KB)
hlp@ubuntu:~$ cat /etc/network/interfaces
# leading boilerplate omitted
# originally ens33; the NIC names changed after editing the .vmx file, so update them here to match
auto ens160
iface ens160 inet dhcp

auto ens38
iface ens38 inet dhcp

Check the multiqueue NIC status (I configured 8 processor cores, hence 8 queues, 0-7):

hlp@ubuntu:~$ cat /proc/interrupts |grep ens
 16:          3   0    1169          0         12          4          0          0   IO-APIC  16-fasteoi   vmwgfx, snd_ens1371, ens38
 57:          3   766  0          0          0          0          0          0   PCI-MSI 1572864-edge      ens160-rxtx-0
 58:          0   1    0          0          0          0          0          0   PCI-MSI 1572865-edge      ens160-rxtx-1
 59:          0   0    0          0          0          0          0          0   PCI-MSI 1572866-edge      ens160-rxtx-2
 60:          0   0    0          0          0          4          0          0   PCI-MSI 1572867-edge      ens160-rxtx-3
 61:          0   0    0          0          1          3          0          0   PCI-MSI 1572868-edge      ens160-rxtx-4
 62:          0   0    0          0          0          0          0          0   PCI-MSI 1572869-edge      ens160-rxtx-5
 63:          0   0    0          0          0          0          0          0   PCI-MSI 1572870-edge      ens160-rxtx-6
 64:          0   0    0          0          0          1          0          0   PCI-MSI 1572871-edge      ens160-rxtx-7
 65:          0   0    0          0          0          0          0          0   PCI-MSI 1572872-edge      ens160-event-8

5: Test the multiqueue NIC with nginx:

Install nginx and start it as a smoke test (see the previous article; it starts and serves successfully on every NIC).

Configure nginx CPU affinity: bind worker processes to CPUs according to the interrupt numbers, then test.

#With 8 cores, the nginx config gets 8 worker processes and this CPU affinity:
worker_processes  8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
#/proc/interrupts shows that queues 0-7 of the multiqueue NIC map to IRQs 57-64
#Checking the bindings directly (cat /proc/irq/64/smp_affinity_list and so on) shows the IRQs are already spread evenly over the CPUs, so we can test right away
#Load test with the wrk tool:    ./wrk -c400 -d60s -t100 http://192.168.50.58/

#Comparing per-CPU interrupt counts against the bindings below shows they broadly match what /proc/interrupts reports
#The binding shown in smp_affinity_list can be changed by echoing a mask into the smp_affinity file; see the previous article
hlp@ubuntu:~$ cat /proc/interrupts |grep ens
	 CPU0 CPU1    CPU2       CPU3     CPU4   CPU5       CPU6       CPU7
 16:    3   0    10450         17      169     10          0         97   IO-APIC  16-fasteoi   vmwgfx, snd_ens1371, ens38
 57:    3   256600   0          0        0      0          0          0   PCI-MSI 1572864-edge      ens160-rxtx-0
 58:    0   1        0          0        0      0          0      57505   PCI-MSI 1572865-edge      ens160-rxtx-1
 59:    0   1        0          0        0      0      56837          0   PCI-MSI 1572866-edge      ens160-rxtx-2
 60:    0   0        0      20167        0  34379          0          0   PCI-MSI 1572867-edge      ens160-rxtx-3
 61:    0   0        0          0    49059      3          0          0   PCI-MSI 1572868-edge      ens160-rxtx-4
 62:    0   1        0      53576        0      0          0          0   PCI-MSI 1572869-edge      ens160-rxtx-5
 63:    0   1    53047          0        0      0          0          0   PCI-MSI 1572870-edge      ens160-rxtx-6
 64:    0   4047     0          0        0  53545          0          0   PCI-MSI 1572871-edge      ens160-rxtx-7
 65:    0   0        0          0        0     0           0          0   PCI-MSI 1572872-edge      ens160-event-8
root@ubuntu:/usr/local/nginx# cat /proc/irq/56/smp_affinity_list 
5
root@ubuntu:/usr/local/nginx# cat /proc/irq/57/smp_affinity_list 
1
root@ubuntu:/usr/local/nginx# cat /proc/irq/58/smp_affinity_list 
7
root@ubuntu:/usr/local/nginx# cat /proc/irq/59/smp_affinity_list 
6
root@ubuntu:/usr/local/nginx# cat /proc/irq/60/smp_affinity_list 
3
root@ubuntu:/usr/local/nginx# cat /proc/irq/61/smp_affinity_list 
4
root@ubuntu:/usr/local/nginx# cat /proc/irq/62/smp_affinity_list 
3
root@ubuntu:/usr/local/nginx# cat /proc/irq/63/smp_affinity_list 
2
root@ubuntu:/usr/local/nginx# cat /proc/irq/64/smp_affinity_list 
5
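The manual pinning mentioned above can be sketched like this (IRQ 57 and CPU 1 are taken from the transcript; the /proc writes need root on the target VM, so they are shown as comments):

```shell
# Build the hex affinity mask for one CPU index (CPU 1 -> mask "2").
CPU=1
MASK=$(printf '%x' $((1 << CPU)))
echo "$MASK"
# As root on the VM, either form applies the binding to IRQ 57:
#   echo "$MASK" > /proc/irq/57/smp_affinity
#   echo "$CPU"  > /proc/irq/57/smp_affinity_list
```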

Next, try adding hugepages and isolating CPUs. I did not manually bind the interrupts to cores this time; see the previous article for how.

#On the VM, append to the corresponding field in /etc/default/grub:
	default_hugepagesz=1G hugepagesz=2M hugepages=1024 isolcpus=0-2
#Run update-grub and reboot for it to take effect
#After the reboot, check the multiqueue NIC bindings:
root@ubuntu:/usr/local/nginx# cat /proc/irq/56/smp_affinity_list 
7
root@ubuntu:/usr/local/nginx# cat /proc/irq/57/smp_affinity_list 
6
root@ubuntu:/usr/local/nginx# cat /proc/irq/58/smp_affinity_list 
5
root@ubuntu:/usr/local/nginx# cat /proc/irq/59/smp_affinity_list 
4
root@ubuntu:/usr/local/nginx# cat /proc/irq/60/smp_affinity_list 
3
root@ubuntu:/usr/local/nginx# cat /proc/irq/61/smp_affinity_list    #this one seems to change: it was 6 before the test, 3 after
3
root@ubuntu:/usr/local/nginx# cat /proc/irq/62/smp_affinity_list 
6
root@ubuntu:/usr/local/nginx# cat /proc/irq/63/smp_affinity_list 
5
hlp@ubuntu:~$ tail -f /proc/interrupts |grep ens
tail: /proc/interrupts: file truncated
 16:          8       0    0   13     0    39    358          2   IO-APIC  16-fasteoi   vmwgfx, snd_ens1371, ens38
 56:         91       0    0    0     0     0      2       1468   PCI-MSI 1572864-edge      ens160-rxtx-0
 57:          0       0    0    0     0     0      0          0   PCI-MSI 1572865-edge      ens160-rxtx-1
 58:          0       0    0    0     0     0      0          0   PCI-MSI 1572866-edge      ens160-rxtx-2
 59:          0       0    0    0     0     0      0          0   PCI-MSI 1572867-edge      ens160-rxtx-3
 60:          0       0    0    0     0     0      1          0   PCI-MSI 1572868-edge      ens160-rxtx-4
 61:          0       0    0    0     0     0      2          3   PCI-MSI 1572869-edge      ens160-rxtx-5
 62:          1       0    0    0     0     1      4          2   PCI-MSI 1572870-edge      ens160-rxtx-6
 63:          0       0    0    0     0     0      0          0   PCI-MSI 1572871-edge      ens160-rxtx-7
 64:          0       0    0    0     0     0      0          0   PCI-MSI 1572872-edge      ens160-event-8
#Start nginx and run the load test again:
hlp@ubuntu:~$ cat /proc/interrupts |grep ens
		CPU0  CPU1 CPU2    CPU3   CPU4       CPU5       CPU6       CPU7
 16:      8     0    0      13     304         39       1369          2   IO-APIC  16-fasteoi   vmwgfx, snd_ens1371, ens38
 56:     91     0    0       0       0          0          2      93813   PCI-MSI 1572864-edge      ens160-rxtx-0
 57:      0     0    0       0       0          0      35083          1   PCI-MSI 1572865-edge      ens160-rxtx-1
 58:      0     0    0       0       0      32895          0          1   PCI-MSI 1572866-edge      ens160-rxtx-2
 59:      0     0    0       0   38237          0          0          1   PCI-MSI 1572867-edge      ens160-rxtx-3
 60:      0     0    0   34352       0          0          1          1   PCI-MSI 1572868-edge      ens160-rxtx-4
 61:      0     0    0   25975       0          0          2       3541   PCI-MSI 1572869-edge      ens160-rxtx-5
 62:      1     0    0       0       0          1      33745          2   PCI-MSI 1572870-edge      ens160-rxtx-6
 63:      0     0    0       0       0      35065          0          1   PCI-MSI 1572871-edge      ens160-rxtx-7
 64:      0     0    0       0       0          0          0          0   PCI-MSI 1572872-edge      ens160-event-8

2: (failed) Building DPDK (my environment turned out to be a 32-bit image; the final error was never resolved)

1: Switch to root for these steps.

2: The DPDK build is driven by the ./usertools/dpdk-setup.sh script; run it and follow the menu prompts to build step by step.

Option 39 fails with an environment error: I had not noticed my VM was installed from a 32-bit image. The matching 32-bit target is option 27, so try that.

#Error: /usr/include/features.h:367:25: fatal error: sys/cdefs.h: No such file or directory

#The file on a normal (64-bit) machine:
root@ubuntu:/# find -name cdefs.h
./home/ubuntu/0407_AT/openwrt-sdk-ramips-mt7621_gcc-7.4.0_musl.Linux-x86_64/staging_dir/toolchain-mipsel_24kc_gcc-7.4.0_musl/include/sys/cdefs.h
./usr/include/x86_64-linux-gnu/sys/cdefs.h
#The file on my machine:
hlp@ubuntu:/$ sudo find -name cdefs.h
./usr/include/i386-linux-gnu/sys/cdefs.h
#Only now did I notice my install was 32-bit...

The 32-bit headers do exist, so run option 27:

#Error: eal_memory.c:32:18: fatal error: numa.h: No such file or directory
 sudo apt-get install libnuma-dev
 #Rerun option 27 to build the i686 target. The build succeeds but reports: Installation cannot run with T defined and DESTDIR undefined
 #This only concerns installation; the build itself is unaffected

Set the environment variables:

#Watch the build output: it actually prints these hints. Substitute your own install directory and build target:
root@ubuntu:/home/hlp/dpdk/dpdk-stable-19.08.2# export RTE_SDK=/home/hlp/dpdk/dpdk-stable-19.08.2
root@ubuntu:/home/hlp/dpdk/dpdk-stable-19.08.2# export RTE_TARGET=i686-native-linux-gcc

Run testpmd:

#Option 43: insert the IGB_UIO module (choose this for the "vmxnet3" NIC)
#Option 44: insert the VFIO module (choose this for the "e1000" NIC)
#Option 49: bind a NIC to igb_uio; note the NIC's PCI id
Option: 49
Network devices using kernel driver
===================================
0000:02:06.0 '79c970 [PCnet32 LANCE] 2000' if=ens38 drv=pcnet32 unused=igb_uio,vfio-pci *Active*
0000:03:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens160 drv=vmxnet3 unused=igb_uio,vfio-pci *Active*

No 'Baseband' devices detected
==============================

No 'Crypto' devices detected
============================

No 'Eventdev' devices detected
==============================

No 'Mempool' devices detected
=============================

No 'Compress' devices detected
==============================

No 'Misc (rawdev)' devices detected
===================================
#The listing above shows the NICs; enter the PCI address of the one to bind, here 0000:03:00.0
Enter PCI address of device to bind to IGB UIO driver: 0000:03:00.0

#A warning may say the NIC is still in use; bring the NIC down and rerun the bind:
sudo ifconfig ens160 down
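Option 49 is a wrapper around the usertools/dpdk-devbind.py script, so the bind can also be scripted directly. The sketch below only prints the commands (the PCI address and interface name come from the listing above; run the printed lines as root from the DPDK source root):

```shell
PCI=0000:03:00.0    # PCI address of the NIC to bind (from option 49's listing)
IFACE=ens160        # kernel name of the same NIC
# Print the equivalent manual steps rather than executing them here:
echo "ifconfig $IFACE down"
echo "./usertools/dpdk-devbind.py --bind=igb_uio $PCI"
echo "./usertools/dpdk-devbind.py --status"
```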
#Option 53: run testpmd, entering core bitmask 7
#This fails with: error allocating rte services array
bitmask: 7
Launching app
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
EAL: Error - exiting with code: 1
  Cause: Cannot init EAL: Exec format error
#Possibly related to the hugepage configuration; the current hugepages can be checked:
root@ubuntu:/home/hlp/dpdk/dpdk-stable-19.08.2# grep Huge /proc/meminfo 
AnonHugePages:         0 kB
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
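A quick sanity check on those numbers, plus the hugetlbfs mount DPDK also needs (the dpdk-setup.sh menu has an entry that sets this up; done by hand it is the commented part, and /mnt/huge is just the conventional mount point):

```shell
pages=1024     # HugePages_Total from /proc/meminfo above
page_kb=2048   # Hugepagesize in kB
echo "$((pages * page_kb / 1024)) MB reserved"
# Mounting hugetlbfs by hand (as root):
#   mkdir -p /mnt/huge
#   mount -t hugetlbfs nodev /mnt/huge
```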

In the end I never got this 32-bit build to work; it always died when launching the test app with this error:

hlp@ubuntu:~/dpdk/dpdk-stable-19.08.2/i686-native-linux-gcc$ sudo ./app/testpmd i
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
EAL: Error - exiting with code: 1
  Cause: Cannot init EAL: Exec format error

2: (success) Testing in an amd64 VM

Environment: Ubuntu 16.04 + DPDK 19.08.2

1: Install the OS, configure multiple NICs, and install gcc and python.

2: Configure hugepages (if the VM then fails to boot, give it more memory) and install DPDK:

Install libnuma-dev (needed for numa.h).

Run the build for the matching target, here option 39, x86_64-native-linux-gcc.

Set the environment variables: export RTE_SDK=/home/hlp/dpdk/dpdk-stable-19.08.2

export RTE_TARGET=x86_64-native-linux-gcc
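Put together, the build half of this step looks roughly like this (a sketch: the paths are this article's, and the `make install T=...` line is the legacy build that dpdk-setup.sh option 39 drives):

```shell
export RTE_SDK=/home/hlp/dpdk/dpdk-stable-19.08.2
export RTE_TARGET=x86_64-native-linux-gcc
# From the source tree, option 39 essentially runs (as root):
#   cd "$RTE_SDK" && make install T="$RTE_TARGET"
echo "building for $RTE_TARGET"
```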

#Option 43: insert the IGB_UIO module (choose this for the "vmxnet3" NIC)

#Option 44: insert the VFIO module (choose this for the "e1000" NIC)

#Option 49: bind the NIC to igb_uio; note the PCI id

#Option 53: run testpmd
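One note on the `bitmask: 7` entered at option 53: the core mask is hexadecimal, so 7 = binary 111 selects lcores 0-2:

```shell
# bitmask 7 = 0b111 -> lcores 0, 1 and 2
printf '0x%x\n' $(( (1 << 0) | (1 << 1) | (1 << 2) ))
```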

Option: 53


  Enter hex bitmask of cores to execute testpmd app on
  Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 7
Launching app
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done

I fumbled around and even renamed the NICs back to the ethX style, suspecting a driver problem; it turned out option 49 had simply failed to bind the NIC.

Once the bind succeeded, testpmd ran:

Option: 53


  Enter hex bitmask of cores to execute testpmd app on
  Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 7
Launching app
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 00:0C:29:71:A2:F2
Checking link statuses...
Done
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 00:0C:29:71:A2:F2
Device name: 0000:03:00.0
Driver name: net_vmxnet3
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off 
  filter off 
  qinq(extend) off 
Supported RSS offload flow types:
  ipv4
  ipv4-tcp
  ipv6
  ipv6-tcp
Minimum size of RX buffer: 1646
Maximum configurable length of RX packet: 16384
Current number of RX queues: 1
Max possible RX queues: 16
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 128
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 8
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 512
TXDs number alignment: 1
Max segment number per packet: 255
Max segment number per MTU/TSO: 16

3: Running the DPDK test demos

Run the helloworld example shipped with DPDK:

root@ubuntu:/home/hlp/dpdk/dpdk-stable-19.08.2/examples/helloworld# make
/bin/sh: 1: pkg-config: not found
/bin/sh: 1: pkg-config: not found
  CC main.o
  LD helloworld
  INSTALL-APP helloworld
  INSTALL-MAP helloworld.map
root@ubuntu:/home/hlp/dpdk/dpdk-stable-19.08.2/examples/helloworld# ./build/helloworld -l 0-3 -n 4
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 0
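On the two EAL flags used above (as documented for DPDK 19.08): `-l 0-3` gives the list of lcores to launch on, which is why one greeting prints per core, and `-n 4` sets the number of memory channels. The core range is just shorthand for an explicit list:

```shell
# "-l 0-3" is equivalent to this lcore list:
seq -s, 0 3
```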

Reconfigure the VM with 8 cores and test again:

Remember to set the environment variables again, redo the option 49 bind (it only succeeds if the NIC to bind is down), and confirm option 53's testpmd still runs.

helloworld test:

root@ubuntu:/home/hlp/dpdk/dpdk-stable-19.08.2/examples/helloworld# ./build/helloworld -l 0-7 -n 8
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 4
hello from core 5
hello from core 6
hello from core 7
hello from core 0

l3fwd and kni tests: to be added.

4: Recap of the pitfalls:

1: The multiqueue NIC gets as many queues as there are configured processors. Configure several NICs to make debugging easier (this is all on a VM).

2: Edit the VM's .vmx configuration file so the NIC supports multiqueue, then check /proc/interrupts to confirm the queues are active.

3: After configuring hugepages, the VM sometimes fails to boot; increase the VM's memory.

4: After editing the grub file, always run update-grub and reboot for the change to take effect.

Multiple NICs are made persistent in /etc/network/interfaces; restart the networking service to apply.

Legacy NIC naming, hugepages, and CPU isolation are all configured in **/etc/default/grub**.

5: When building DPDK, pick the build target that matches your environment.

I installed the VM carelessly as 32-bit and only noticed when the 64-bit build failed with missing headers.

6: Don't forget to set the environment variables before building.

7: For the option 49 bind to succeed, the NIC being bound must be down (ifconfig xxx down).

8: If the build fails with numa.h not found, install libnuma-dev.

9: If option 53's testpmd fails with "testpmd: No probed ethernet devices", the option 49 NIC bind did not succeed.


Reprinted from: blog.csdn.net/yun6853992/article/details/121639253