DPDK virtio-user


virtio-user is a solution proposed by DPDK for two specific scenarios. The first is to let containerized DPDK applications use virtio, supported since DPDK v16.07; the second is to use it as an exception path for communicating with the kernel, introduced in DPDK v17.02.

virtio_user for container networking

We know that for virtual machines there is a standard para-virtualization protocol, virtio, that governs communication between the guest and the host. In a container environment, however, virtio cannot be used directly: a virtual machine is emulated by QEMU, which shares the memory of the entire KVM guest with the host, and that is clearly unreasonable for a DPDK-accelerated containerized environment. A DPDK container only needs to share the hugepage region of its virtual memory with the host, because DPDK sends and receives packets out of hugepage memory; sharing everything else is pointless.

Therefore, virtio_user is in fact a small modification on top of the virtio PMD: in short, it adds the logic for sharing hugepages and simplifies the shared-memory handling.

If you are interested, you can compare the virtio_user code with the code in drivers/net/virtio; most of it is in fact the same.

From the perspective of DPDK, virtio_user is loaded as a virtual device (vdev) and acts as the virtio front-end driver; the corresponding back-end communication driver is the user-space vhost_user. To use it, we only need to wire up the matching interface on each side, as follows.

vhost and vhost_user are essentially shared-memory IPC: a vhost_user shared-memory file is created on the host side, and virtio_user is given the path to that file when it starts, for example:

1) First, create the vhost_user shared-memory file on the host side:
--vdev 'eth_vhost_user0,iface=/tmp/vhost_user0'
2) Then start virtio_user, pointing it at that file path:
--vdev=virtio_user0,path=/tmp/vhost_user0
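
Putting the two flags together, here is a minimal sketch of a full setup using testpmd, in the spirit of the DPDK container-networking howto. The CPU core lists, file prefixes, and container image name are illustrative assumptions, not part of the original text:

# Host side: vhost_user backend creates /tmp/vhost_user0
testpmd -l 1-2 -n 4 --no-pci --file-prefix=host \
    --vdev 'eth_vhost_user0,iface=/tmp/vhost_user0' -- -i

# Container side: virtio_user front end points at the same file;
# the hugepage mount and the socket file must both be visible inside
# the container (image name <dpdk-image> is hypothetical)
docker run -it --privileged \
    -v /dev/hugepages:/dev/hugepages \
    -v /tmp/vhost_user0:/tmp/vhost_user0 \
    <dpdk-image> \
    testpmd -l 3-4 -n 4 --no-pci --file-prefix=container \
        --vdev=virtio_user0,path=/tmp/vhost_user0 -- -i

Note that only the hugepage mount is shared into the container, which is exactly the point made above: the data path needs nothing beyond the hugepage memory and the control socket.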

virtio_user as an exception path for communicating with the kernel

The other use of virtio_user is as an exception path for communicating with the kernel. We know that DPDK is a kernel-bypass packet-processing scheme, which is the source of its high performance, but sometimes packets received by DPDK (control packets, for example) need to be handed up to the kernel network stack for further processing. In DPDK, this path is called the exception path.

Several exception-path schemes already existed before this, such as the traditional Tun/Tap, KNI (Kernel NIC Interface), AF_PACKET, and SR-IOV-based Flow Bifurcation. They will not be covered here; if you are interested, the DPDK website describes each of them.

Just as the container-networking solution uses vhost_user as the back-end driver, making virtio_user communicate with the kernel only requires loading the kernel vhost modules (vhost.ko and vhost-net.ko, which exposes /dev/vhost-net) and letting them act as virtio_user's back-end communication driver.
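
As a minimal sketch of this mode (following the DPDK virtio_user exception-path documentation; the core list and queue size are illustrative), virtio_user is pointed at /dev/vhost-net instead of a socket file, and a tap interface appears on the kernel side:

# Load the kernel backend, which exposes /dev/vhost-net
modprobe vhost-net

# Start virtio_user with /dev/vhost-net as its backend
testpmd -l 2-3 -n 4 \
    --vdev=virtio_user0,path=/dev/vhost-net,queue_size=1024 -- -i

# A tap interface now exists in the kernel; configure it like any
# other netdev (the tap name may vary on your system)
ip link set dev tap0 up

Packets that DPDK hands to virtio_user0 are delivered through vhost-net into the tap device and enter the kernel network stack from there, and vice versa.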

So we can see that the two schemes are essentially the same; only the back-end driver changes. This is precisely the advantage of virtio: it defines a set of common interface standards, so supporting a new communication channel only requires loading the corresponding back-end driver. The changes needed are tiny, and the extensibility is high.

