Virtualization Technology—SR-IOV Single Root I/O Virtualization

SR-IOV

Traditional I/O virtualization requires the VMM to trap and emulate the I/O operations of every VM. When the VMM has to arbitrate concurrent, isolated access by multiple VMs to the same I/O device, it becomes a performance bottleneck.

With Intel VT-d hardware-assisted I/O virtualization, an entire PCIe device can be passed through to a VM: the IOMMU, together with DMA Remapping and Interrupt Remapping, lets the VM drive the device without VMM involvement. The problem is that there are many VMs but only a handful of physical devices.

Against this background, Intel and other mainstream NIC vendors introduced SR-IOV (Single-Root I/O Virtualization), a PCIe device virtualization standard built on top of Intel VT-d hardware-assisted virtualization. Clearly, SR-IOV was designed with VMs in mind.

[Figure: SR-IOV architecture overview]

By adding hardware components to the physical NIC, SR-IOV virtualizes one physical function (PF) into multiple virtual functions (VFs):

  • PF (Physical Function): the manager. The PF owns all physical resources and the life cycle of the VFs, and allocates resources such as MAC addresses and Rx/Tx queues to each VF. The operating system or VMM configures and uses the VFs through the PF.

  • VF (Virtual Function): a very lightweight "virtual channel" that exposes only I/O functionality, yet supports per-VF configuration of network attributes such as VLAN tags and QoS policies. Each VF has its own PCIe Configuration Space, PCIe BDF, MSI-X interrupts, MAC address, and a set of Rx/Tx queues; all other physical resources are shared with the PF.

Implementing SR-IOV requires two core mechanisms:

  1. BAR base address mapping: the PCIe BARs of a VF are mapped to the PCIe BARs of the PF, so that the PF's hardware resources can be accessed through the VF.
  2. Virtual I/O queues: a VF's I/O requests are mapped either to a queue shared with the PF or to the VF's dedicated queues, improving the VM's network I/O performance.

By default the VFs of an SR-IOV NIC are disabled and the PF behaves like an ordinary PCIe device. Once VFs are enabled, the PF creates them through its registers, and the PCIe BARs of the VFs are set up through the PF's BDF. Each VF has a virtual PCIe Configuration Space that maps its register set, and the VF device driver operates on that register set to use the VF's I/O capabilities.
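
For reference, the PF/VF relationship can be inspected through Linux sysfs. A minimal sketch, assuming a PF named enp129s0f0 and a VF at BDF 0000:81:10.0 (both names are examples reused from later in this article):

$ cat /sys/class/net/enp129s0f0/device/sriov_totalvfs   # maximum number of VFs the PF supports
$ ls /sys/class/net/enp129s0f0/device/ | grep virtfn    # one virtfnN symlink per enabled VF
$ readlink /sys/bus/pci/devices/0000:81:10.0/physfn     # a VF links back to its parent PF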

[Figure: PF and VF layout on an SR-IOV NIC]

SR-IOV VEB

SR-IOV VEB (Virtual Ethernet Bridge) is a virtual switching function implemented in NIC hardware that provides high-performance Layer 2 switching and forwarding.

As shown in the Intel X710 VEB figure below, the VMM manages the VEB configuration through the PF. When VEB is enabled, the VEB connects the PF with all VFs and performs Layer 2 forwarding based on the MAC addresses and VLAN IDs of the VFs and the PF.

  • Ingress (from the external network into the NIC): if the frame's dst MAC and VLAN ID match a VF, the frame is forwarded to that VF; otherwise it goes to the PF. If the dst MAC is a broadcast address, the frame is flooded within its VLAN.

  • Egress (sent from the PF or a VF): if the frame's MAC address does not match any port (VF or PF) in the same VLAN, the frame leaves the NIC toward the external network; otherwise it is forwarded internally to the matching PF or VF. If the dst MAC is a broadcast address, the frame is flooded both within the VLAN and out of the NIC.

[Figure: Intel X710 VEB]

Another consequence of SR-IOV VEB is that east-west traffic between VF-backed VMs on the same host does not need to traverse the ToR switch: the SR-IOV NIC forwards it internally by looking up its MAC table (populated statically or dynamically, depending on the NIC model).

[Figure: VF-to-VF east-west forwarding inside the NIC]

SR-IOV VEPA

SR-IOV VEB offers high performance, but in production it also brings drawbacks: network traffic is not visible, network control policies cannot be enforced, and management does not scale well.

There are two main reasons for these problems:

  1. East-west traffic between VMs on the same host never passes through devices, such as the ToR switch, that integrate traffic-collection protocols.
  2. Traffic from a VM to the host carries no identifying information that can be collected.

Therefore, there are two ways to solve these problems:

  1. Force the VM's traffic to pass through a collection point.
  2. Force the VM's traffic to carry a collection identifier.

This is what is known as VM traffic awareness technology, and there are currently two main camps:

  1. VN-Tag, promoted mainly by Cisco and VMware and standardized as 802.1Qbh BPE (Bridge Port Extension): it tries to provide a complete virtualized network solution from the access layer to the aggregation layer and to achieve a software-defined, controllable network as far as possible. Because it extends traditional network protocols, it requires new network equipment, which makes it relatively expensive.

  2. VEPA (Virtual Ethernet Port Aggregator), promoted mainly by HP, Juniper, IBM, QLogic, and Brocade and standardized as 802.1Qbg EVB (Edge Virtual Bridging): it tries to improve software-simulated networking at lower cost using existing equipment.

Here we focus on SR-IOV VEPA. As shown in the Intel X710 VEPA figure below, VEPA replaces VEB.

[Figure: Intel X710 VEPA]

The core idea of VEPA is to force all network traffic generated by VMs, even traffic between VMs on the same host, up to the ToR switch for forwarding. Traditional switches, however, do not allow a frame to be sent back out of the port it arrived on, so a switch that supports the VEPA standard is required. As shown in the figure below, a packet sent from VM1 to VM2 or VM3 first goes to the external switch; after a table lookup, the frame returns to the server along the same path. This mode of operation is called hairpin-turn forwarding.
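
On NICs whose drivers expose the embedded switch mode (the Intel X710/i40e family does, for example), the VEB/VEPA behavior can usually be toggled from the host with the iproute2 bridge tool. A hedged sketch, assuming the PF is named enp129s0f0:

$ bridge link set dev enp129s0f0 hwmode vepa   # hairpin all VM traffic through the external switch
$ bridge link set dev enp129s0f0 hwmode veb    # revert to internal VEB forwarding
$ bridge link show dev enp129s0f0              # check the current mode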

[Figure: VEPA hairpin-turn forwarding through the external switch]

SR-IOV Multi-Channel

In a data communication network, identifying traffic requires a dedicated field. HP chose QinQ (802.1ad): on top of standard VEPA, the QinQ S-TAG is used to identify virtual machine traffic, yielding an enhanced VEPA known as 802.1Qbg Multi-Channel.

Multi-Channel enhances VEPA by adding an IEEE-standard tag to virtual machine packets. The tagging mechanism allows VEB, Director IO, and VEPA to be deployed side by side, so administrators can choose how each virtual machine attaches to the external network (VEB, Director IO, or VEPA) according to security, performance, and manageability requirements. Multi-Channel was proposed by HP and was eventually accepted by the IEEE 802.1 working group as an optional part of the EVB standard.

Multi-Channel divides a switch port or a NIC into multiple logical channels that are isolated from one another. Each logical channel can be defined as VEB, VEPA, or Director IO according to the user's needs, and each is presented to the external network as an independent channel. Multi-Channel borrows the 802.1ad S-TAG (QinQ) standard and uses an extra S-TAG VLAN ID to distinguish the logical channels carved out of a NIC or switch port.

With Multi-Channel, the external physical switch can tell from a packet's S-TAG which VEPA/VEB or which Director IO NIC the traffic came from, and vice versa.
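
Multi-Channel itself is negotiated between the NIC and the switch, but as a rough illustration of how an 802.1ad S-TAG distinguishes channels, Linux can stack a QinQ sub-interface on top of a regular VLAN; the interface names and tag IDs below are hypothetical:

$ ip link add link eno1 name eno1.100 type vlan protocol 802.1ad id 100       # outer S-TAG 100
$ ip link add link eno1.100 name eno1.100.10 type vlan protocol 802.1Q id 10  # inner C-TAG 10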

As shown in the figure below, Multi-Channel allows the different approaches to be combined:

  1. Multiple VEBs or VEPAs share the same physical NIC, which solves the requirement that VEB and VEPA share one external network (NIC). Administrators can use VEB for specific virtual machines that need better switching performance and VEPA for the others that need better policy enforcement and traffic visibility.

  2. A virtual machine is mapped directly to a physical NIC (SR-IOV VF Director IO), while the other virtual machines still share the NIC through VEB or VEPA.

[Figure: 802.1Qbg Multi-Channel deployment combinations]

SR-IOV OvS

Because SR-IOV can only be configured and used through the CLI of the operating system and has no native SDN control plane, extra components have to be developed (e.g. the Neutron sriov-agent), which makes SR-IOV harder to operate.

Fortunately, with the rise of SmartNICs and DPUs, combining SDN with SR-IOV has matured. For example, in the SmartNIC OvS Fastpath hardware offload solution first proposed by Mellanox, a VF serves as the virtual channel connecting the VM to the OvS Fastpath, providing both programmable flexibility and high performance.
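
As a rough sketch of how such an offload is typically enabled on a Mellanox ConnectX-class SmartNIC (the PF BDF is a placeholder, and the exact steps vary by driver and OvS version):

$ devlink dev eswitch set pci/0000:81:00.0 mode switchdev      # put the NIC eSwitch into switchdev mode
$ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true    # let OvS offload flows to the NIC
$ systemctl restart openvswitch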

[Figure: SmartNIC OvS Fastpath hardware offload]

Applications of SR-IOV

Enable SR-IOV VFs

Step 1. Ensure SR-IOV and VT-d are enabled in BIOS.

Step 2. Enable I/O MMU in Linux by adding intel_iommu=on to the kernel parameters, for example, using GRUB.

...
linux16 /boot/vmlinuz-3.10.0-862.11.6.rt56.819.el7.x86_64 root=LABEL=img-rootfs ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet intel_iommu=on iommu=pt isolcpus=2-3,8-9 nohz=on nohz_full=2-3,8-9 rcu_nocbs=2-3,8-9 intel_pstate=disable nosoftlockup default_hugepagesz=1G hugepagesz=1G hugepages=16 LANG=en_US.UTF-8
...
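
On a RHEL/CentOS host using GRUB2, for example, the parameter is usually added to /etc/default/grub and the configuration regenerated (a sketch; file paths differ between distributions and BIOS/UEFI setups):

$ grep GRUB_CMDLINE_LINUX /etc/default/grub
GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"
$ grub2-mkconfig -o /boot/grub2/grub.cfg
$ reboot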

Step 3. Create the VFs via the PCI sysfs interface (e.g. for enp129s0f0 and enp129s0f1).

$ cat /etc/sysconfig/network-scripts/ifcfg-enp129s0f0
DEVICE="enp129s0f0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"

$ cat /etc/sysconfig/network-scripts/ifcfg-enp129s0f1
DEVICE="enp129s0f1"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"

$ echo 16 > /sys/class/net/enp129s0f0/device/sriov_numvfs
$ echo 16 > /sys/class/net/enp129s0f1/device/sriov_numvfs

Step 4. Verify that the VFs have been created and are in the up state.

$ lspci | grep Ethernet
03:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
81:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
81:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
81:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:11.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:11.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:11.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:11.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:11.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:11.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:11.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:11.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:12.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:12.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:12.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:12.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:12.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:12.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:12.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:12.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:13.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:13.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:13.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:13.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:13.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:13.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:13.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
81:13.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

$ ip link show enp129s0f0
4: enp129s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 90:e2:ba:34:fb:32 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC be:40:5c:21:98:31, spoof checking on, link-state auto, trust off, query_rss off
    vf 1 MAC b2:e9:21:c7:4a:e0, spoof checking on, link-state auto, trust off, query_rss off
    vf 2 MAC ae:75:99:e3:dc:1d, spoof checking on, link-state auto, trust off, query_rss off
    vf 3 MAC 66:73:75:d7:15:a8, spoof checking on, link-state auto, trust off, query_rss off
    vf 4 MAC b6:04:f7:ed:ad:36, spoof checking on, link-state auto, trust off, query_rss off
    vf 5 MAC a2:ad:62:61:2a:bd, spoof checking on, link-state auto, trust off, query_rss off
    vf 6 MAC 1a:be:5b:ab:b9:fd, spoof checking on, link-state auto, trust off, query_rss off
    vf 7 MAC 3a:63:44:d9:8f:44, spoof checking on, link-state auto, trust off, query_rss off
    vf 8 MAC 7e:fe:c7:f6:9d:5d, spoof checking on, link-state auto, trust off, query_rss off
    vf 9 MAC 4a:e9:57:84:50:29, spoof checking on, link-state auto, trust off, query_rss off
    vf 10 MAC 0a:a7:e7:ff:ee:c8, spoof checking on, link-state auto, trust off, query_rss off
    vf 11 MAC 02:58:45:61:15:a7, spoof checking on, link-state auto, trust off, query_rss off
    vf 12 MAC 2a:75:77:ff:c1:6d, spoof checking on, link-state auto, trust off, query_rss off
    vf 13 MAC be:99:4d:22:5a:87, spoof checking on, link-state auto, trust off, query_rss off
    vf 14 MAC 52:44:5f:d7:fb:e3, spoof checking on, link-state auto, trust off, query_rss off
    vf 15 MAC b2:16:c3:a2:5f:c7, spoof checking on, link-state auto, trust off, query_rss off

$ ip link show enp129s0f1
5: enp129s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 90:e2:ba:34:fb:33 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 86:64:1f:09:bb:d5, spoof checking on, link-state auto, trust off, query_rss off
    vf 1 MAC a2:0f:71:30:31:31, spoof checking on, link-state auto, trust off, query_rss off
    vf 2 MAC 0e:b1:06:54:3f:75, spoof checking on, link-state auto, trust off, query_rss off
    vf 3 MAC ca:35:be:e2:ea:70, spoof checking on, link-state auto, trust off, query_rss off
    vf 4 MAC 26:35:04:86:42:50, spoof checking on, link-state auto, trust off, query_rss off
    vf 5 MAC e2:fe:00:1a:74:f7, spoof checking on, link-state auto, trust off, query_rss off
    vf 6 MAC 6a:ef:8b:61:a6:c0, spoof checking on, link-state auto, trust off, query_rss off
    vf 7 MAC fa:61:e2:f9:a1:2d, spoof checking on, link-state auto, trust off, query_rss off
    vf 8 MAC 16:8c:47:34:61:03, spoof checking on, link-state auto, trust off, query_rss off
    vf 9 MAC f6:85:2d:85:8e:a3, spoof checking on, link-state auto, trust off, query_rss off
    vf 10 MAC 0e:4b:d8:0a:9a:7f, spoof checking on, link-state auto, trust off, query_rss off
    vf 11 MAC f2:27:a6:ee:da:be, spoof checking on, link-state auto, trust off, query_rss off
    vf 12 MAC 82:37:55:7f:cd:19, spoof checking on, link-state auto, trust off, query_rss off
    vf 13 MAC 2e:30:e1:3b:c1:a1, spoof checking on, link-state auto, trust off, query_rss off
    vf 14 MAC 4e:56:c7:3f:e5:77, spoof checking on, link-state auto, trust off, query_rss off
    vf 15 MAC 56:21:25:bd:ac:18, spoof checking on, link-state auto, trust off, query_rss off

NOTE: If the interfaces are down, set them to up before launching a guest, otherwise the instance will fail to spawn.

ip link set <interface> up
# e.g. an interface that is still down shows NO-CARRIER in its flags:
# enp129s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP>

Step 5. Persist the created VFs across reboots.

echo "echo '7' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local

Attach a VF to a KVM virtual machine

[Figure: VF passthrough to a KVM virtual machine]

Use the QEMU option -device vfio-pci,host=<vf pci bus addr> to pass the specified VF through to the KVM virtual machine.

qemu-system-x86_64 -enable-kvm -drive file=<vm img>,if=virtio -cpu host -smp 16 -m 16G \
  -name <vm name> -device vfio-pci,host=<vf pci bus addr> -device vfio-pci,host=<vf pci bus addr> -vnc :1 -net none
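
Before launching QEMU this way, the VF normally has to be bound to the vfio-pci driver. A sketch, using the 82599 VF vendor/device IDs (8086 10ed) from the virsh output below and an example BDF:

$ modprobe vfio-pci
$ echo 0000:81:10.0 > /sys/bus/pci/devices/0000:81:10.0/driver/unbind
$ echo 8086 10ed > /sys/bus/pci/drivers/vfio-pci/new_id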

You can also attach a VF by manually assigning the PCI device to the VM:

  1. View the list of PCI devices:
[root@overcloud-compute-0 ~]# virsh nodedev-list | grep pci
pci_0000_00_00_0
pci_0000_00_01_0
pci_0000_00_01_1
...
  2. Check the details of the specified PCI device and note its PCI BDF information, for example: <address domain='0x0000' bus='0x81' slot='0x10' function='0x2'/>.
$ virsh nodedev-dumpxml pci_0000_81_10_2
<device>
  <name>pci_0000_81_10_2</name>
  <path>/sys/devices/pci0000:80/0000:80:03.0/0000:81:10.2</path>
  <parent>pci_0000_80_03_0</parent>
  <driver>
    <name>ixgbevf</name>
  </driver>
  <capability type='pci'>
    <domain>0</domain>
    <bus>129</bus>
    <slot>16</slot>
    <function>2</function>
    <product id='0x10ed'>82599 Ethernet Controller Virtual Function</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
    <capability type='phys_function'>
      <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
    </capability>
    <iommuGroup number='46'>
      <address domain='0x0000' bus='0x81' slot='0x10' function='0x2'/>
    </iommuGroup>
    <numa node='1'/>
    <pci-express>
      <link validity='cap' port='0' width='0'/>
      <link validity='sta' width='0'/>
    </pci-express>
  </capability>
</device>
  3. Shut down the guest.

  4. Write a new-device XML file based on the device information above.

$ cat /tmp/new-device.xml

<interface type='hostdev' managed='yes'>
   <source>
     <address type='pci' domain='0x0000' bus='0x81' slot='0x10' function='0x2' />
   </source>
</interface>
  5. Attach the VF to the VM.
$ virsh attach-device VM1 /tmp/new-device.xml --live --config
Device attached successfully
  6. Check the updated XML of the VM.
$ virsh dumpxml vm1
 ...
 <devices>
   ...
   <interface type='hostdev' managed='yes'>
      <mac address='52:54:00:f0:d3:b8'/>
      <driver name='kvm'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x81' slot='0x10' function='0x2' />
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
    ...
  </devices>
  7. Start the virtual machine.
virsh start MyGuest
  8. Log in to the GuestOS and check the NIC information.
$ ip addr show eth1
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 2c:53:4a:02:20:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.169/24 brd 192.168.99.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe3b:6128/64 scope link 
       valid_lft forever preferred_lft forever
  9. Detach the PCI device.
$ virsh nodedev-dettach pci_0000_06_10_0
Device pci_0000_06_10_0 detached

NUMA affinity for SR-IOV

The DMA of a PCIe device has NUMA affinity, and SR-IOV is no exception. To get the best performance from a VF passthrough VM, the VM's vCPUs and the VF are usually kept on the same NUMA node.

You can check the NUMA affinity of SR-IOV with the following command.

# The SR-IOV NIC enp129s0f0 belongs to NUMA Node 1
$ cat /sys/class/net/enp129s0f0/device/numa_node
1

Check the vCPU binding of VM1:

$ virsh vcpupin VM1_uuid
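
For example, assuming lscpu reports NUMA node 1 as CPUs 8-15 (a hypothetical layout), the VM's vCPUs can be pinned next to the VF:

$ lscpu | grep "NUMA node1"
NUMA node1 CPU(s):   8-15
$ virsh vcpupin VM1 0 8
$ virsh vcpupin VM1 1 9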

VF network configuration

A VF's MAC address, VLAN tag, and promiscuous (trust) mode can all be configured:

$ ip l |grep 5e:9c
    vf 14 MAC fa:16:3e:90:5e:9c, vlan 19, spoof checking on, link-state auto, trust on, query_rss off
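
A configuration like the one above can be applied from the host with the iproute2 VF sub-commands; a sketch, with the PF name and values taken as examples:

$ ip link set dev enp129s0f0 vf 14 mac fa:16:3e:90:5e:9c
$ ip link set dev enp129s0f0 vf 14 vlan 19
$ ip link set dev enp129s0f0 vf 14 trust on spoofchk on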

The VLAN ID of the VF device is also recorded in the XML file of the KVM virtual machine, as follows:

<interface type='hostdev' managed='yes'>                                                         
  <mac address='fa:aa:aa:aa:aa:aa'/>
  <driver name='kvm'/>                                                                           
  <source>                                                                                       
    <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x7'/>                  
  </source>                                                                                      
  <vlan>                                                                                          
    <tag id='190'/>                                                                              
  </vlan>
  <alias name='hostdev0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>

VFs Bonding

In physical servers we often bond two Ethernet ports for link high availability and extra bandwidth. In the SR-IOV passthrough scenario, one VF from each of the two PFs of an SR-IOV NIC can likewise be passed through and bonded inside the VM. The prerequisite is that the two VFs are configured with the same MAC address (see the sketch below).
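
Since both VFs must present the same MAC address, it can be set from the host before the VM starts; a sketch using the PF names from earlier, with a hypothetical VF index and MAC:

$ ip link set enp129s0f0 vf 0 mac 52:54:00:aa:bb:cc
$ ip link set enp129s0f1 vf 0 mac 52:54:00:aa:bb:cc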

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
DEVICE=bond0
NAME=bond0
ONBOOT=yes
TYPE=Bond

$ cat /etc/sysconfig/network-scripts/ifcfg-ens4
DEVICE=ens4
MASTER=bond0
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet

$ cat /etc/sysconfig/network-scripts/ifcfg-ens5
DEVICE=ens5
MASTER=bond0
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet

Live Migration Issues for SR-IOV Virtual Machines

Passthrough is the basic usage of SR-IOV devices, enabling multiple VMs to achieve performance comparable to that on bare metal hosts.

Note, however, that SR-IOV passthrough limits VM portability, because SR-IOV relies on Intel VT-d. When the VF finishes initializing in the GuestOS, it establishes the DMA address mappings between guest physical addresses (GPA) and host physical addresses (HPA); this "stateful" mapping is lost during live migration, so the VF must be re-attached (and the mapping rebuilt) after migrating.

[Figure: live migration limitation of SR-IOV passthrough]

In practice, after a VF passthrough VM is migrated, you have to log in to the GuestOS and run ifup manually before the interface obtains an IP address again, and network traffic is interrupted throughout this process.

One workaround is to use a port that does support live migration to carry the SR-IOV port's traffic during the migration. The steps are as follows:

  1. Add a normal (OvS) port or an indirect-mode SR-IOV (macvtap SR-IOV) port to the SR-IOV virtual machine.
  2. In the GuestOS, bond the original SR-IOV port with the port added in Step 1 (see the sketch after this list).
  3. Live migrate the SR-IOV virtual machine.
  4. After the migration completes, log in to the GuestOS and run ifup <vnic> to bring the original SR-IOV port back up.
  5. Delete the port added in Step 1.
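
A rough sketch of Step 2 inside the GuestOS, using iproute2 and hypothetical interface names (ens5 for the SR-IOV VF, ens6 for the migratable OvS/virtio port):

$ ip link add bond0 type bond mode active-backup miimon 100
$ ip link set ens5 down && ip link set ens5 master bond0
$ ip link set ens6 down && ip link set ens6 master bond0
$ echo ens5 > /sys/class/net/bond0/bonding/primary   # prefer the VF while it is present
$ ip link set bond0 up && ip link set ens5 up && ip link set ens6 up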

[Figure: bonding an SR-IOV port with a migratable port]

Origin blog.csdn.net/Jmilk/article/details/130253170