OpenStack PCI passthrough environment configuration

Author: Zhang Hangdong

OpenStack version: Kilo

This article is mainly a personal study summary. You are welcome to reprint it, but please be sure to credit the author and source. Thank you!


With device passthrough, a virtual machine can use a physical device at near-native performance. Both Intel and AMD offer support for device passthrough (along with new instructions to assist the hypervisor) in their processor architectures: Intel calls this support Virtualization Technology for Directed I/O (VT-d), while AMD calls it the I/O Memory Management Unit (IOMMU). In either case, the CPU provides a way to map PCI physical addresses into a guest. Once this mapping is in place, the hardware takes care of access to (and protection of) the device, and the guest operating system uses the device as if it were running on a non-virtualized system. In addition to mapping guest memory to physical memory, the architecture provides isolation so that other guests (or the hypervisor) are prevented from accessing that memory.

 

1. Confirm whether the Host supports pci-passthrough

Because hardware support is required, first confirm that the CPU and motherboard support Intel or AMD hardware-assisted virtualization: consult the vendor's official hardware support list, or check the relevant options in the BIOS.

In addition, as the saying goes, even the cleverest cook cannot make a meal without rice: CPU support is only a necessary condition for PCI passthrough.

You also need a network card that supports PCI passthrough.

The following takes Intel E5-2690 + Intel X540 10G NIC + RHEL7.0 as an example.
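A quick check from the running system (a sketch; vmx/svm only shows the CPU virtualization extensions, while VT-d/IOMMU support shows up in the kernel log):

# Count CPU threads that report virtualization extensions (Intel VT-x = vmx, AMD-V = svm):
[root@nova2 ~]# grep -cE 'vmx|svm' /proc/cpuinfo

# Look for IOMMU / DMAR messages, which indicate VT-d is present and recognized:
[root@nova2 ~]# dmesg | grep -iE 'dmar|iommu'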

 

2. Check whether the Host has enabled the hardware-assisted virtualization function

[root@nova2 ~]# cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet LANG=en_US.UTF-8 intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=20

# The presence of intel_iommu=on indicates that the Intel hardware-assisted virtualization (VT-d/IOMMU) feature is enabled.

 

If the hardware supports hardware-assisted virtualization but it is not yet enabled, you can configure it as follows:

[root@nova2 ~]# vi /boot/grub2/grub.cfg   # the file may differ on other OSes or versions

# Add the following to the kernel boot parameters:

intel_iommu=on

Then reboot for the configuration to take effect.
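On RHEL 7 the more durable way is usually to edit /etc/default/grub and regenerate grub.cfg, so the setting survives kernel updates. A minimal sketch (assuming a BIOS-booted system; the GRUB_CMDLINE_LINUX contents below are only an example):

[root@nova2 ~]# vi /etc/default/grub
# Append intel_iommu=on to the GRUB_CMDLINE_LINUX line, for example:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet intel_iommu=on"

[root@nova2 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@nova2 ~]# reboot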

 

3. Confirm NIC information

[root@nova2 ~]# lspci -nn | grep Ethernet

01:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)

01:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)

82:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)

82:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)

84:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)

84:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)

 

Taking the Intel X540 10G NIC at 84:00.1 as an example:

① 84:00.1   PCI bus address

② 8086      vendor ID

③ 1528      product ID

(Remember these three pieces of information; they will be used later.)
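If you only need the numeric IDs of a single port, lspci can be limited to that slot; a small sketch with output along these lines:

[root@nova2 ~]# lspci -n -s 84:00.1
84:00.1 0200: 8086:1528 (rev 01)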

 

4. Confirm the PCI device's driver information and unbind it from the host's default driver so it can be passed through to the virtual machine

[root@nova2 ~]# virsh nodedev-list | grep pci | grep 84 #84 is from ① above

pci_0000_84_00_0

pci_0000_84_00_1

# Because this NIC has two physical ports, grep returns two results. Below we use only pci_0000_84_00_1 for the demonstration.

 

#Then confirm the relevant information of pci_0000_84_00_1

[root@nova2 ~]# virsh nodedev-dumpxml pci_0000_84_00_1

<device>

  <name>pci_0000_84_00_1</name>

  <path>/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.1</path>

  <parent>pci_0000_80_03_0</parent>

  <driver>

    <name>ixgbe</name>   # the host's default driver

  </driver>

  <capability type='pci'>

    <domain>0</domain>

    <bus>132</bus>

    <slot>0</slot>

    <function>1</function>

    <product id='0x1528'>Ethernet Controller 10-Gigabit X540-AT2</product>

    <vendor id='0x8086'>Intel Corporation</vendor>

    <iommuGroup number='34'>

      <address domain='0x0000' bus='0x84' slot='0x00' function='0x1'/>

    </iommuGroup>

  </capability>

</device>

 

#Unbind pci_0000_84_00_1 from the host's default driver ixgbe

[root@nova2 ~]# virsh nodedev-detach pci_0000_84_00_1

Device pci_0000_84_00_1 detached
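The binding can also be double-checked directly in sysfs if you prefer (a sketch):

[root@nova2 ~]# basename "$(readlink /sys/bus/pci/devices/0000:84:00.1/driver)"
vfio-pci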

 

#Confirm the unbound pci_0000_84_00_1 information again

[root@nova2 ~]# virsh nodedev-dumpxml pci_0000_84_00_1

<device>

  <name>pci_0000_84_00_1</name>

  <path>/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.1</path>

  <parent>pci_0000_80_03_0</parent>

  <driver>

    <name>vfio-pci</name>   # the driver is now vfio-pci (the passthrough driver used by libvirt)

  </driver>

  <capability type='pci'>

    <domain>0</domain>

    <bus>132</bus>

    <slot>0</slot>

    <function>1</function>

    <product id='0x1528'>Ethernet Controller 10-Gigabit X540-AT2</product>

    <vendor id='0x8086'>Intel Corporation</vendor>

    <iommuGroup number='34'>

      <address domain='0x0000' bus='0x84' slot='0x00' function='0x1'/>

    </iommuGroup>

  </capability>

</device>
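For completeness: if you later want to give the port back to the host, the reverse operation rebinds it to its original driver (a sketch):

[root@nova2 ~]# virsh nodedev-reattach pci_0000_84_00_1
Device pci_0000_84_00_1 re-attached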

 

5. Configure OpenStack to enable pci-passthrough:

#Nova controller

[root@osc ~]# vi /etc/nova/nova.conf

#

# Options defined in nova.pci.pci_request

#

 

# An alias for a PCI passthrough device requirement. This

# allows users to specify the alias in the extra_spec for a

# flavor, without needing to repeat all the PCI property

# requirements. For example: pci_alias = { "name":

# "QuicAssist",   "product_id": "0443",   "vendor_id": "8086",

# "device_type": "ACCEL" } defines an alias for the Intel

# QuickAssist card. (multi valued)

#pci_alias=

pci_alias={ "name":"X540NIC", "slabel":"dpdk-int"}

① "name"      field name; fixed

② "X540NIC"   the alias value; you can choose any name you like

③ "slabel"    I am not sure what this means; it appears to be a key you make up yourself

④ "dpdk-int"  I am not sure what this means; it appears to be a value you make up yourself
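For reference, a stock Kilo pci_alias usually identifies the device by the vendor and product IDs from section 3 rather than by a custom label; a minimal sketch (the "slabel" key above appears to be site-specific):

pci_alias={"name":"X540NIC", "vendor_id":"8086", "product_id":"1528"}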

 

#Nova compute

[root@osc ~]# vi /etc/nova/nova.conf

#

# Options defined in nova.pci.pci_whitelist

#

 

# White list of PCI devices available to VMs. For example:

# pci_passthrough_whitelist =  [{"vendor_id": "8086",

# "product_id": "0443"}] (multi valued)

#pci_passthrough_whitelist=

pci_passthrough_whitelist={"vendor_id":"8086", "product_id":"1528", "slabel":"dpdk-int"}

⑤ "vendor_id"    field name; fixed

⑥ "8086"         the vendor_id value, corresponding to ② in section 3

⑦ "product_id"   field name; fixed

⑧ "1528"         the product_id value, corresponding to ③ in section 3

⑨ "slabel"       I am not sure what this means, but it should match the controller configuration

⑩ "dpdk-int"     I am not sure what this means, but it should match the controller configuration
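For reference, a minimal whitelist entry in stock Kilo only needs the vendor and product IDs; if your Nova build supports address-based entries, you can also restrict it to a single port (a sketch):

# Match every 8086:1528 port on the host:
pci_passthrough_whitelist={"vendor_id":"8086", "product_id":"1528"}

# Or, if supported, match only port 0000:84:00.1:
pci_passthrough_whitelist={"address":"0000:84:00.1"}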

 

After making changes, restart the OpenStack controller and compute services.
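For example, on an RDO/RHEL-style Kilo install the relevant services can be restarted like this (a sketch; service names may differ on other distributions):

# On the controller:
[root@osc ~]# systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor

# On each compute node:
[root@nova2 ~]# systemctl restart openstack-nova-compute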

 

6. Create a flavor with pci-passthrough:

  Add metadata to the flavor with the key "pci_passthrough:alias". The value has two parts separated by a colon: the first is the alias name, matching ② in section 5, and the second is the number of devices to request, which you can set according to your needs. See the sketch below.
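With the python-novaclient CLI this can be done roughly as follows (a sketch; the flavor name m1.dpdk and its sizes are only examples):

[root@osc ~]# nova flavor-create m1.dpdk auto 4096 40 4
[root@osc ~]# nova flavor-key m1.dpdk set "pci_passthrough:alias"="X540NIC:1"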

 

7. Create a virtual machine and confirm that the pci-passthrough device is assigned to the virtual machine
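One way to verify, roughly (a sketch; the instance name, image name, and network UUID are only examples):

[root@osc ~]# nova boot --flavor m1.dpdk --image rhel7 --nic net-id=<private-net-uuid> vm-dpdk

# On the compute node that hosts the instance, the libvirt XML should now contain a <hostdev> element for the passed-through port:
[root@nova2 ~]# virsh dumpxml instance-0000000a | grep -A 5 hostdev

# Inside the guest, the X540 port shows up as an ordinary PCI NIC:
[root@vm-dpdk ~]# lspci -nn | grep 1528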

Copyright statement: This article is an original article by the blogger, welcome to reprint, but please be sure to indicate the author and source, thank you! http://blog.csdn.net/hangdongzhang/article/details/77745557
