Single Root I/O Virtualization (SR-IOV) – Part 2
vSphere 5.1 and later supports Single Root I/O Virtualization (SR-IOV). SR-IOV is a specification that allows a single Peripheral Component Interconnect Express (PCIe) physical device under a single root port to appear to be multiple separate physical devices to the hypervisor or the guest operating system.
SR-IOV uses physical functions (PFs) and virtual functions (VFs) to manage global functions for the SR-IOV devices. PFs are full PCIe functions that include the SR-IOV Extended Capability, which is used to configure and manage the SR-IOV functionality. It is possible to configure or control PCIe devices using PFs, and the PF has full ability to move data in and out of the device. VFs are lightweight PCIe functions that contain all the resources necessary for data movement but have a carefully minimized set of configuration resources.
SR-IOV-enabled PCIe devices present multiple instances of themselves to the guest OS instance and hypervisor. The number of virtual functions presented depends on the device. For SR-IOV-enabled PCIe devices to function, you must have the appropriate BIOS and hardware support, as well as SR-IOV support in the guest driver or hypervisor instance.
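Outside of vSphere, the PF/VF split is visible wherever the platform exposes a device's SR-IOV capability. As a rough illustration, on a Linux host the PCI subsystem publishes the advertised and configured VF counts through the standard `sriov_totalvfs` and `sriov_numvfs` sysfs attributes. The helper below is a sketch assuming those attributes; the device path and function name are hypothetical, not part of any VMware tooling:

```python
from pathlib import Path

def sriov_vf_counts(device_dir):
    """Return (total_vfs, configured_vfs) for a PF's sysfs directory.

    total_vfs is the maximum number of VFs the device advertises;
    configured_vfs is how many are currently enabled. Returns (0, 0)
    when the device does not expose the SR-IOV capability.
    """
    dev = Path(device_dir)
    total_file = dev / "sriov_totalvfs"
    num_file = dev / "sriov_numvfs"
    if not total_file.exists():
        return (0, 0)
    return (int(total_file.read_text()), int(num_file.read_text()))

# Hypothetical PF location; the actual PCI address varies per system.
# print(sriov_vf_counts("/sys/bus/pci/devices/0000:03:00.0"))
```

On Linux, VFs are enabled by writing the desired count to `sriov_numvfs`; on ESXi 5.1, VF configuration is instead handled through driver module parameters such as `max_vfs` for supported NIC drivers.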
Some vSphere features, however, are not functional when SR-IOV is enabled.
Supported Configurations
To use SR-IOV, your environment must meet several configuration requirements: the host BIOS, the physical NIC, and the guest driver must all support SR-IOV. To verify compatibility of physical hosts and NICs with ESXi releases, see the VMware Compatibility Guide.
Availability of Features
The following features are not available for virtual machines configured with SR-IOV:
■ Hot addition and removal of virtual devices, memory, and vCPU
Supported NICs
The following NICs are supported for virtual machines configured with SR-IOV. All NICs must have drivers and firmware that support SR-IOV. Some NICs might require SR-IOV to be enabled on the firmware.
■ Products based on the Intel 82599ES 10 Gigabit Ethernet Controller Family (Niantic)
■ Products based on the Intel Ethernet Controller X540 Family (Twinville)
■ Emulex NICs with SR-IOV support (see the VMware Compatibility Guide)
Upgrading from earlier versions of vSphere
If you upgrade from vSphere 5.0 or earlier to vSphere 5.1 or later, SR-IOV support is not available until you update the NIC drivers for the new vSphere release. NICs must have firmware and drivers with SR-IOV support enabled for SR-IOV functionality to operate.
vSphere 5.1 and Virtual Function Interaction
Virtual functions (VFs) are lightweight PCIe functions that contain all the resources necessary for data movement but have a carefully minimized set of configuration resources. There are some restrictions in the interactions between vSphere 5.1 and VFs.
■ When a physical NIC creates VFs for SR-IOV to use, the physical NIC becomes a hidden uplink and cannot be used as a normal uplink. This means it cannot be added to a standard or distributed switch.
■ There is no rate control for VFs in vSphere 5.1. Every VF could potentially use the entire bandwidth of a physical link.
■ When a VF device is configured as a passthrough device on a virtual machine, the standby and hibernate functions for the virtual machine are not supported.
■ Because of the limited number of interrupt vectors available for passthrough devices, only a limited number of VFs is supported on a vSphere ESXi host. vSphere 5.1 SR-IOV supports up to 41 VFs on supported Intel NICs and up to 64 VFs on supported Emulex NICs. The actual number of VFs supported depends on your system configuration. For example, if both Intel and Emulex NICs are present with SR-IOV enabled, the number of VFs available for the Intel NICs depends on how many VFs are configured for the Emulex NICs, and the reverse. You can use the following formula to roughly estimate the number of VFs available for use: 3X + 2Y < 128, where X is the number of Intel VFs and Y is the number of Emulex VFs.
■ If a supported Intel NIC loses connection, all VFs from the same physical NIC stop communication, including communication between VFs.
■ If a supported Emulex NIC loses connection, all VFs stop communicating with the external environment, but communication between VFs still functions.
■ VF drivers offer many different features, such as IPv6 support, TSO, LRO, and checksum offload. See your vendor's documentation for further details.
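The vector-budget formula above can be checked mechanically. The helpers below are an illustrative sketch, not a VMware-provided tool; the function names are hypothetical, and they simply apply 3X + 2Y < 128 together with the 41-VF Intel cap mentioned above:

```python
def vf_config_fits(intel_vfs, emulex_vfs):
    """True if the VF counts satisfy the budget 3X + 2Y < 128."""
    return 3 * intel_vfs + 2 * emulex_vfs < 128

def max_intel_vfs(emulex_vfs):
    """Largest Intel VF count that still satisfies the formula,
    capped at the 41-VF limit for supported Intel NICs."""
    # 3X + 2Y < 128  =>  X <= floor((127 - 2Y) / 3)
    remaining = (127 - 2 * emulex_vfs) // 3
    return max(0, min(41, remaining))
```

For example, `max_intel_vfs(0)` returns 41, matching the stated per-platform Intel limit, while configuring more Emulex VFs shrinks the Intel budget accordingly.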
DirectPath I/O vs SR-IOV
SR-IOV offers performance benefits and tradeoffs similar to those of DirectPath I/O. DirectPath I/O and SR-IOV have similar functionality, but you use them to accomplish different things.
SR-IOV is beneficial in workloads with very high packet rates or very low latency requirements. Like DirectPath I/O, SR-IOV is not compatible with certain core virtualization features, such as vMotion. SR-IOV does, however, allow for a single physical device to be shared amongst multiple guests.
With DirectPath I/O, you can map only one physical function to one virtual machine. SR-IOV lets you share a single physical device, allowing multiple virtual machines to connect directly to the physical function.
This functionality allows you to virtualize low-latency (less than 50 microseconds) and high packet rate (greater than 50,000 packets per second) workloads, such as network appliances or purpose-built solutions, in vSphere.
References:
- ESXi and vCenter Server 5.1 Documentation
- http://blog.scottlowe.org/2012/03/19/why-sr-iov-on-vsphere/