Direct vs Hostdev Interfaces in Nova


Mostly a note for myself. There are two types of SR-IOV’y networks supported in nova: direct and hostdev. Confusingly, the latter corresponds to passthrough of the virtual function (VF), while the former corresponds to macvtap. The difference between these is described rather succinctly in an Oracle whitepaper titled “Installing and Configuring KVM on Bare Metal Instances with Multi-VNIC”.


First, the hostdev VIF type:

The hostdev method is preferred for both performance and guest isolation reasons. It provides the guest with direct access to the PCI device, created as part of the configuration of SR-IOV on the hypervisor. This PCI device is known as a virtual function (VF) and represents an actual interface into the hardware of the hypervisor (bare metal instance). This allows the guest to have both maximum throughput and maximum isolation:

  • Maximum throughput because there is no operating system between the guest and the network

  • Maximum isolation because the hypervisor operating system is not involved beyond providing the hardware interface (the overhead is minimal)

The disadvantage of the hostdev method is that it isn’t possible to emulate a different device type. So, the guest operating system must have a driver available that matches the hardware type provided by the hypervisor.

As a user, you are likely to encounter the driver issues outlined above when using something like the CirrOS image deployed by DevStack, which ships only a small set of drivers.
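For reference, a hostdev interface in the libvirt domain XML looks roughly like the sketch below. The MAC address, PCI address of the VF, and VLAN tag are made-up examples; the actual values are filled in by nova and neutron at boot time:

```xml
<!-- Sketch of a hostdev VIF: the guest is given the VF itself,
     so no device model is emulated by QEMU. All values below are
     illustrative, not taken from a real deployment. -->
<interface type='hostdev' managed='yes'>
  <mac address='fa:16:3e:aa:bb:cc'/>
  <source>
    <!-- PCI address of the virtual function on the host -->
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x2'/>
  </source>
  <vlan>
    <tag id='100'/>
  </vlan>
</interface>
```

Note the absence of a `<model>` element: because the VF hardware is passed through as-is, the guest must carry a driver for that exact NIC, which is the disadvantage described above.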


Then the direct VIF type which, again, is not really “direct”:

The direct method relies on hypervisor-configured network interfaces to provide connectivity to the guest operating systems. However, the network configuration provided by the hypervisor is minimal: the guest operating system still issues all the DHCP and related higher-level networking management, while the hypervisor simply provides an interface for the guest to operate on.

The direct method allows KVM to natively emulate some common network interface types that are typically found in most current and legacy operating systems. The following emulations have been observed to work: the e1000 (Intel FastEthernet driver) and the virtio (KVM native) device types, although the virtio driver might still require you to inject a driver into a Windows operating system. This is useful for prepackaged, virtual machines because their configurations are typically static and are looking for specific hardware types.
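Again for reference, a direct (macvtap) interface in the libvirt domain XML looks roughly like this sketch. The MAC address and the source device name (the VF's netdev on the host) are made-up examples:

```xml
<!-- Sketch of a direct VIF: a macvtap device is stacked on top of
     the VF's network device on the host, and QEMU emulates a
     device model for the guest. Illustrative values only. -->
<interface type='direct'>
  <mac address='fa:16:3e:dd:ee:ff'/>
  <!-- 'dev' names the VF's netdev as seen on the hypervisor -->
  <source dev='enp5s0f1' mode='passthrough'/>
  <!-- here the emulated model can be virtio, e1000, etc. -->
  <model type='virtio'/>
</interface>
```

The `<model>` element is what makes this mode friendlier to prepackaged images: the guest sees a common emulated device rather than the physical NIC.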
