Use Pktgen-DPDK to send traffic between two OpenStack VMs

As part of my new OpenShift-flavoured responsibilities, I’ve been trying to get DPDK-based applications running successfully on OpenShift-on-OpenStack. A pre-requisite step to this was getting DPDK configured on the host. With this done, I figured I’d investigate using an actual DPDK application in the guests. I’d done this before, though it’s been a while, and I did eventually get there. There were a few wrong turns along the way though, so I’ve documented the steps I finally settled on here in case they’re helpful to anyone else.

Cloud configuration

We’re going to be using OpenStack to provision our guests, so we obviously need an OpenStack cloud to point to. Given that we’re using DPDK in the guests, we also need to ensure that OVS-DPDK is used on the host(s). You can use a pre-existing cloud if you’re sure it provides the latter, but I decided to use dev-install along with a modified version of the sample OVS-DPDK configuration file to install a small, TripleO Standalone-based OpenStack deployment on a borrowed server.
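
If you want to go the dev-install route too, the flow is roughly the following: clone it, point it at your server, drop the (modified) sample OVS-DPDK settings into local-overrides.yaml, and kick off the install. The make targets here are from my memory of the dev-install README and the hostname is a placeholder, so double-check both, and the location of the sample OVS-DPDK configuration, against the project itself:

$ git clone https://github.com/shiftstack/dev-install
$ cd dev-install
$ make config host=my-borrowed-server.example.com
$ make osp_full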

Once you have your cloud selected, you need to ensure you have appropriate networks and a suitable image and flavor. dev-install provides most of these for you, but you can create them manually using the commands below.

First, the image. We’re going to use CentOS 8 Stream:

$ wget https://cloud.centos.org/centos/8-stream/x86_64/images/CentOS-Stream-GenericCloud-8-20210210.0.x86_64.qcow2
$ openstack image create \
    --disk-format qcow2 \
    --public \
    --file CentOS-Stream-GenericCloud-8-20210210.0.x86_64.qcow2 \
    centos8-stream

Next, the flavor. This is a pretty standard flavor except that we need to use pinned CPUs and enable hugepages:

$ openstack flavor create \
    --vcpu 4 \
    --ram 8192 \
    --property hw:cpu_policy='dedicated' \
    --property hw:mem_page_size='large' \
    m1.large.nfv

You might also need to create a suitable network. dev-install configures a hostonly provider network, but if you’re using another method you’ll need to create your own provider network. Doing so is largely an exercise left to the reader, though there’s a rough sketch after the commands below. Once that’s done, we need to create a tenant network that we’ll use to configure additional ports. Create this, along with a router:

$ openstack network create internal_net
$ openstack subnet create \
    --network internal_net \
    --subnet-range 192.168.200.0/24 \
    internal_subnet
$ openstack router create router_a
$ openstack router set --external-gateway hostonly router_a
$ openstack router add subnet router_a internal_subnet
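
As for the provider network itself, on a TripleO-based cloud creating a flat external network might look something like the following. The physical network label (datacentre) and the subnet range are assumptions based on typical TripleO defaults, so adjust them, and add an allocation pool and gateway, to match your environment:

$ openstack network create \
    --external \
    --provider-network-type flat \
    --provider-physical-network datacentre \
    hostonly
$ openstack subnet create \
    --network hostonly \
    --no-dhcp \
    --subnet-range 192.168.25.0/24 \
    hostonly-subnet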

Finally, you need some way to access the guest. Create and upload a keypair:

$ ssh-keygen
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub my-key

Initial server setup

We’re going to create two servers: guest-tx and guest-rx. These are effectively identical save for the configuration we’ll eventually pass to Pktgen-DPDK. Create the servers:

$ openstack server create \
    --image centos8-stream \
    --flavor m1.large.nfv \
    --network internal_net \
    --key-name my-key \
    --wait \
    guest-tx
$ openstack server create \
    --image centos8-stream \
    --flavor m1.large.nfv \
    --network internal_net \
    --key-name my-key \
    --wait \
    guest-rx

Create two floating IPs and attach them to the servers so we can SSH into the machines:

$ tx_fip=$(openstack floating ip create hostonly-dpdk -f value -c name | tr -d '\n')
$ rx_fip=$(openstack floating ip create hostonly-dpdk -f value -c name | tr -d '\n')
$ openstack server add floating ip guest-tx $tx_fip
$ openstack server add floating ip guest-rx $rx_fip

You’ll also need to add security groups to enable SSH and potentially ICMP (for ping) access. If you deployed with dev-install like I did, then these will already be present. If you use another mechanism then you can create these security groups like so:

$ openstack security group create --description allow_ssh allow_ssh
$ openstack security group rule create --protocol tcp --dst-port 22 allow_ssh
$ openstack security group create --description allow_ping allow_ping
$ openstack security group rule create --protocol icmp allow_ping

You can then add these security groups to the servers:

$ openstack server add security group guest-tx allow_ssh
$ openstack server add security group guest-tx allow_ping
$ openstack server add security group guest-rx allow_ssh
$ openstack server add security group guest-rx allow_ping

You should now be able to SSH into the instances:

$ openstack server ssh --login centos guest-tx

Attach DPDK interfaces

With the initial server configuration out of the way, we can move on to configuring the interfaces we will use for our DPDK application. It’s necessary to use secondary interfaces since we’re going to be binding them to the vfio-pci driver. The second we do this, the devices will no longer be usable by the kernel network stack, which means, among other things, that we can’t SSH over them. With that said, there’s nothing special about these interfaces: every interface attached to the instance will be a vhostuser interface. We can confirm this by inspecting the single interface currently attached to the instance, from the compute host:

$ sudo podman exec -it nova_libvirt virsh dumpxml 14 | xmllint --xpath '/domain/devices/interface' -

This will yield something like the following:

<interface type="vhostuser">
  <mac address="fa:16:3e:97:3d:11"/>
  <source type="unix" path="/var/lib/vhost_sockets/vhub20e827c-4a" mode="server"/>
  <target dev="vhub20e827c-4a"/>
  <model type="virtio"/>
  <driver rx_queue_size="1024" tx_queue_size="1024"/>
  <alias name="net0"/>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0"/>
</interface>
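
As an aside, the libvirt domain ID used above (14) is specific to my deployment; one way to find yours is to list the domains on the compute host and match them against the instance name reported by openstack server show, for example:

$ sudo podman exec -it nova_libvirt virsh list --all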

With that bit out of the way, we can now add our additional network interface to each instance:

$ openstack port create --network internal_net dpdk-port-tx
$ openstack port create --network internal_net dpdk-port-rx

We’ll save the IP and MAC addresses for these interfaces, since they will be useful later:

$ tx_dpdk_ip=$(openstack port show -f json dpdk-port-tx | jq -r '.fixed_ips[0].ip_address')
$ rx_dpdk_ip=$(openstack port show -f json dpdk-port-rx | jq -r '.fixed_ips[0].ip_address')
$ tx_dpdk_mac=$(openstack port show -f json dpdk-port-tx | jq -r '.mac_address')
$ rx_dpdk_mac=$(openstack port show -f json dpdk-port-rx | jq -r '.mac_address')

Now add them to the servers:

$ openstack server add port guest-tx dpdk-port-tx
$ openstack server add port guest-rx dpdk-port-rx

You can verify that these new interfaces are also of type vhostuser, if you like.
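
For example, repeating the earlier check on the compute host but selecting the second interface this time (again, the domain ID is specific to your deployment):

$ sudo podman exec -it nova_libvirt virsh dumpxml 14 | xmllint --xpath '/domain/devices/interface[2]' -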

Compile Pktgen-DPDK

We now have our servers effectively configured from an “infrastructure” perspective. Going forward, everything will be done inside the guests. The first of these steps is to compile Pktgen-DPDK. This is necessary because, while DPDK itself is packaged for CentOS, Pktgen-DPDK is not. We’re going to demonstrate the steps for one instance; you should then repeat them on the second instance.

First, let’s SSH into the machine:

$ openstack server ssh guest-tx --login centos

Install the dependencies. These are numerous:

$ sudo dnf config-manager --set-enabled powertools
$ sudo dnf groupinstall -y 'Development Tools'
$ sudo dnf install -y numactl-devel libpcap-devel meson driverctl
$ python3 -m pip install --user pyelftools  # avoid requiring EPEL

Now, let’s clone and build DPDK. We could use the dpdk and dpdk-devel packages provided by CentOS, but these are pretty old. If you decide to go that route, don’t forget to check out a tag in the Pktgen-DPDK repo corresponding to the DPDK release in your environment.
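
For reference, installing the packaged version would just be a dnf install away; I didn’t go this route, so treat it as an untested aside:

$ sudo dnf install -y dpdk dpdk-devel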

You can build DPDK like so:

$ git clone https://github.com/dpdk/dpdk
$ cd dpdk
$ git checkout v21.11-rc3  # use v21.11 if it's been released
$ meson build
$ ninja -C build
$ sudo ninja -C build install

With DPDK built, let’s do the same for Pktgen-DPDK:

$ git clone https://github.com/pktgen/Pktgen-DPDK
$ cd Pktgen-DPDK
$ git checkout pktgen-21.11.0
$ PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig meson build
$ ninja -C build
$ sudo ninja -C build install

Finally, restart the instance to propagate all these changes:

$ sudo reboot

Configure environment

Now that our applications are ready to go, we need to configure our environment. There are two parts to this: enabling hugepages and binding our chosen interfaces to the vfio-pci driver. Once again, these steps should be done on both instances.

We’re going to use root for most of these commands since it’s easier than working with pipes and sudo. Become root and configure the hugepages:

$ sudo su
# echo 1024 > /proc/sys/vm/nr_hugepages
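
Note that this setting won’t survive a reboot. If you want it to persist, one option (an aside on my part rather than something I did here) is to drop it into sysctl configuration:

# echo 'vm.nr_hugepages = 1024' > /etc/sysctl.d/80-hugepages.conf
# sysctl -p /etc/sysctl.d/80-hugepages.conf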

Now for the trickier job of rebinding our interfaces to use the vfio-pci driver. We’ve got two interfaces in each instance: the primary interface that we’re SSHing through, and the secondary interface that we’re going to use for our DPDK application. We can verify this using a combination of ip and lspci:

# ip link
# lspci | grep Ethernet

This should return something akin to the following:

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:97:3d:11 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:5b:af:eb brd ff:ff:ff:ff:ff:ff
# lspci | grep Ethernet
00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device
00:07.0 Ethernet controller: Red Hat, Inc. Virtio network device
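
Rather than inferring the mapping from the ordering, you can also ask ethtool which PCI address backs a given interface. Assuming eth1 is the secondary interface we just attached, it should report the same address we settle on below:

# ethtool -i eth1 | grep bus-info
bus-info: 0000:00:07.0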

In this instance, our device has been assigned the address 00:07.0. We can inspect the driver currently in use using lspci:

# lspci -s 00:07.0 -v
00:07.0 Ethernet controller: Red Hat, Inc. Virtio network device
        Subsystem: Red Hat, Inc. Device 0001
        Physical Slot: 7
        Flags: bus master, fast devsel, latency 0, IRQ 10
        I/O ports at 1000 [size=32]
        Memory at c0040000 (32-bit, non-prefetchable) [size=4K]
        Memory at 240000000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at c0000000 [virtual] [disabled] [size=256K]
        Capabilities: [98] MSI-X: Enable+ Count=3 Masked-
        Capabilities: [84] Vendor Specific Information: VirtIO: <unknown>
        Capabilities: [70] Vendor Specific Information: VirtIO: Notify
        Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg
        Capabilities: [50] Vendor Specific Information: VirtIO: ISR
        Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg
        Kernel driver in use: virtio-pci

We’re using virtio-pci, but we need to be using vfio-pci. Let’s load that driver and then rebind the interface to it using the driverctl utility we installed earlier:

# modprobe vfio enable_unsafe_noiommu_mode=1
# cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
Y
# modprobe vfio-pci
# driverctl -v set-override 0000:00:07.0 vfio-pci

If you inspect the device again, you should see that it’s now bound to the vfio-pci driver.
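
For example, repeating the earlier lspci command:

# lspci -s 00:07.0 -v | grep 'Kernel driver in use'
        Kernel driver in use: vfio-pci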

Run applications

The final step: running the applications we built and installed earlier. The pktgen application requires a small bit of configuration. Fortunately, we already have everything we need. Back on the host, run the following, which will create a configuration file using the MAC and IP address information we stored earlier:

$ cat << EOF > pktgen.pkt
stop 0
set 0 rate 0.1
set 0 ttl 10
set 0 proto udp
set 0 dport 8000
set 0 src mac ${tx_dpdk_mac}
set 0 dst mac ${rx_dpdk_mac}
set 0 src ip ${tx_dpdk_ip}/32
set 0 dst ip ${rx_dpdk_ip}
set 0 size 64
EOF

You can now copy this to the guest-tx instance:

$ scp pktgen.pkt centos@${tx_fip}:/home/centos

On the guest-rx instance, simply run:

LD_LIBRARY_PATH=/usr/local/lib64 /usr/local/bin/pktgen -l 1-3 -n 2 -- -P -m '2.0' -T

Then on the guest-tx instance, run:

LD_LIBRARY_PATH=/usr/local/lib64 /usr/local/bin/pktgen -l 1-3 -n 2 -- -P -m '2.0' -T -f pktgen.pkt

In the interactive prompt that appears, type start. You should see packets start flowing and being received by the pktgen instance running on guest-rx. You can stop the traffic by typing stop, or exit pktgen entirely by typing quit.
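
From memory, the pktgen console session looks roughly like this; start and stop also accept an explicit port list (a port number or all) if a bare start doesn’t do what you expect:

Pktgen:/> start all
Pktgen:/> stop all
Pktgen:/> quit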
