Recent versions of OpenStack nova have added support for real-time instances, that is, instances that provide the determinism and performance guarantees required by real-time applications. While this work was finally marked complete in the OpenStack Ocata release, it builds upon many features added in previous releases.
This guide covers a basic, single-node deployment of OpenStack suitable for evaluating basic real-time instance functionality. We use CentOS 7, but the same instructions can be adapted for RHEL 7 or Fedora, and any CentOS-specific aspects are called out. Also note that we're using DevStack: you obviously shouldn't be using this in production (I hear Red Hat OpenStack Platform is pretty swell!).
## Host BIOS configuration
Configure your BIOS as recommended in the rt-wiki page. The most important steps are:
- Disable power management, including CPU sleep states
- Disable hyper-threading or any option related to logical processors
These are standard steps used in benchmarking as both sets of features can result in non-deterministic behavior.
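If you want to sanity-check from the OS that hyper-threading really is off, `lscpu -p=CPU,CORE` maps logical CPUs to physical cores: with SMT disabled, no core id repeats. The helper below is a sketch for illustration only; `smt_enabled` is a made-up name and the sample data is fabricated.

```shell
# Illustrative helper: read `lscpu -p=CPU,CORE` lines on stdin and report
# whether any physical core id appears more than once (i.e. SMT is on).
smt_enabled() {
    grep -v '^#' | cut -d, -f2 | sort | uniq -d | grep -q . && echo yes || echo no
}

# Fabricated sample data; on a real host use: lscpu -p=CPU,CORE | smt_enabled
printf '0,0\n1,1\n2,2\n3,3\n' | smt_enabled   # one CPU per core -> no
printf '0,0\n1,0\n2,1\n3,1\n' | smt_enabled   # two CPUs per core -> yes
```

With hyper-threading disabled in the BIOS, the real `lscpu` output should produce `no` here.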
## Host OS configuration
- **Download and install CentOS 7.**
- **Log in as `root`.**

  Most of the following steps require root privileges. While you can do this with `sudo`, it's generally easier to log in as the `root` user. Do this now.

  ```shell
  $ su -
  ```
- **Enable the `rt` repo.**

  ```shell
  $ cat << 'EOF' > /etc/yum.repos.d/CentOS-RT.repo
  # CentOS-RT.repo
  #
  # The Real Time (RT) repository.
  #
  [rt]
  name=CentOS-$releasever - rt
  baseurl=http://mirror.centos.org/centos/$releasever/rt/$basearch/
  gpgcheck=1
  gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
  enabled=1
  EOF
  $ yum update -y
  ```

  Note the quoted `'EOF'`: the repo file must contain the literal `$releasever` and `$basearch` variables, so they must not be expanded by the shell.
  Most online guides will point you to a CERN repo for these packages. I had no success with this as some packages were missing. However, the steps to do this are below, just in case they're helpful.

  ```shell
  $ wget http://linuxsoft.cern.ch/cern/centos/7/rt/CentOS-RT.repo -O /etc/yum.repos.d/CentOS-RT.repo
  $ wget http://linuxsoft.cern.ch/cern/centos/7/os/x86_64/RPM-GPG-KEY-cern -O /etc/pki/rpm-gpg/RPM-GPG-KEY-cern
  $ yum groupinstall RT
  ```
- **Install dependencies.**

  The most critical of these are `kernel-rt` and `kernel-rt-kvm`, but these have dependencies of their own. When I was installing this, there was a conflict between the `tuned` version installed by default (`@anaconda`) and the one provided by the `rt` repo. To resolve this, I simply removed the conflicting version and installed the one provided by the `rt` repo.

  ```shell
  $ yum remove tuned
  $ yum install -y tuned-2.7.1-5.el7
  ```
  After this, install the aforementioned dependencies along with some required by CentOS specifically.

  ```shell
  $ yum install -y centos-release-qemu-ev
  $ yum install -y tuned-profiles-realtime tuned-profiles-nfv
  $ yum install -y kernel-rt.x86_64 kernel-rt-kvm.x86_64
  ```
- **Configure the realtime profile.**

  We want to isolate some cores from the kernel, and will use the `tuned` application with the profiles installed above to do this.

  First, dump info about your NUMA topology.

  ```shell
  $ lscpu | grep ^NUMA
  NUMA node(s):          2
  NUMA node0 CPU(s):     0,2,4,6,8,10
  NUMA node1 CPU(s):     1,3,5,7,9,11
  ```
  This processor, an Intel Xeon E5-2609 v3, has six cores, and this machine has two of them. We want to isolate some of these cores. CPU0 should be excluded from the candidates, as it handles console interrupts, while a second core should be kept free for other host overhead processes. Let's take a highly scientific approach and isolate four of the six cores from each socket, because why not?

  ```shell
  $ echo "isolated_cores=4-11" >> /etc/tuned/realtime-virtual-host-variables.conf
  ```
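  The reasoning above can be sketched as a tiny helper: take each node's CPU list from `lscpu`, keep the first two CPUs for housekeeping, and isolate the rest. `pick_isolated` is a made-up name for illustration, not part of `tuned`.

  ```shell
  # Illustrative helper: given one NUMA node's CPU list, drop the first two
  # CPUs (left for the kernel and host overhead) and print the remainder,
  # i.e. the CPUs to list in isolated_cores.
  pick_isolated() {
      echo "$1" | tr ',' '\n' | tail -n +3 | paste -sd, -
  }

  pick_isolated "0,2,4,6,8,10"   # node0 -> 4,6,8,10
  pick_isolated "1,3,5,7,9,11"   # node1 -> 5,7,9,11
  ```

  Taken together, that yields CPUs 4-11, matching the `isolated_cores=4-11` value above.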
- **Load the realtime profile.**

  ```shell
  $ systemctl enable tuned
  $ systemctl start tuned
  $ tuned-adm profile realtime-virtual-host
  ```

  You should confirm that the profile has been applied.

  ```shell
  $ grep tuned_params= /boot/grub2/grub.cfg
  set tuned_params="isolcpus=4-11 nohz=on nohz_full=4-11 intel_pstate=disable nosoftlockup"
  ```
- **Configure huge pages.**

  First, add the following to `GRUB_CMDLINE_LINUX` in `/etc/default/grub`.

  ```
  default_hugepagesz=1G
  ```

  Save this configuration.

  ```shell
  $ grub2-mkconfig -o /boot/grub2/grub.cfg
  Generating grub configuration file ...
  Found linux image: /boot/vmlinuz-3.10.0-327.13.1.el7.x86_64
  done
  ```

  Because we're using a number of CPUs from each NUMA node, we want to assign a number of hugepages to each node. We're going to assign four per node.

  ```shell
  $ echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
  $ echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
  ```
  We want to make this persistent. While you can configure persistent hugepages via the `GRUB_CMDLINE_LINUX` option, you cannot do this on a per-NUMA-node basis. We're going to use our own `systemd` unit files to solve this problem until such a time as bug #1232350 is resolved. This solution is taken from that bug. (The heredocs use quoted `'EOF'` so that `$nodes_path` and the positional parameters end up in the script literally.)

  ```shell
  $ cat << 'EOF' > /usr/lib/systemd/system/hugetlb-gigantic-pages.service
  [Unit]
  Description=HugeTLB Gigantic Pages Reservation
  DefaultDependencies=no
  Before=dev-hugepages.mount
  ConditionPathExists=/sys/devices/system/node
  ConditionKernelCommandLine=hugepagesz=1G

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/usr/lib/systemd/hugetlb-reserve-pages

  [Install]
  WantedBy=sysinit.target
  EOF
  $ cat << 'EOF' > /usr/lib/systemd/hugetlb-reserve-pages
  #!/bin/bash
  nodes_path=/sys/devices/system/node/
  if [ ! -d $nodes_path ]; then
      echo "ERROR: $nodes_path does not exist"
      exit 1
  fi

  reserve_pages() {
      echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
  }

  reserve_pages 4 node0
  reserve_pages 4 node1
  EOF
  $ chmod +x /usr/lib/systemd/hugetlb-reserve-pages
  $ systemctl enable hugetlb-gigantic-pages
  ```
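  Before rebooting, you can sanity-check the reservation logic against a throwaway directory that mimics the sysfs layout. This is purely illustrative and touches nothing under `/sys`.

  ```shell
  # Exercise the reserve_pages logic from hugetlb-reserve-pages against a
  # fake sysfs tree, so a typo doesn't go unnoticed until after a reboot.
  fake=$(mktemp -d)
  mkdir -p "$fake/node0/hugepages/hugepages-1048576kB" \
           "$fake/node1/hugepages/hugepages-1048576kB"

  nodes_path=$fake
  reserve_pages() {
      echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
  }

  reserve_pages 4 node0
  reserve_pages 4 node1

  cat "$fake/node0/hugepages/hugepages-1048576kB/nr_hugepages"   # -> 4
  ```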
- **Reboot the host to apply changes.**
- **Verify that changes have been applied.**

  You want to ensure the `tuned` profile is loaded and the changes it has made have taken effect, such as adding `isolcpus` and related parameters to the boot command. In addition, you want to make sure your own hugepage configuration has been applied.

  ```shell
  $ tuned-adm active
  Current active profile: realtime-virtual-host
  $ cat /proc/cmdline
  BOOT_IMAGE=/vmlinuz-3.10.0-327.18.2.rt56.223.el7_2.x86_64 root=/dev/mapper/rhel_virtlab502-root ro crashkernel=auto rd.lvm.lv=rhel_virtlab502/root rd.lvm.lv=rhel_virtlab502/swap console=ttyS1,115200 default_hugepagesz=1G isolcpus=4-11 nohz=on nohz_full=4-11 intel_pstate=disable nosoftlockup
  $ cat /sys/module/kvm/parameters/lapic_timer_advance_ns
  1000  # this should be a non-0 value
  $ cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
  4
  $ cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
  4
  ```
- **Verify that system interrupts are disabled.**

  You should install the `rt-tests` package, then run the `hwlatdetect` utility it provides to validate correct behavior.

  ```shell
  $ yum install -y rt-tests
  $ hwlatdetect
  hwlatdetect:  test duration 120 seconds
     parameters:
          Latency threshold: 10us
          Sample window:     1000000us
          Sample width:      500000us
          Non-sampling period:  500000us
          Output File:       None

  Starting test
  test finished
  Max Latency: 0us
  Samples recorded: 0
  Samples exceeding threshold: 0
  ```

  If this shows any samples exceeding the threshold, something is wrong and you should retrace your steps.
- **Verify "real-time readiness".**

  The `rteval` utility can be used to evaluate system suitability for RT Linux. It must be run for a long duration, so you should set it running and come back to it later.

  ```shell
  $ yum install rteval
  $ rteval --onlyload --duration=4h --verbose
  ```
## Guest image configuration

We're going to need a real-time image for the guest too. I did this manually on another machine using `virt-install`. Much of the configuration is duplicated from the host.
- **Boot the guest and configure it as the `root` user.**

  We don't actually care about most of the configuration here with regard to RAM and CPU count, since we'll be changing this later. The only things to note are that we're using the same OS as the host (CentOS) for ease of use, and that we have both network connectivity (so we can install packages) and a serial connection (so we can interact with the guest).

  ```shell
  $ sudo virt-install \
      --name centos7 \
      --ram 4096 \
      --disk path=./centos7.qcow2,size=8 \
      --vcpus 4 \
      --os-type linux \
      --os-variant centos7.0 \
      --network bridge=virbr0 \
      --graphics none \
      --console pty,target_type=serial \
      --location 'http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso' \
      --extra-args 'console=ttyS0,115200n8 serial'
  # ... follow prompts
  ```
- **Enable the `rt` repo.**

  ```shell
  $ cat << 'EOF' > /etc/yum.repos.d/CentOS-RT.repo
  # CentOS-RT.repo
  #
  # The Real Time (RT) repository.
  #
  [rt]
  name=CentOS-$releasever - rt
  baseurl=http://mirror.centos.org/centos/$releasever/rt/$basearch/
  gpgcheck=1
  gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
  enabled=1
  EOF
  $ yum update -y
  ```

  The quoted `'EOF'` keeps the literal `$releasever` and `$basearch` variables from being expanded by the shell.
- **Install dependencies.**

  We naturally don't need the `kernel-rt-kvm` module, but we do need the `kernel-rt` package and some other dependencies. Seeing as we're using CentOS for the guest too, we have to deal with the same `tuned` dependency conflict.

  ```shell
  $ yum remove tuned
  $ yum install -y tuned-2.7.1-5.el7
  ```

  After this, install the aforementioned dependencies along with some required by CentOS specifically.

  ```shell
  $ yum install -y centos-release-qemu-ev
  $ yum install -y tuned-profiles-realtime tuned-profiles-nfv
  $ yum install -y kernel-rt.x86_64
  ```
- **Configure the realtime profile.**

  Configure the `tuned` profile to isolate the two CPUs we will reserve for real-time in the flavor (i.e. `^0-1`, so CPUs `2` and `3`).

  ```shell
  $ echo "isolated_cores=2,3" >> /etc/tuned/realtime-virtual-guest-variables.conf
  ```
- **Load the realtime profile.**

  ```shell
  $ systemctl enable tuned
  $ systemctl start tuned
  $ tuned-adm profile realtime-virtual-guest
  ```

  Note that we're using the guest profile here, not the host one.

  You should confirm that the profile has been applied.

  ```shell
  $ grep tuned_params= /boot/grub2/grub.cfg
  set tuned_params="isolcpus=2,3 nohz=on nohz_full=2,3 rcu_nocbs=2,3 intel_pstate=disable nosoftlockup"
  ```
- **Configure hugepages.**

  First, add the following to `GRUB_CMDLINE_LINUX` in `/etc/default/grub`.

  ```
  default_hugepagesz=1G
  ```

  Save this configuration.

  ```shell
  $ grub2-mkconfig -o /boot/grub2/grub.cfg
  Generating grub configuration file ...
  Found linux image: /boot/vmlinuz-3.10.0-327.13.1.el7.x86_64
  done
  ```

  We don't need to reserve any hugepages in the guest, as this will be done from the OpenStack side.
- **Install testing dependencies.**

  We're going to be doing some testing later, so it's best to install these dependencies now.

  ```shell
  $ yum install -y epel-release
  $ yum install -y rt-tests stress
  ```
- **Reboot the guest to apply changes.**
- **Verify the changes have been applied.**

  Once again, you want to ensure the `tuned` profile is loaded and applied, and that the hugepages have been configured.

  ```shell
  $ tuned-adm active
  Current active profile: realtime-virtual-guest
  $ uname -a
  Linux guest.localdomain 3.10.0-693.2.2.rt56.623.el7.x86_64 #1 SMP PREEMPT RT Sun Jan 01 00:00:00 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  $ cat /proc/cmdline
  BOOT_IMAGE=/vmlinuz-3.10.0-693.2.2.rt56.623.el7.x86_64 root=/dev/mapper/centos-root ro rd.lvm.lv=centos/root rd.lvm.lv=centos/swap console=ttyS0,115200n8 default_hugepagesz=1G isolcpus=2,3 nohz=on nohz_full=2,3 rcu_nocbs=2,3 intel_pstate=disable nosoftlockup
  ```
- **Install OpenStack-specific dependencies.**

  We want to use `cloud-init` to configure things in OpenStack, so let's install the various dependencies required. This is taken from the OpenStack docs.

  ```shell
  $ yum install -y acpid
  $ systemctl enable acpid
  $ yum install -y cloud-init cloud-utils-growpart
  $ echo "NOZEROCONF=yes" >> /etc/sysconfig/network
  ```

  We don't need to configure a console interface as `virt-install` has already done this for us.

  Once this is done, you can shut down the guest.

  ```shell
  $ poweroff
  ```
- **Clean up the image.**

  We want to strip things like MAC addresses from the guest. This should be done wherever you ran `virt-install`.

  ```shell
  $ sudo virt-sysprep -d centos7
  ```

  If this is successful, you can undefine the domain and shrink the image. It's now ready for use later.

  ```shell
  $ sudo virsh undefine centos7
  $ sudo qemu-img convert -O qcow2 -c centos7.qcow2 centos7-small.qcow2
  ```
## Nova configuration
- **Log back in as your standard user.**

  We no longer need to run as root, and DevStack, which I'm using here, will refuse to run this way.
- **Install and configure OpenStack.**

  I used DevStack for this, though you can use anything you want. This relies on features first included in the Pike release, so you should deploy a suitable version. Given that I'm using DevStack, I'm simply going to use the `stable/pike` variant of DevStack and all dependencies.

  ```shell
  $ git clone https://github.com/openstack-dev/devstack/
  $ cd devstack
  $ git checkout stable/pike
  $ cat << 'EOF' > local.conf
  [[local|localrc]]
  GLANCE_V1_ENABLED=False

  CINDER_BRANCH=stable/pike
  GLANCE_BRANCH=stable/pike
  HORIZON_BRANCH=stable/pike
  KEYSTONE_BRANCH=stable/pike
  NEUTRON_BRANCH=stable/pike
  NEUTRON_FWAAS_BRANCH=stable/pike
  NOVA_BRANCH=stable/pike
  SWIFT_BRANCH=stable/pike

  ADMIN_PASSWORD=password
  DATABASE_PASSWORD=$ADMIN_PASSWORD
  RABBIT_PASSWORD=$ADMIN_PASSWORD
  HORIZON_PASSWORD=$ADMIN_PASSWORD
  SERVICE_PASSWORD=$ADMIN_PASSWORD

  [[post-config|$NOVA_CONF]]
  [DEFAULT]
  firewall_driver=nova.virt.firewall.NoopFirewallDriver
  scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
  vcpu_pin_set=4-11
  EOF
  $ ./stack.sh
  # wait for successful deployment
  $ . openrc admin
  ```

  You can use a mostly stock configuration, with the exception of one option: `[DEFAULT] vcpu_pin_set`. This should be configured for the `nova-compute` service and set to the mask configured by `tuned` earlier.

- **Validate deployment.**
  Once this has deployed, you can check the logs of the `nova-compute` service to make sure the `vcpu_pin_set` configuration has been successful. If deploying using `stable/pike` DevStack, you can do this using `journalctl`.

  ```shell
  $ sudo journalctl -u devstack@n-cpu.service | grep 'vcpu_pin_set' | tail -1
  vcpu_pin_set = 4-11
  $ sudo journalctl -u devstack@n-cpu.service | grep 'Total usable vcpus' | tail -1
  Total usable vcpus: 8, total allocated vcpus: 0
  ```
  This is as expected, given that we were using a `4-11` mask and have not yet deployed any instances.

  I'm sure there's a better way to do this filtering with `journalctl`.

- **Configure flavor.**
  Once you've verified everything, you can create your custom real-time flavor.

  ```shell
  $ openstack flavor create --vcpus 4 --ram 4096 --disk 20 rt1.small
  $ openstack flavor set rt1.small \
      --property 'hw:cpu_policy=dedicated' \
      --property 'hw:cpu_realtime=yes' \
      --property 'hw:cpu_realtime_mask=^0-1' \
      --property 'hw:mem_page_size=1GB'
  ```

  By way of explanation, these properties correspond to the following:

  - `hw:cpu_policy=dedicated`: instances must have exclusive pCPUs assigned to them.
  - `hw:cpu_realtime=yes`: instances will have a real-time policy.
  - `hw:cpu_realtime_mask=^0-1`: all instance vCPUs except vCPUs 0 and 1 will have a real-time policy.
  - `hw:mem_page_size=1GB`: instances will be backed by 1 GB huge pages.

  For more information, refer to the nova docs.
- **Configure image.**

  We're going to use the `centos7-small.qcow2` image created previously. Upload this to `glance`.

  ```shell
  $ openstack image create --disk-format qcow2 --container-format bare \
      --public --file ./centos7-small.qcow2 centos-rt
  ```
- **(Optional) Configure security groups and keypairs.**

  We want to ensure we can both ping the instance and SSH into it. This requires ICMP and TCP port 22 rules in the security group for the project. This was necessary because I installed using DevStack; it may not be necessary with other deployment tools.

  ```shell
  $ echo $OS_PROJECT_NAME
  demo
  $ openstack project list | grep -w demo
  | f5a2496e6edf4ef4b5ffe62b01a8bf4b | demo |
  $ openstack security group list | grep -w f5a2496e6edf4ef4b5ffe62b01a8bf4b
  | 466ffc5e-114d-43a4-8854-db490c6b4571 | default | Default security group | f5a2496e6edf4ef4b5ffe62b01a8bf4b |
  $ openstack security group rule create --proto icmp \
      466ffc5e-114d-43a4-8854-db490c6b4571
  $ openstack security group rule create --proto tcp --dst-port 22 \
      466ffc5e-114d-43a4-8854-db490c6b4571
  ```

  In addition, we want to create a keypair so we can SSH into the instance.

  ```shell
  $ openstack keypair create --public-key .ssh/id_rsa.pub default-key
  ```
## Testing
Now that we have everything configured, we're going to create an instance and run our tests.
- **Boot the instance.**

  ```shell
  $ openstack server create --flavor rt1.small --image centos-rt \
      --key-name default-key rt-server
  ```

  This initially failed for me with the following error message:

  ```
  Could not access KVM kernel module: Permission denied
  failed to initialize KVM: Permission denied
  ```

  I was able to resolve this with the following commands, taken from a related bugzilla.

  ```shell
  $ sudo rmmod kvm_intel
  $ sudo rmmod kvm
  $ sudo modprobe kvm
  $ sudo modprobe kvm_intel
  ```
- **Connect a floating IP.**

  This is necessary so we can SSH into the instance.

  ```shell
  $ openstack floating ip create public
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | created_at          | 2017-01-01T00:00:00Z                 |
  | description         |                                      |
  | fixed_ip_address    | None                                 |
  | floating_ip_address | 172.24.4.9                           |
  | floating_network_id | 5e123439-bbe8-479b-ab32-cc66d1a34ae2 |
  | id                  | cb62400c-983f-4468-949c-a64fb6b47827 |
  | name                | 172.24.4.9                           |
  | port_id             | None                                 |
  | project_id          | f5a2496e6edf4ef4b5ffe62b01a8bf4b     |
  | revision_number     | 0                                    |
  | router_id           | None                                 |
  | status              | DOWN                                 |
  | updated_at          | 2017-01-01T00:00:00Z                 |
  +---------------------+--------------------------------------+
  $ openstack server add floating ip rt-server 172.24.4.9
  ```
- **SSH into the guest.**

  ```shell
  $ openstack server ssh rt-server --login centos
  ```
- **Run `cyclictest` to confirm expected latencies.**

  We're going to run an intensive process, `stress`, and then use `cyclictest` to confirm that guest latencies are within expected limits.

  ```shell
  $ taskset -c 2 stress --cpu 4
  ```

  This will result in four worker processes running on vCPU 2. Once this is running, start `cyclictest` in another tab.

  ```shell
  $ taskset -c 2 cyclictest -m -n -q -p95 -D 24h -h100 -i 200 > cyclictest.out
  ```
  This will run for 24 hours (`-D 24h`). Once done, you can check the output (in `cyclictest.out`) to see if the latencies are within expected tolerances. The RT Wiki lists some example latencies so you can get an idea of what to expect.
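  Rather than eyeballing the whole file, the per-thread maximum latencies can be pulled straight out of it: with `-h`, `cyclictest` appends summary lines such as `# Max Latencies: ...` after the histogram. The sample file below is fabricated for illustration; on the guest you would run the `grep` against the real `cyclictest.out`.

  ```shell
  # Fabricated stand-in for the summary lines cyclictest -h appends; a real
  # cyclictest.out also contains the histogram itself.
  printf '%s\n' \
      '# Min Latencies: 00002 00002' \
      '# Avg Latencies: 00004 00003' \
      '# Max Latencies: 00012 00009' > cyclictest.out

  # Print the per-thread maximum latencies, in microseconds.
  grep '^# Max Latencies' cyclictest.out | tr -s ' ' | cut -d' ' -f4-   # -> 00012 00009
  ```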