I’ve been playing around with quotas in OpenStack again. Every time I do, I encounter another strange bit of behavior that catches me out. This time, I’ve decided to write down these strange things so at least I have a reference to go back to at some point in the future. I should probably get these notes into the docs for nova, cinder and neutron at some point…
Project quotas and default quotas
Quotas are a complicated area with a lot of baggage. Broadly speaking, there are two types of quota: default quotas and project-specific quotas. The default quotas are applied to projects unless specifically overridden by project-specific quotas. In addition, for nova and cinder, there are two types of default quota: API-configured default quotas and statically configured default quotas defined in config files (nova.conf or cinder.conf, respectively). Neutron only supports statically configured default quotas (neutron.conf). Project-specific quotas take precedence over API-configured default quotas (where available), which in turn take priority over statically configured default quotas. We can visualize this as follows:
nova: conf default quotas < API default quotas < project quotas
cinder: conf default quotas < API default quotas < project quotas
neutron: conf default quotas < project quotas
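For reference, the statically configured defaults are just ordinary config options. A minimal sketch of what this looks like for nova and cinder (the values here are arbitrary, not recommendations):

# nova.conf
[quota]
instances = 10
cores = 20

# cinder.conf
[DEFAULT]
quota_volumes = 10
quota_gigabytes = 1000

The API-configured defaults, meanwhile, are set via the quota class API, which we'll get to below.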
User-specific quotas
Nova has another type of quota: user-specific quotas. There's only one type of user-specific quota in nova - keypairs - and it's not something that nova is likely to continue supporting in the long term. As you'd expect, the primary difference between user-specific quotas and project-specific quotas is that user-specific quotas are tied to the user instead of the project. This means you can specify that user foo has e.g. a keypair quota of 5, while user bar has a keypair quota of 6.
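If you want to poke at this yourself, the compute API accepts a user_id query parameter on the quota sets endpoint. A rough sketch using curl, where $TOKEN, $COMPUTE_ENDPOINT, $PROJECT_ID and $USER_ID are placeholders you've populated yourself:

❯ curl -X PUT \
    -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"quota_set": {"key_pairs": 5}}' \
    "$COMPUTE_ENDPOINT/os-quota-sets/$PROJECT_ID?user_id=$USER_ID"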
Quota classes
Finally, nova and cinder also have the concept of quota classes. The idea behind these was to allow for a two-level hierarchy of quotas, where you could define different "classes" of default quota and specify what class a project would get. For example, you could have three quota classes - gold, silver and bronze - and a project would be assigned to one of these quota classes depending on how much the customer was paying. This was seen as an easier alternative to setting defaults for each project individually. However, actually using quota classes required a separate out-of-tree service that would set a quota_class attribute in the request context. Rackspace apparently had one such service, called Turnstile, that did this, but no one else appears to have implemented anything similar and efforts to remove the need for an external service never went anywhere. In effect, this feature was never fully implemented: both nova and cinder only support the default quota class, while neutron never even tried implementing it. It's an irrelevance nowadays.
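The one practical consequence is that updating the default quota class is how the API-configured default quotas mentioned earlier actually get set. Assuming a reasonably recent openstackclient, that looks something like this (the value is arbitrary):

❯ openstack quota set --instances 15 --class default
❯ openstack quota show --class default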
Quota usage
Obviously applying quotas to projects and users is one thing, but we actually need to track that usage somewhere. This also happens at the project level. The nova, cinder and neutron projects all track both a reserved value and an in_use value. In nova's case, the reserved value was previously used to reserve resources at the API layer (nova-api) before committing them at the compute layer (nova-compute). Nova moved away from this model in the 16.0.0 (Pike) release as part of the work to introduce cells v2, and the reserved value will now always be 0. This effort was tracked in the Count resources to check quota in API for cells spec.
In the case of nova, much of this information is gleaned either from the database or the placement service, depending on configuration. I suspect most services use a similar model. This usage information is available to the user via various APIs or openstackclient.
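For example, nova's usage can be seen via the limits API, which reports values like totalCoresUsed and totalInstancesUsed alongside the corresponding maximums:

❯ openstack limits show --absolute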
Quota drivers
The nova, cinder and neutron projects all have the concept of quota drivers. By default, all projects use a DB-based quota driver, but both nova and neutron offer alternative drivers. I haven't gone into detail on these here since they're not entirely relevant to this discussion. Refer to the nova and neutron configuration documentation for more information if you're interested.
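As a concrete example, nova's driver is selected via the [quota] section of nova.conf. A sketch (the default is nova.quota.DbQuotaDriver; the noop driver disables quota enforcement entirely):

# nova.conf
[quota]
driver = nova.quota.NoopQuotaDriver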
How do I use the damn thing?
The best way to interact with quotas is using a tool that tries to abstract all of the above craziness from you. To this end, I’d recommend openstackclient. While OSC has supported quotas for years, recent versions of this (v6.1.0 or later) have improved the UX further and contain some important feature additions, like the ability to view quota usage for the cinder service.
Firstly, let’s create a project-specific quota for, say, the number of instances.
❯ openstack quota set --instances 5 $OS_PROJECT_NAME
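A nice property of OSC here is that a single invocation can set quotas across multiple services; the flags below map to nova, cinder and neutron respectively (values arbitrary):

❯ openstack quota set --cores 20 --volumes 10 --networks 100 $OS_PROJECT_NAME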
Now we can inspect those quotas:
❯ openstack quota show $OS_PROJECT_NAME
+-----------------------+-------+
| Resource              | Limit |
+-----------------------+-------+
| cores                 | 20    |
| instances             | 5     |
| ram                   | 51200 |
| volumes               | 10    |
| snapshots             | 43    |
| gigabytes             | 1000  |
| backups               | 10    |
| volumes_lvmdriver-1   | -1    |
| gigabytes_lvmdriver-1 | -1    |
| snapshots_lvmdriver-1 | -1    |
| volumes___DEFAULT__   | -1    |
| gigabytes___DEFAULT__ | -1    |
| snapshots___DEFAULT__ | -1    |
| groups                | 10    |
| networks              | 100   |
| ports                 | 500   |
| rbac_policies         | 10    |
| routers               | 21    |
| subnets               | 100   |
| subnet_pools          | -1    |
| fixed-ips             | -1    |
| injected-file-size    | 10240 |
| injected-path-size    | 255   |
| injected-files        | 5     |
| key-pairs             | 33    |
| properties            | 128   |
| server-groups         | 10    |
| server-group-members  | 10    |
| floating-ips          | 50    |
| secgroup-rules        | 100   |
| secgroups             | 10    |
| backup-gigabytes      | 1000  |
| per-volume-gigabytes  | -1    |
+-----------------------+-------+
You can also include usage information if you want:
❯ openstack quota show --usage $OS_PROJECT_NAME
+-----------------------+-------+--------+----------+
| Resource              | Limit | In Use | Reserved |
+-----------------------+-------+--------+----------+
| cores                 | 20    | 2      | 0        |
| instances             | 5     | 2      | 0        |
| ram                   | 51200 | 4096   | 0        |
| volumes               | 10    | 1      | 0        |
| snapshots             | 43    | 0      | 0        |
| gigabytes             | 1000  | 5      | 0        |
| backups               | 10    | 0      | 0        |
| volumes_lvmdriver-1   | -1    | 1      | 0        |
| gigabytes_lvmdriver-1 | -1    | 5      | 0        |
| snapshots_lvmdriver-1 | -1    | 0      | 0        |
| volumes___DEFAULT__   | -1    | 0      | 0        |
| gigabytes___DEFAULT__ | -1    | 0      | 0        |
| snapshots___DEFAULT__ | -1    | 0      | 0        |
| groups                | 10    | 0      | 0        |
| networks              | 100   | 2      | 0        |
| ports                 | 500   | 4      | 0        |
| rbac_policies         | 10    | 4      | 0        |
| routers               | 21    | 0      | 0        |
| subnets               | 100   | 3      | 0        |
| subnet_pools          | -1    | 2      | 0        |
| fixed-ips             | -1    | 0      | 0        |
| injected-file-size    | 10240 | 0      | 0        |
| injected-path-size    | 255   | 0      | 0        |
| injected-files        | 5     | 0      | 0        |
| key-pairs             | 33    | 0      | 0        |
| properties            | 128   | 0      | 0        |
| server-groups         | 10    | 0      | 0        |
| server-group-members  | 10    | 0      | 0        |
| floating-ips          | 50    | 0      | 0        |
| secgroup-rules        | 100   | 4      | 0        |
| secgroups             | 10    | 1      | 0        |
| backup-gigabytes      | 1000  | 0      | 0        |
| per-volume-gigabytes  | -1    | 0      | 0        |
+-----------------------+-------+--------+----------+
Default quotas are applied to each project unless there are project-specific quotas to override them. These can also be inspected:
❯ openstack quota show --default
+-----------------------+-------+
| Resource              | Limit |
+-----------------------+-------+
| cores                 | 20    |
| instances             | 10    |
| ram                   | 51200 |
| volumes               | 10    |
| snapshots             | 43    |
| gigabytes             | 1000  |
| backups               | 10    |
| volumes_lvmdriver-1   | -1    |
| gigabytes_lvmdriver-1 | -1    |
| snapshots_lvmdriver-1 | -1    |
| volumes___DEFAULT__   | -1    |
| gigabytes___DEFAULT__ | -1    |
| snapshots___DEFAULT__ | -1    |
| groups                | 10    |
| networks              | 100   |
| ports                 | 500   |
| rbac_policies         | 10    |
| routers               | 10    |
| subnets               | 100   |
| subnet_pools          | -1    |
| fixed-ips             | -1    |
| injected-file-size    | 10240 |
| injected-path-size    | 255   |
| injected-files        | 5     |
| key-pairs             | 33    |
| properties            | 128   |
| server-groups         | 10    |
| server-group-members  | 10    |
| floating-ips          | 50    |
| secgroup-rules        | 100   |
| secgroups             | 10    |
| backup-gigabytes      | 1000  |
| per-volume-gigabytes  | -1    |
+-----------------------+-------+
You can list quotas for multiple projects. Note that when doing this, only the projects with project-specific quotas are shown to avoid dumping potentially thousands of lines of duplicate, useless info to the terminal.
❯ openstack quota list --compute
+----------------------------------+-------+-----------+----------------+-----------------------------+--------------------------+-----------+-----------+----------------+-------+---------------+----------------------+
| Project ID                       | Cores | Fixed IPs | Injected Files | Injected File Content Bytes | Injected File Path Bytes | Instances | Key Pairs | Metadata Items | Ram   | Server Groups | Server Group Members |
+----------------------------------+-------+-----------+----------------+-----------------------------+--------------------------+-----------+-----------+----------------+-------+---------------+----------------------+
| 700a8fa37f154153809be9d1814d8625 | 20    | -1        | 5              | 10240                       | 255                      | 5         | 33        | 128            | 51200 | 10            | 10                   |
+----------------------------------+-------+-----------+----------------+-----------------------------+--------------------------+-----------+-----------+----------------+-------+---------------+----------------------+
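The same listing works for the other services via the --network and --volume flags:

❯ openstack quota list --network
❯ openstack quota list --volume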
Finally, you can unset configured quotas:
❯ openstack quota delete $OS_PROJECT_NAME
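If I recall correctly, the deletion can also be scoped to a single service with the --compute, --network or --volume flags, should you only want to revert one service's quotas to the defaults:

❯ openstack quota delete --compute $OS_PROJECT_NAME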
Addendum! Unified limits
If you consider the above rather confusing, you should try using it in real-world environments! The deficiencies of the existing quota model were brought up in multiple discussions with operators and end users over the years, and the end result of these discussions was a concept known as unified limits. As the name would suggest, the aim of this feature is to provide a unified model for maintaining resource limits across multiple services. Good documentation exists describing the benefits of this approach over the status quo. Adding support for unified limits requires significant effort on the part of the individual projects, and so far only the compute project appears to have started down this path, introducing preliminary support for unified limits during the Yoga release. This remains in development and I'm not sure when we plan to finish it. The above guide will probably be useful for some time to come 😅
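For the curious, unified limits live in keystone and come in two flavours: registered limits, which act as service-wide defaults, and project limits, which override them per project. A sketch of what managing them looks like, assuming nova's resource naming (servers, class:VCPU, and so on) and arbitrary values:

❯ openstack registered limit create --service nova --default-limit 10 servers
❯ openstack limit create --service nova --project $OS_PROJECT_NAME --resource-limit 20 servers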