The OpenStack SDK library provides a unified API to interact with
OpenStack clouds. I’ve been doing a lot of work on it lately and am only now
starting to gain an understanding of just what’s going on (there are fewer
layers in an onion 😊). These are my notes on how to use the Resource-based
objects found in e.g. openstack/compute/v2/server.py.
Intro to the Resource object
As the name would suggest, the Resource object wraps a type of API resource
or collection of API resources. For example, take nova's /servers API. This
API supports a number of CRUD-style operations:
- GET /servers (list servers)
- POST /servers (create server)
- GET /servers/{id} (fetch server)
- PUT /servers/{id} (update server)
- DELETE /servers/{id} (delete server)
If you were to define a simple Resource definition for this API, it would
likely look something like this:
from openstack import resource


class Server(resource.Resource):
    # API path
    base_path = '/servers'

    # envelope parameters
    resource_key = 'server'
    resources_key = 'servers'

    # capabilities
    allow_create = True
    allow_fetch = True
    allow_commit = True
    allow_delete = True
    allow_list = True

    # attributes
    access_ipv4 = resource.Body('accessIPv4')
    access_ipv6 = resource.Body('accessIPv6')
    # ...
There’s a lot of abstraction going on here, and it’s obviously far from
complete (look at openstack/compute/v2/server.py in the openstacksdk
project if you want the real deal), but there are a couple of crucial
components here. Firstly, we’re giving the path to the API:
# API path
base_path = '/servers'
This is the base path, which is extended with additional path components depending on the operation.
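To make that concrete, here’s a rough sketch of the idea (illustrative only, not the SDK’s actual URL-building code):

# Illustrative only: roughly how operations extend base_path into request paths.
base_path = '/servers'

def collection_path():
    # used by list (GET /servers) and create (POST /servers)
    return base_path

def instance_path(server_id):
    # used by fetch (GET), commit (PUT) and delete (DELETE) on /servers/{id}
    return f'{base_path}/{server_id}'

print(instance_path('22c91117-08de-4894-9aa9-6ef382400985'))
# /servers/22c91117-08de-4894-9aa9-6ef382400985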
Next up, we’re stating the keys used for the envelope:
# envelope parameters
resource_key = 'server'
resources_key = 'servers'
Pretty much all OpenStack APIs use envelopes, by which we mean all
responses are returned with a JSON object on the outside. The above
configuration means the responses for operations that work with multiple
resources (so just GET /servers in this case) will be accessible via the
servers key, while those that work with individual resources will be
accessible via the server key. If we look at the nova api-ref we can see
this is indeed the case. For example, consider a typical response for
GET /servers (list servers):
{
    "servers": [
        {
            "id": "22c91117-08de-4894-9aa9-6ef382400985",
            "links": [
                {
                    "href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985",
                    "rel": "self"
                },
                {
                    "href": "http://openstack.example.com/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985",
                    "rel": "bookmark"
                }
            ],
            "name": "new-server-test"
        }
    ],
    "servers_links": [
        {
            "href": "http://openstack.example.com/v2.1/6f70656e737461636b20342065766572/servers?limit=1&marker=22c91117-08de-4894-9aa9-6ef382400985",
            "rel": "next"
        }
    ]
}
And an equivalent response for GET /servers/{id} (fetch server):
{
    "server": {
        "OS-DCF:diskConfig": "AUTO",
        "OS-EXT-AZ:availability_zone": "nova",
        ...
    }
}
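To spell out what those envelope keys buy us, here’s a minimal sketch using plain dictionaries (not the SDK’s own deserialisation code) of how resource_key and resources_key would be used to unwrap the two responses above:

# A sketch only: how the envelope keys are used, conceptually.
resource_key = 'server'
resources_key = 'servers'

list_response = {'servers': [{'id': '22c91117-...', 'name': 'new-server-test'}]}
fetch_response = {'server': {'id': '22c91117-...', 'name': 'new-server-test'}}

# list-style operations unwrap the plural key...
servers = list_response[resources_key]
# ...while fetch-style operations unwrap the singular key
server = fetch_response[resource_key]

print(servers[0]['name'], server['name'])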
Finally, you have the allowed operations and the fields or attributes of the server resource:
# capabilities
allow_create = True
allow_fetch = True
allow_commit = True
allow_delete = True
allow_list = True
# attributes
access_ipv4 = resource.Body('accessIPv4')
access_ipv6 = resource.Body('accessIPv6')
# ...
The attributes provide a fairly simple mapping between the resource object and
the fields of the API requests and responses, in essence allowing us to map
e.g. the accessIPv4 field in API requests and responses to the access_ipv4
attribute of the Server object. More interesting, though, for the purposes of
this post, are the capabilities. The value of these capabilities defines
whether the following aptly named methods from the Resource class are usable
or not:
- create
- fetch
- commit
- delete
- list
We’ll go into details on how to use these shortly - that is, after all, the
main point of this post - but suffice it to say you can make a call like
Server.list(...) and get back Server objects. In any case, the servers API
supports all of the CRUD-style operations and we’re stating as much here
through this configuration. This means a user can use any of these CRUD
methods (with correct input and configuration, of course) and expect them
to work. This isn’t always the case. If, for example, this API did not support
updating an existing server then we could configure allow_commit = False
(or simply not define the attribute, resulting in the default value of
False being used).
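As a quick illustration of both points, here’s roughly what an interpreter session might look like (the output is paraphrased from memory, so treat it as a sketch rather than gospel):

>>> from openstack.compute.v2 import server
>>> s = server.Server(name='test-server', access_ipv4='10.0.0.1')
>>> s.access_ipv4   # python-side name for the wire-level 'accessIPv4' field
'10.0.0.1'
>>> server.Server.allow_commit   # the capabilities are plain class attributes
True

When one of these capability flags is False, the corresponding method should refuse to run - if memory serves, the SDK raises MethodNotSupported in that case.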
With this small introduction to the Resource object complete, let’s look at
how you’d actually use this.
Using Resource objects
Let’s begin by saying most users won’t actually need to use Resource objects
directly. openstacksdk consists of multiple layers, and most users will get
away with using what’s known as the proxy layer. This is a utility layer that
provides a number of easy API helpers such as create_server (to create a new
server) or flavors (to list flavors) that a user can use to interact with
their cloud. An example from the README:
import openstack

# Initialize and turn on debug logging
openstack.enable_logging(debug=True)

# Initialize connection
conn = openstack.connect(cloud='mordred')

for server in conn.compute.servers():
    print(server.to_dict())
It also provides an even higher-level layer, known as the cloud layer, which is used by things like Ansible’s OpenStack modules and can be used to wrap multiple complicated operations. Another example from the README:
import openstack

# Initialize and turn on debug logging
openstack.enable_logging(debug=True)

# Initialize connection
conn = openstack.connect(cloud='mordred')

for server in conn.list_servers():
    print(server.to_dict())
Sometimes it can be useful to understand how these things work under the hood
though, so let’s have a look at the ways to use Resource objects directly,
from most complicated (and therefore most informative) to most abstract.
We said above that the definition of the capabilities attributes on the
resource - allow_create, allow_fetch etc. - meant that users could use
equivalent methods on the resource such as create. Let’s begin by attempting
to use one of these on the Server resource actually provided by openstacksdk:
>>> from openstack.compute.v2 import server
>>> server.Server.list()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list() missing 1 required positional argument: 'session'
Seems pretty legit. We need a “session” parameter. We can look at the docstring for this method to figure out what that should be:
>>> help(server.Server.list)
Help on method list in module openstack.resource:

list(session, paginated=True, base_path=None, allow_unknown_params=False, **params) method of builtins.type instance
    This method is a generator which yields resource objects.

    This resource object list generator handles pagination and takes query
    params for response filtering.

    :param session: The session to use for making this request.
    :type session: :class:`~keystoneauth1.adapter.Adapter`
    ...
There are a couple of ways to generate a suitable parameter. Let’s start with
the most verbose first. We can manually create a keystoneauth1.session.Session
object, wrap this in openstacksdk’s openstack.proxy.Proxy object (which is
itself a wrapper around keystoneauth’s keystoneauth1.adapter.Adapter object)
and then use this on the relevant Resource object. An example:
>>> from keystoneauth1.identity import v3
>>> import keystoneauth1.session
>>> from openstack.compute.v2 import server
>>> import openstack.proxy
>>> auth = v3.Password(
... auth_url='http://172.20.4.155/identity',
... username='admin',
... password='password',
... project_name='demo',
... user_domain_id='default',
... project_domain_id='default')
>>> session = keystoneauth1.session.Session(auth=auth)
>>> proxy = openstack.proxy.Proxy(
... session=session,
... service_type='compute',
... interface='public',
... version='2.1')
>>> print([x.name for x in server.Server.list(session=proxy)])
['test-server']
This aligns with what I’m seeing if I run openstack server list:
$ openstack server list -f value -c Name
test-server
Hurrah! However, not only is this super verbose but it’s also using hard-coded
cloud details, such as the keystone auth details and the API version, that I
extracted from the openrc file laid down by DevStack. This seems unnecessary:
you don’t need to manually specify any of these when using the clients, so why
do we need to do so here? In fact, it is unnecessary. Not only can openstacksdk
parse the environment variables configured via the openrc file, but it also
supports (and in fact prefers) a clouds.yaml file, which on a DevStack
deployment can be found at /etc/openstack/clouds.yaml. An abbreviated example
from my DevStack deployment:
$ cat /etc/openstack/clouds.yaml
clouds:
  devstack:
    auth:
      auth_url: http://172.20.4.155/identity
      password: password
      project_domain_id: default
      project_name: demo
      user_domain_id: default
      username: demo
    identity_api_version: '3'
    region_name: RegionOne
    volume_api_version: '3'
  devstack-admin:
    ...
  devstack-alt:
    ...
  devstack-system-admin:
    ...
functional:
  image_name: cirros-0.5.2-x86_64-disk
openstacksdk provides a helpful little tool to show the configuration it’s
able to identify automatically, the openstack.config.loader module:
$ python -m openstack.config.loader
devstack None {'api_timeout': None, ..., 'networks': []}
devstack-admin None {'api_timeout': None, ..., 'networks': []}
devstack-alt None {'api_timeout': None, ..., 'networks': []}
devstack-system-admin None {'api_timeout': None, ..., 'networks': []}
envvars None {'api_timeout': None, ..., 'networks': []}
You’ll note that most of these correspond to entries in the clouds.yaml file
but there’s also an additional entry - envvars - which corresponds to the
cloud configuration sourced from the environment variables configured via the
openrc file. Very cool.
Knowing this, we can take the earlier example once again but this time use
openstacksdk to generate the Proxy object for us:
>>> import openstack.config.loader
>>> from openstack.compute.v2 import server
>>> config = openstack.config.loader.OpenStackConfig().get_one('devstack')
>>> session = config.get_session_client('compute')
>>> print([x.name for x in server.Server.list(session=session)])
['test-server']
This is far less verbose than the previous example and doesn’t require
hardcoding any configuration into our scripts, or reinventing the wheel with
regard to parsing environment variables or clouds.yaml files. Instead, we’re
saying we should use the configuration for the devstack cloud from
clouds.yaml. If we wanted to, we could also use the envvars “cloud” or any
of the other devstack-* clouds. Ultimately though, it saves us a lot of code.
We’re not done though. We can make this even easier by using the Connection
object from the openstack.connection module. You likely missed it, but we
already created one of these objects in our brief example on using the proxy
and cloud layers above:
import openstack
# ...
conn = openstack.connect(cloud='mordred')
# ...
That does an awful lot for us, including generating suitable Proxy objects:
conn.compute is exactly such a Proxy for the compute service. We can use it in
place of the OpenStackConfig.get_one and get_session_client calls above:
>>> import openstack
>>> from openstack.compute.v2 import server
>>> conn = openstack.connect(cloud='devstack')
>>> print([x.name for x in server.Server.list(session=conn.compute)])
['test-server']
Wonderfully concise.