GNU Autotest

GNU Autotest is a test framework that, together with supporting scripts and unit test files, can unit test an application. Autotest is part of Autotools, a.k.a. the GNU Build System.

The Autotest scripts execute unit tests by making shell-like calls to utilities, Python scripts and C unit test applications, and comparing their results (exit status, stdout and stderr) against predefined values. To do this, Autotest defines a number of M4 macros, such as AT_INIT and AT_CLEANUP.

An example of a test is given below. This test is from the Open vSwitch project, and tests the resubmit action in the datapath.

AT_SETUP([ofproto-dpif - resubmit])
AT_DATA([flows.txt], [dnl
table=0 in_port=1 priority=1000 icmp actions=output(10),resubmit(2),\
table=0 in_port=2 priority=1500 icmp actions=output(11),resubmit(,1),\
table=0 in_port=3 priority=2000 icmp actions=output(20)
table=1 in_port=1 priority=1000 icmp actions=output(12),resubmit(4,1),\
table=1 in_port=2 priority=1500 icmp actions=output(17),resubmit(,2)
table=1 in_port=3 priority=1500 icmp actions=output(14),resubmit(,2)
AT_CHECK([ovs-ofctl add-flows br0 flows.txt])
AT_CHECK([ovs-appctl ofproto/trace br0 'in_port(1),eth(src=50:54:00:00:00:05,\
proto=1,tos=0,ttl=128,frag=no),icmp(type=8,code=0)'], [0], [stdout])
AT_CHECK([tail -1 stdout], [0],
  [Datapath actions: 10,11,12,13,14,15,16,17,18,19,20,21
])
AT_CLEANUP


M4 macros

Autotest macros

Autotest macros are just predefined M4 macros. There are a number of them, including the ones used in this article:

  1. AT_INIT: performs the mandatory initialisation of a test suite
  2. AT_BANNER: prints a title above a group of tests
  3. AT_SETUP: begins a test group and gives it a name
  4. AT_DATA: creates a file with the given contents for a test to use
  5. AT_CHECK: runs a command and compares its exit status, stdout and stderr to expected values
  6. AT_CLEANUP: ends the current test group

Additional macros

There are many additional macros available to use. For a list of these, it’s probably best to check out the official GNU Autotest Manual.

Writing a sample test

“…to learn and not to do is really not to learn. To know and not to do is really not to know.” (Stephen R. Covey)

The best way to learn this stuff is to do it. As such, we’re going to write a sample test script that will explain the basic functionality of the Autotest framework.

What we want to achieve

We want to test the cat application. As with most shell applications, cat provides an awful lot of functionality. We’re going to test only a small subset of its functionality and ignore all the other options and flags available to us. Specifically, we want to check that the following features work as expected:

  1. cat prints an error message for a non-existing file
  2. cat prints nothing for an empty, existing file
  3. cat prints some output for a non-empty, existing file
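Before encoding these in Autotest, it helps to see the three behaviours in plain shell. The sketch below is illustrative only: the no-such-file.txt, empty.txt and full.txt file names are made up for the example, and LC_ALL=C is forced because cat’s error text depends on the locale.

```shell
#!/bin/sh
# Illustrative sketch of the three behaviours we want to test.
export LC_ALL=C    # keep cat's error message predictable

# 1. Non-existing file: cat fails and complains on stderr.
if ! cat no-such-file.txt 2>err.log; then
    grep -q "No such file or directory" err.log && echo "check 1 holds"
fi

# 2. Empty, existing file: no output at all.
: > empty.txt
[ -z "$(cat empty.txt)" ] && echo "check 2 holds"

# 3. Non-empty, existing file: output equals the contents.
printf 'hello\n' > full.txt
[ "$(cat full.txt)" = "hello" ] && echo "check 3 holds"
```

Each numbered check mirrors one of the assertions above; the Autotest version we build below encodes the first of them.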

Initial setup

The first thing we should do is declare our own macro to place tests in. This will act as a function of sorts, allowing us to invoke the tests all at once or from another file (and it acts as a container, illustrating how tests can be split across files). To do this, add the following code to a new test file (Autotest test files conventionally use the .at extension):

m4_define([MYTEST_CHECK_CAT], [])


This works pretty straightforwardly. Wherever the keyword MYTEST_CHECK_CAT is typed, M4 replaces it with the contents of the macro’s second parameter (currently empty). Obviously, in order to make this useful, we need something in the second parameter, like so:

m4_define([MYTEST_CHECK_CAT], [
  AT_CHECK([], [], [], [])
])


Replace the contents of the file with the above code. You’ll notice we’ve placed an AT_CHECK call with four empty parameters inside the previously empty second parameter. As described above, this is what will be substituted wherever the keyword MYTEST_CHECK_CAT appears. The AT_CHECK parameters must be given values, as seen in the next section.

The test

The only test we’re writing here is for the following assertion:

cat prints an error message for a non-existing file

This test should just about do it:

m4_define([MYTEST_CHECK_CAT], [
  AT_BANNER([cat simple unit tests])
  AT_SETUP([execute cat with non-existing file])
  AT_CHECK([cat /dev/nulls], [ignore], [], [cat: /dev/nulls: No such file or directory
])
  AT_CLEANUP
])


Each of the lines works as follows:

AT_BANNER([cat simple unit tests])

This merely defines some text that should be printed before the tests are executed. This is useful for providing a title to a group of tests and hence enforcing separation between them.

AT_SETUP([execute cat with non-existing file])

This gives the test in question a name, most likely a brief description of what it does.

AT_CHECK([cat /dev/nulls], [ignore], [], [cat: /dev/nulls: No such file or directory
])

This is the real juicy part. The first parameter is the command to run; in this case, we’re running cat on a non-existent file (note the s in /dev/nulls). The second parameter is the expected exit status. I’m not entirely sure what the status will be, so I use [ignore] to skip that comparison. The third parameter is the expected stdout; the application should write to stderr rather than stdout in the case of an error, so we leave it empty. Finally, the last parameter is the expected stderr: this is what the application should print when the command fails, and the check ensures that it does. Note that the comparison is literal, so the expected text must match cat’s output exactly, trailing newline included.
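To make the four parameters concrete, here is a rough plain-shell sketch of what this check amounts to. This is an approximation, not how Autotest is actually implemented; the stderr.log file name is made up, and LC_ALL=C is forced because cat’s message text depends on the locale.

```shell
#!/bin/sh
# Rough approximation of the AT_CHECK above; not Autotest's real machinery.
export LC_ALL=C                          # pin the locale for a stable message

stdout=$(cat /dev/nulls 2>stderr.log)    # parameter 1: the command to run
status=$?                                # parameter 2: [ignore] skips this check

# Parameter 3: expected stdout is empty.
[ -z "$stdout" ] && echo "stdout empty, as expected"

# Parameter 4: expected stderr, compared literally.
[ "$(cat stderr.log)" = "cat: /dev/nulls: No such file or directory" ] \
    && echo "stderr matches"
```

Autotest performs the same capture-and-compare for every AT_CHECK, failing the test on the first mismatch.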

Wrap up

It isn’t possible to run this test as-is, because we’re missing a lot of configuration stuff (like the AT_INIT). However, if you’re writing your own tests, you’re most likely plugging into an existing test framework. The specifics of this will change from project to project but someone on the project’s team should be able to advise you on the specifics of integration.
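For completeness, here is a hedged sketch of the minimal wiring a standalone suite would need. The testsuite.at and mytest.at file names are illustrative, not prescribed:

```
dnl testsuite.at -- illustrative minimal driver
AT_INIT
m4_include([mytest.at])   dnl the file defining MYTEST_CHECK_CAT
MYTEST_CHECK_CAT
```

A runnable shell script can then be generated and executed with `autom4te --language=autotest -o testsuite testsuite.at && ./testsuite`, though in a real Autotools project this step is usually handled by the build system.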
