The Autotest scripts execute unit tests by making shell-like calls to
utilities, Python scripts and C unit test applications, and comparing their
observable results (exit code, stdout and stderr) to predefined values. To do
this, Autotest defines a number of M4 macros, such as AT_SETUP, AT_CHECK and
AT_CLEANUP.
An example of a test is given below. This test is from the Open vSwitch project, and tests the resubmit action in the datapath.
AT_SETUP([ofproto-dpif - resubmit])
OVS_VSWITCHD_START
AT_DATA([flows.txt], [dnl
table=0 in_port=1 priority=1000 icmp actions=output(10),resubmit(2),\
output(19),resubmit(3),output(21)
table=0 in_port=2 priority=1500 icmp actions=output(11),resubmit(,1),\
output(16),resubmit(2,1),output(18)
table=0 in_port=3 priority=2000 icmp actions=output(20)
table=1 in_port=1 priority=1000 icmp actions=output(12),resubmit(4,1),\
output(13),resubmit(3),output(15)
table=1 in_port=2 priority=1500 icmp actions=output(17),resubmit(,2)
table=1 in_port=3 priority=1500 icmp actions=output(14),resubmit(,2)
])
AT_CHECK([ovs-ofctl add-flows br0 flows.txt])
AT_CHECK([ovs-appctl ofproto/trace br0 'in_port(1),eth(src=50:54:00:00:00:05,\
dst=50:54:00:00:00:07),eth_type(0x0800),ipv4(src=192.168.0.1,dst=192.168.0.2,\
proto=1,tos=0,ttl=128,frag=no),icmp(type=8,code=0)'], , [stdout])
AT_CHECK([tail -1 stdout], , [Datapath actions: 10,11,12,13,14,15,16,17,18,19,20,21
])
OVS_VSWITCHD_STOP
AT_CLEANUP
Autotest macros are just predefined M4 macros. There are a number of them, including AT_INIT, AT_BANNER, AT_SETUP, AT_CHECK, AT_DATA and AT_CLEANUP.
Many more are available; for a full list, it’s probably best to check out the official GNU Autotest manual.
Writing a sample test
“…to learn and not to do is really not to learn. To know and not to do is really not to know.” (Stephen R. Covey)
The best way to learn this stuff is to do it. As such, we’re going to write a sample test script that will explain the basic functionality of the Autotest framework.
What we want to achieve
We want to test the cat application. As with most shell applications, this
application provides an awful lot of functionality. We’re going to test only a
small subset of its functionality, and ignore all the other options and
flags available to us. Specifically, we want to check that the following
features work as expected:
cat prints an error message for a non-existing file
cat prints nothing for an empty, existing file
cat prints some output for a non-empty, existing file
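Before encoding these in Autotest, it can help to see what each assertion means in plain shell terms. The following stand-alone sketch (the temporary file names are made up for illustration) checks the same three behaviours by hand:

```shell
# Plain-shell versions of the three assertions we want Autotest to verify.
tmp=$(mktemp -d)

# 1. cat prints an error message (on stderr) for a non-existing file.
status=0
cat "$tmp/no-such-file" >"$tmp/out" 2>"$tmp/err" || status=$?
[ "$status" -ne 0 ] && [ -s "$tmp/err" ] && echo "assertion 1 holds"

# 2. cat prints nothing for an empty, existing file.
: > "$tmp/empty"
[ -z "$(cat "$tmp/empty")" ] && echo "assertion 2 holds"

# 3. cat prints some output for a non-empty, existing file.
printf 'hello\n' > "$tmp/data"
[ -n "$(cat "$tmp/data")" ] && echo "assertion 3 holds"

rm -rf "$tmp"
```

Autotest performs exactly these comparisons for us, just with far better reporting.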
The first thing we should do is declare our own macro to place tests in. This
will act as a function of sorts and allow us to call the tests all at once or
from another file (plus it acts as a container, illustrating how tests can be
split across different files). To do this, add the following code to a file
called mytest.at:
m4_define([MYTEST_CHECK_CAT], )

MYTEST_CHECK_CAT
This works pretty straightforwardly. Wherever the keyword
MYTEST_CHECK_CAT appears (as it does on the bottom line), it will be replaced
with the lines in the second parameter of the macro (currently none).
Obviously, in order to make this useful, we need something in the second
parameter, like so:
m4_define([MYTEST_CHECK_CAT],
[
AT_BANNER()
AT_SETUP()
AT_CHECK(, , , )
AT_CLEANUP
])

MYTEST_CHECK_CAT
Replace the text in mytest.at with the above code. You’ll notice we’ve placed
four new lines in the previously empty second parameter. As described above,
these lines are what will be substituted in place of the keyword defined by the
first parameter. The lines in question are merely empty Autotest macros, as
seen earlier. They must be given argument values, as shown in the next section.
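The substitution described here is ordinary M4 at work, and you can watch it happen with the stock m4 tool. A stand-alone sketch follows (the file name is made up; note that plain m4 spells the macro define, whereas the m4_ prefix comes from the m4sugar layer Autotest files are processed with):

```shell
# demo.m4 (hypothetical file): define a macro, then call it.
# The trailing dnl discards the newline left behind by the define call.
cat > demo.m4 <<'EOF'
define([MYTEST_CHECK_CAT], [AT_BANNER AT_SETUP AT_CHECK AT_CLEANUP])dnl
MYTEST_CHECK_CAT
EOF
m4 demo.m4   # prints: AT_BANNER AT_SETUP AT_CHECK AT_CLEANUP
```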
The only test we’re writing here is for the following assertion:
cat prints an error message for a non-existing file
This test should just about do it:
m4_define([MYTEST_CHECK_CAT],
[
AT_BANNER([cat simple unit tests])
AT_SETUP([execute cat with non-existing file])
AT_CHECK([cat /dev/nulls], [ignore], [], [cat: /dev/nulls: No such file or directory
])
AT_CLEANUP
])

MYTEST_CHECK_CAT
Each of these lines works as follows:
AT_BANNER([cat simple unit tests])
This merely describes some text that should be printed before the tests are executed. It is useful for giving a title to a group of tests and hence enforcing a separation between groups.
AT_SETUP([execute cat with non-existing file])
This describes the name of the test in question, most likely a brief description of what the test does.
AT_CHECK([cat /dev/nulls], [ignore], [], [cat: /dev/nulls: No such file or directory
])
This is the real juicy part. The first parameter describes what operation to
run. In this case, we’re running cat on a non-existent file (note the
deliberate misspelling /dev/nulls). The second parameter describes the
expected exit status; cat exits with a non-zero status here, but since the
exact value isn’t what we’re testing, [ignore] tells Autotest to skip that
check. The third parameter describes the expected stdout. The application
should report the error on stderr rather than stdout, so we leave it empty.
Finally, the last parameter describes the expected stderr: this is what the
application should print when given this command, and the test verifies that
it does.
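To tie the four parameters to something concrete, here is the same check performed by hand in plain shell (a sketch; the exact error wording can vary between cat implementations, so it is only displayed here rather than asserted on):

```shell
# Reproduce the AT_CHECK by hand: run the command, then inspect the three
# observable results that the remaining AT_CHECK parameters describe.
status=0
cat /dev/nulls > stdout.txt 2> stderr.txt || status=$?  # parameter 1: the command
echo "status: $status"              # parameter 2: non-zero, which we told Autotest to [ignore]
echo "stdout: [$(cat stdout.txt)]"  # parameter 3: empty, since errors go to stderr
echo "stderr: [$(cat stderr.txt)]"  # parameter 4: e.g. cat: /dev/nulls: No such file or directory
```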
It isn’t possible to run this test as-is, because we’re missing a lot of
configuration stuff (like the AT_INIT invocation). However, if you’re writing
your own tests, you’re most likely plugging into an existing test framework.
The specifics will change from project to project, but someone on the
project’s team should be able to advise you on the details of integration.
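If you do want to run the test stand-alone, a rough sketch of a minimal harness follows. The file names and package details here are made up for illustration; autom4te ships with Autoconf, and AT_PACKAGE_STRING / AT_PACKAGE_BUGREPORT must be defined (normally via a generated package.m4) before AT_INIT runs:

```shell
# Guard so the sketch degrades gracefully where Autoconf isn't installed.
command -v autom4te >/dev/null 2>&1 || { echo "autom4te (Autoconf) required"; exit 0; }

# mytest.at: the macro file developed above.
cat > mytest.at <<'EOF'
m4_define([MYTEST_CHECK_CAT],
[
AT_BANNER([cat simple unit tests])
AT_SETUP([execute cat with non-existing file])
AT_CHECK([cat /dev/nulls], [ignore], [], [cat: /dev/nulls: No such file or directory
])
AT_CLEANUP
])

MYTEST_CHECK_CAT
EOF

# testsuite.at: the driver; AT_INIT must appear before any tests are included.
cat > testsuite.at <<'EOF'
m4_define([AT_PACKAGE_STRING],    [mytest 0.1])
m4_define([AT_PACKAGE_BUGREPORT], [nobody@example.invalid])
AT_INIT
m4_include([mytest.at])
EOF

# Generate the executable testsuite script, then run it.
autom4te --language=autotest -o testsuite testsuite.at
./testsuite
```

Note that the expected stderr text above assumes GNU coreutils’ wording; adjust it to whatever your platform’s cat actually prints.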