Version 4 (coder, 16/02/2014 12:25)

h1. TestArchitecture
libavg contains extensive unit tests that cover a very large part of its functionality. They are run on every invocation of @make check@ and as part of the continuous build. The tests are very fast - they run in about 10 seconds on a typical laptop. Considerable effort goes into maintaining the test suite, but in return we gain several major benefits:
* Stability: Even subtle bugs are often detected by the tests before they land in production code.
* Development speed: The development cycle (code change -> compile -> test) is extremely fast. Bugs are quickly pinpointed by the tests.
* Cleaner architecture: The tests allow us to incrementally make changes to the libavg architecture without undue fear of introducing new bugs.
libavg tests come in two main flavors: high-level functional tests and internal unit tests. @make check@ runs all available tests. In addition, individual tests can be invoked at varying levels of granularity to pinpoint errors.
h2. High-Level Functional Tests
The functional tests are written in python and use the libavg API like any other python client would. The over 250 tests are divided into a number of test suites. There should be a test for every public API in libavg; feel free to report a bug if an API has no test. Ideally, all code paths in libavg should be exercised with a test - including error conditions.
The functional tests reside in their own directory, @src/test@. Just invoking
<pre>$ ./Test.py</pre>
runs all functional tests. Command-line parameters can be used to select a test suite or an individual test to run. This is extremely useful for pinpointing errors:
<pre>$ ./Test.py image
$ ./Test.py image testImageMipmap</pre>
If the command line contains an unknown suite, a list of available test suites is printed. @Test.py@ also has a number of additional command-line parameters that select the graphics configuration to use. Run it with @--help@ to get a listing of the parameters.
h3. Result Images
Many of the tests rely on image comparisons to verify correct execution. The test compares a screenshot of the rendered scene with a baseline image checked into source control. If the images are 'similar enough' (determined by calculating average and standard deviation of the difference image), the test passes. Otherwise, it fails. Mismatches - even minor mismatches that don't cause a test failure - are saved in @src/test/resultimages@. Per mismatch, the test saves three images:
* a baseline image that shows what was expected,
* the actual image generated, and
* a difference image.
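
The 'similar enough' check can be sketched as follows. This is an illustrative reimplementation using numpy, not libavg's actual comparison code, and the threshold values are made up for the example:

```python
import numpy as np

def images_similar(baseline, actual, max_avg=2.0, max_stdev=8.0):
    # Difference image: per-pixel absolute difference of the two bitmaps.
    diff = np.abs(baseline.astype(np.int16) - actual.astype(np.int16))
    # The test passes if both the average and the standard deviation of
    # the difference image stay below the (made-up) thresholds.
    return diff.mean() <= max_avg and diff.std() <= max_stdev

baseline = np.zeros((4, 4), dtype=np.uint8)
actual = baseline.copy()
actual[0, 0] = 3          # one slightly-off pixel: still "similar enough"
print(images_similar(baseline, actual))   # True
```

A uniformly brighter image, by contrast, pushes the average difference over the threshold and fails the comparison.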
The resultimages directory is easily inspected in a file browser set to preview with three or six images per line - see the image at the right for an example.
The system ignores minor differences in images because there are several benign causes for these, mostly related to the libraries and systems that libavg builds upon. For instance, a lot of minor mismatches are caused by varying interpretations of the OpenGL standard on different platforms. Text rendering is an exception: The differences between platforms are too large. For this reason, image mismatches in text rendering don't cause test failures; the result images are simply saved for human inspection.
h3. Writing Tests
In code, tests manifest themselves as methods in one of the test suites. These methods are registered at the bottom of each test source file. Here is a very simple test method:
<pre><code class="python">
    def testSample(self):
        def getFramerate():
            framerate = player.getEffectiveFramerate()
            self.assert_(framerate > 0)

        root = self.loadEmptyScene()
        avg.ImageNode(href="rgb24-65x65.png", parent=root)
        self.start(False,
                (getFramerate,
                 lambda: self.compareImage("testsample"),
                ))
</code></pre>

Typical test code consists of three parts:
* Scene setup: In the sample, we simply create a scene containing one image node.
* A series of commands to execute in successive rendered frames: The @start()@ method initializes libavg playback and takes a list of python callables that are invoked from an @ON_FRAME@ handler. In the sample, the test executes @getFramerate()@ in the first frame and compares the rendered image to a baseline image in the second frame. Then it terminates.
* Local functions: Functions to be used during execution of this test.
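
The per-frame dispatch of the callables passed to @start()@ can be illustrated with a toy player. The class below is a simplified stand-in for libavg's @ON_FRAME@ handling, not its real implementation:

```python
class FakePlayer:
    """Minimal stand-in: runs one queued action per rendered frame."""
    def __init__(self, actions):
        self.actions = list(actions)
        self.running = False

    def on_frame(self):
        # Invoked once per frame; runs the next queued action, if any.
        # When the queue is exhausted, playback terminates.
        if self.actions:
            action = self.actions.pop(0)
            action()
        else:
            self.stop()

    def stop(self):
        self.running = False

    def play(self):
        self.running = True
        while self.running:
            self.on_frame()

log = []
player = FakePlayer([
    lambda: log.append("frame 1: check framerate"),
    lambda: log.append("frame 2: compare image"),
])
player.play()
print(log)
```

Each callable thus sees the scene exactly as rendered in its own frame, which is what makes frame-precise assertions like @compareImage()@ possible.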
The call to @compareImage()@ checks whether the current screen contents are similar to an image found in @src/test/baseline@. Note that if the image doesn't exist in the baseline directory, the results are placed in @resultimages/@. A new baseline image can simply be copied from there to @baseline/@ (obviously, the contents need to be inspected first!).
All test suites are derived from @testcase.AVGTestCase@, which in turn is derived from the standard python @unittest.TestCase@. @AVGTestCase@ exposes a number of additional entry points.
To test input and user interface functionality, test methods can use several @_sendXxxEvent@ functions to simulate mouse and touch events. There is also a generic @MessageTester@ class that can be used to check if publishers send out the messages they are expected to. A @MessageTester@ is initialized with a publisher and a collection of @MessageIDs@. It subscribes itself to the @MessageIDs@ and remembers which messages were sent. @MessageTester.isState()@ compares the messages sent to a baseline list of messages expected.
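
The idea behind @MessageTester@ can be sketched with a toy publisher. The classes below are illustrative stand-ins for the pattern, not libavg's actual API:

```python
class Publisher:
    """Toy publisher with libavg-style subscribe/notify semantics."""
    def __init__(self):
        self.subscribers = {}   # message id -> list of callbacks

    def subscribe(self, message_id, callback):
        self.subscribers.setdefault(message_id, []).append(callback)

    def notify(self, message_id):
        for callback in self.subscribers.get(message_id, []):
            callback()

class ToyMessageTester:
    """Records which of the given message ids were actually sent."""
    def __init__(self, publisher, message_ids):
        self.received = []
        for message_id in message_ids:
            # Default argument captures the current message_id per lambda.
            publisher.subscribe(message_id,
                    lambda mid=message_id: self.received.append(mid))

    def is_state(self, expected):
        # Compare the recorded messages to the expected baseline, then reset.
        result = self.received == expected
        self.received = []
        return result

node = Publisher()
tester = ToyMessageTester(node, ["CURSOR_DOWN", "CURSOR_UP"])
node.notify("CURSOR_DOWN")
node.notify("CURSOR_UP")
print(tester.is_state(["CURSOR_DOWN", "CURSOR_UP"]))   # True
```

Resetting the recorded list inside @is_state()@ lets one tester instance verify several interaction sequences in a single test method.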
h2. Low-Level Unit Tests
The low-level unit tests are C++ programs in the individual source directories. They are named with a @test@ prefix and can be run by executing the program on the command line (no parameters are needed). Running the low-level tests can significantly speed up the compile->build->run cycle: just compile up to the directory in question and run the test. For instance, basic OpenGL functionality is tested in @testgpu@; after changes in @src/graphics@, it is enough to run
<pre>$ cd src/graphics
$ make
$ ./testgpu</pre>
to verify that the basic functionality is in place.
The mechanisms are similar to the ones used for the high-level tests. Each unit test is written as a single C++ class, derived either from @Test@ or from @GraphicsTest@. The macros @TEST@, @TEST_FAILED@ and @QUIET_TEST@ are available to register test failures and successes. @GraphicsTest@ provides an additional @testEqual()@ function to compare images - similar to the python @compareImage()@ function explained above. Using the two overloads of this function, generated bitmaps can be compared either to a file or to another in-memory bitmap.
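
The bookkeeping these macros perform can be sketched language-neutrally. The class below is an illustrative python rendition of the register-failures-and-successes pattern, not the actual C++ @Test@ base class:

```python
class ToyTest:
    """Illustrative stand-in for a Test-style base class."""
    def __init__(self, name):
        self.name = name
        self.ok = 0
        self.failed = 0

    def check(self, condition, expr):
        # Corresponds roughly to a TEST(expr) macro: record the outcome
        # and report failures immediately with the offending expression.
        if condition:
            self.ok += 1
        else:
            self.failed += 1
            print("%s: TEST(%s) failed" % (self.name, expr))

    def is_ok(self):
        return self.failed == 0

t = ToyTest("testgpu")
t.check(1 + 1 == 2, "1 + 1 == 2")
print(t.is_ok())   # True
```

Because every check is counted rather than aborting the run, a single test binary can report all failures in one pass.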