[PEAK] DDT for Document-Driven Testing (was Re: proper unit testing)

Phillip J. Eby pje at telecommunity.com
Thu Nov 4 22:00:42 EST 2004


At 05:56 PM 11/4/04 -0500, R. David Murray wrote:
>So far I've been getting along with using integration tests run
>from my peak runIni script.  Now I'm trying to write proper
>unit tests to speed up my test cycle.

FYI, at some point you may want to check out the DDT package and commands, 
as well.  For example:

pje at pje-2 ~/PEAK
$ peak ddt
Usage: peak ddt inputfile.html [outputfile.html]

Process the tests specified by the input file, sending an annotated version
to the output file, if specified, or to standard output if not specified.
Both the input and output files may be filenames or URLs.

A summary of the tests' pass/fail scores is output to stderr, and the command's
exitlevel is nonzero if there were any problems.


ddt: Input filename required
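
In normal use you'd give it an input document and, optionally, an output 
file; e.g., with a hypothetical test document named 'requirements.html':

     $ peak ddt requirements.html results.html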


Basically, 'peak.ddt' is an acceptance testing framework in the spirit of 
FIT.  It works using HTML tables: the first cell of each table identifies 
the test to be run, and the remaining cells contain the test data.  One of 
the other commands, which will be more useful to start with, is:

     $ peak ddt.web src/peak/ddt/tests

This will launch your web browser with a list of test files.  Click on 
"Action_Test.html", and notice the coloring of the table cells, which 
indicates which values are correct and which are incorrect.  If you look 
at the original "Action_Test.html", you'll see it has plain, uncolored 
table cells; the coloration and result summary are added by DDT.
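
To give a rough idea of the input format, a DDT source document is just 
ordinary HTML; a test table might look something like this (an illustrative 
sketch with made-up test and field names, not a copy of Action_Test.html):

     <table>
       <tr><td>Demo Action Test</td></tr>
       <tr><td>enter</td><td>name</td><td>Fred</td></tr>
       <tr><td>check</td><td>greeting</td><td>Hello, Fred!</td></tr>
     </table>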

Under the hood, the system works by taking the first cell of any table, 
converting its text to a property name, and looking that name up in 
'peak.ddt.processors' to obtain a processor instance.  There are many 
built-in processors, most of them optimized to work with peak.model and 
peak.storage.SQL.  For example, ActionChecker lets you simulate a UI 
performing actions against a model.Element, and verify the results.  I 
suggest looking at the 'peak.ddt.processors' section of peak.ini, the 
'peak.ddt.demos' module, and 'peak help ddt.processors' for more info on 
what you can do with these.
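
In case the dispatch mechanism isn't clear, here's a rough, framework-free 
Python sketch of the concept.  All the names below are made up for 
illustration; the real implementation lives in peak.ddt, and uses PEAK's 
property namespace machinery rather than a plain dictionary:

     # Illustrative sketch only -- not the actual peak.ddt code.
     processors = {}    # stands in for the 'peak.ddt.processors' namespace

     def text_to_property_name(text):
         # e.g. "Demo Action Test" -> "demo.action.test"; the real
         # conversion rules are peak.ddt's, this is just a stand-in
         return '.'.join(text.lower().split())

     def process_table(table):
         # 'table' is a list of rows, each a list of cell strings
         name = text_to_property_name(table[0][0])
         processor = processors[name]()     # look up and instantiate
         return processor.process(table)    # processor scores the cells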

You may be wondering why you should go to the trouble of making HTML tables 
to specify test data that you could just as easily code up in Python unit 
tests.  Well, DDT isn't really for "you"; it's for your "customers".

More precisely, the intent is that you write the test data tables directly 
into your requirements documents, using MS Word, Open Office, or anything 
else that has a reasonable way to export to HTML and only generates tables 
where you have tables in the source document.  Your requirements document 
then becomes a "test oracle" that can show the current progress of your 
system, relative to the requirements.

Equally important, with a little education, system analysts and others can 
be shown how to *change* the test data, by editing the source document, to 
reflect new requirements or changes to existing ones.  By putting the 
source documents in a designated location on a server, and adding 
appropriate configuration, you can actually have a "test server", whereby 
anyone can simply go to the right URL and see the current test results.  
(This works because DDT's browser-based modes are actually a simple 
'peak.web' application.)

At this point, Ty and I have only used DDT in production for one project, 
but it worked out quite well for the situation.  We were involved in a 
database migration project where the DBA people doing the migration knew 
nothing about how the application was supposed to work.  So, we created DDT 
test documents that explained the existing functionality, with HTML tables 
listing example sequences of actions and expected results.  We then wrote a 
couple of simple project-specific test processors (an XML-RPC invoker and
a stored procedure invoker), and threw the requirements docs and some 
config files on a server.  We were then able to give the DBA folks some 
URLs where they could run the tests against the old database and the new 
database.  And, as soon as they made changes, they could rerun the tests to 
see whether their changes were effective.  The result was a much faster 
feedback loop.  And, because the tests were embedded in documentation about 
how things were supposed to work, there was less back-and-forth about why a 
particular thing needed to be a particular way.
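
To give a flavor of what such a project-specific processor involves, here 
is a hypothetical sketch of an XML-RPC invoker, using the standard 
library's xmlrpclib.  All the names are made up, and this is not the code 
from that project:

     # Hypothetical sketch of an XML-RPC "invoker" processor.
     import xmlrpclib

     class XMLRPCInvoker:
         """Treat each row as: methodName, arguments..., expected result"""

         def __init__(self, url):
             self.server = xmlrpclib.ServerProxy(url)

         def check_row(self, cells):
             method_name, args, expected = cells[0], cells[1:-1], cells[-1]
             actual = str(getattr(self.server, method_name)(*args))
             # a match would color the expected-value cell green; a
             # mismatch would color it red, annotated with 'actual'
             return actual == expected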

Anyway, Document-Driven Testing is a very useful way to "bring people in" 
to the development process by giving them information they can understand 
about what the system is supposed to do, and whether it's doing it right now.



