GuideToReadingSpecifications

FixtureFixture specifications take a little effort to learn to read, as they are very compact. They're intended for implementors of Fit and of general-purpose fixtures.

I have used such tests to drive the development of several fixtures, including most of the ones in the FitLibrary. I also bootstrapped FixtureFixture with tests in itself.

Latest news: I am in the process of converting the tests to use SpecifyFixture, which makes use of embedded tables. This makes the tests much easier to read as you can see directly what's expected.

Why bother?


For tests that are expected to pass, you can simply write the tests directly without FixtureFixture. But if tests are expected to fail, such as with a wrong value or an exception, it's a pain to have to manually check that the results reported are as expected.

For example, [[plus()][^ColumnFixtureUnderTest]] adds the values of a and b:

|fit.specify.ColumnFixtureUnderTest|
|a|b|plus()|
|0|12|13|
|1|2|3|
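The fixture itself isn't shown on this page. As a rough sketch (the class name comes from the table above, but this body is an assumption), a ColumnFixture like this one exposes public fields for the a and b columns and a method for the calculated plus() column:

```java
// Hypothetical sketch of fit.specify.ColumnFixtureUnderTest.
// In real Fit code this would extend fit.ColumnFixture; the superclass
// is elided here so the sketch stands alone.
class ColumnFixtureUnderTest /* extends fit.ColumnFixture */ {
    public int a;   // bound to the "a" column
    public int b;   // bound to the "b" column

    // Bound to the "plus()" column: Fit compares this result
    // against the value written in the cell.
    public int plus() {
        return a + b;
    }
}
```

Given the rows above, plus() computes 12 for the first row (so the "13" cell is wrong) and 3 for the second (right).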

We expect that "13" will be considered wrong and that its cell will be colored red (run the test and see). The "3" is right and its cell will be colored green.
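The pass/fail decision Fit makes for each calculated cell can be sketched as a plain comparison (illustrative only; this is not Fit's actual API):

```java
// Illustrative sketch: how a calculated cell ends up green or red.
// Fit's real implementation works on parsed table cells; this just
// shows the rule being applied to the "plus()" column.
class CellColoring {
    static String colorFor(String expectedCellText, int actualResult) {
        return expectedCellText.equals(String.valueOf(actualResult))
                ? "green"   // cell value matches the computed result: right
                : "red";    // mismatch: wrong
    }
}
```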

Automating the test


We can use FixtureFixture to define the expected colorings of that table:

|fit.ff.FixtureFixture|
|fixture|fit.specify.ColumnFixtureUnderTest|
|---|a|b|plus()|
|--w|0|12|13|
|--r|1|2|3|

We've wrapped the table above inside a FixtureFixture table (in the rendered page, the inner table appears in bold). The second row of the whole table names the fixture to be tested. The first column of each following row specifies the expected coloring of the cells in the rest of that row, one code per cell:

"-" means the cell is expected to be unmarked.
"r" means the cell is expected to be right (colored green).
"w" means the cell is expected to be wrong (colored red).

When you run the test, you'll see that the first column is colored green, and that the internal colorings are not included in the counts (1 right and 1 wrong are from the first table above):
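The first-column marker string decodes one character per cell; a small sketch of that decoding (the class and method names here are made up for illustration):

```java
// Decode a FixtureFixture marker string such as "--w": the i-th
// character states the expected marking of the i-th cell in that row.
class Markers {
    static String expectationFor(char code) {
        switch (code) {
            case '-': return "unmarked";
            case 'r': return "right (green)";
            case 'w': return "wrong (red)";
            case 'e': return "error";
            default:  throw new IllegalArgumentException("unknown code: " + code);
        }
    }
}
```

So "--w" reads as: first two cells unmarked, third cell expected to be wrong.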

|fit.Summary|

This means that we can have a whole suite of tests that are expected to all pass, even though the embedded tables don't.

What are we testing?


In order to test a fixture, we need an underlying System Under Test (SUT) with simple, known behavior; our tests then check that the fixture works appropriately for that SUT. This is a little backwards from usual tests, but it follows the general principle that testing involves checking the consistency of two distinct "systems".

Expecting errors and exceptions


An error is given for the value "thirteen" because a number is expected.
|fit.specify.ColumnFixtureUnderTest|
|a|b|plus()|
|0|12|thirteen|

We can specify that with an "e":
|fit.ff.FixtureFixture|
|fixture|fit.specify.ColumnFixtureUnderTest|
||a|b|plus()|
|--e|0|12|thirteen|
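The distinction between a wrong and an error can be sketched like this (a simplified assumption, not Fit's real parsing code): a wrong is a value that parses but doesn't match, while an error is a cell that can't be parsed at all (or an exception thrown by the fixture).

```java
// Simplified sketch: 'w' for a parseable value that doesn't match,
// 'r' for a match, 'e' for a cell that can't even be parsed as a number.
class NumericCellCheck {
    static char markFor(String cellText, int actualResult) {
        try {
            return Integer.parseInt(cellText.trim()) == actualResult ? 'r' : 'w';
        } catch (NumberFormatException e) {
            return 'e';  // e.g. "thirteen": marked as an error
        }
    }
}
```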

Added Text


We can also check that error messages are added to a cell, as expected. Consider the added text in the first table above. We can specify that as follows:

|fit.ff.FixtureFixture|
|fixture|fit.specify.ColumnFixtureUnderTest|
||a|b|plus()|
|--w|0|12|13|
|report|0|12|13 expected12 actual|
|--r|1|2|3|
|report|||3|

We've added two extra report rows; these are not rows of the inner table. Instead, each specifies the text that's expected to appear in the row above it after the fixture has run. We expect the "13" cell to end up containing "13 expected12 actual"; notice that this doesn't include the HTML line break that separates the expected and actual values. On the other hand, we expect the "3" cell to be unchanged. An empty cell in a report row, as we see in the last row, means that that cell is not checked.
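The report-row rule just described — an empty cell means unchecked, otherwise the full reported text must match — can be sketched as follows (illustrative names, not FixtureFixture's internals):

```java
// Sketch of the report-row check: an empty expected cell is skipped;
// otherwise the cell's text after the run must equal the expected text.
class ReportRow {
    static boolean cellOk(String expectedText, String actualTextAfterRun) {
        if (expectedText.isEmpty()) {
            return true;  // empty report cell: not checked at all
        }
        return expectedText.equals(actualTextAfterRun);
    }
}
```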

Added Cells and Rows


Cells added to a row can be checked with report.

So that leaves added rows. SummaryFixture is a good example of a fixture that adds rows:

|fit.Summary|

We can specify that as follows:

|fit.ff.FixtureFixture|
|fixture|fit.Summary|
|I-w|
|report|counts|2 right, 2 wrong, 0 ignored, 1 exceptions|
|I|
|report|run date||
|I|
|report|run elapsed time||

The "I" rows specify that a row will be inserted at that point, along with any expected colorings ("I-w" expects the inserted row's first cell to be unmarked and its second cell to be wrong). The report row following each "I" applies to that inserted row. Notice that we don't check the run date value, as it depends on when the test is run.

Why do the counts differ in the two tables above? It's because when fit.Summary is in an inner table, it reports the counts for all previous inner tables. That way, a sequence of inner tables acts the same whether it is embedded or stands alone.
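The way counts accumulate across tables can be sketched with a Fit-style counts holder, formatted the way the "counts" row above reads (the field names follow Fit's Counts class; the tally method here is a simplified assumption):

```java
// Fit-style counts, formatted like fit.Summary's "counts" row.
class Counts {
    int right, wrong, ignores, exceptions;

    Counts(int right, int wrong, int ignores, int exceptions) {
        this.right = right; this.wrong = wrong;
        this.ignores = ignores; this.exceptions = exceptions;
    }

    // An embedded fit.Summary reports the running total over all
    // previous inner tables, so their counts are tallied together.
    void tally(Counts other) {
        right += other.right; wrong += other.wrong;
        ignores += other.ignores; exceptions += other.exceptions;
    }

    public String toString() {
        return right + " right, " + wrong + " wrong, "
             + ignores + " ignored, " + exceptions + " exceptions";
    }
}
```

Tallying a first table's 1 right, 1 wrong with a second table's 1 right, 1 wrong, 1 exception gives the "2 right, 2 wrong, 0 ignored, 1 exceptions" reported above.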

Why aren't the tests more specific?


There are currently differences in the way that some errors are reported in the Fit within FitNesse and in Fit from http://fit.c2.com. So some definitions/tests currently just expect an exception, without checking the details.

More on FixtureFixture


For more details of FixtureFixture, see: