In-memory database fixtures with Clojure and sqlite

For a recent project, I needed to extract data from a sqlite3 database. Writing the Clojure code to retrieve data was very straightforward with clojure.java.jdbc and java-jdbc/dsl. Naturally, I wanted to have some tests for this code as well. In a previous Python project, I had a lot of fun using sqlite's in-memory feature to run very speedy database tests, so of course I wanted this for my current Clojure project, too. This turned out not to be as easy as I had expected, though, so I'm documenting it here for the next naive soul. My initial attempt with clojure.java.jdbc, java-jdbc/dsl and midje looked basically like this:

    (def testdbspec
      {:subprotocol "sqlite"
       :subname ":memory:"})

    (defn make-bookmark-table []
      (jdbc/with-db-transaction [db testdbspec]
         (jdbc/db-do-prepared db
           (ddl/create-table :bookmarks
                      [:id :int :primary :key]
                      [:type :int]
                      [:title "longvarchar"]))))

    (defn add-bookmark []
      (jdbc/with-db-transaction [db testdbspec]
         (jdbc/db-do-prepared db
           (str 
              "INSERT INTO bookmarks (id, type, title) "
              "VALUES ('12453', '2', 'a bookmark')"))))

    (defn setup-database []
       (make-bookmark-table)
       (add-bookmark))

    (facts "Testing database access to bookmarks"
       (with-state-changes [(before :facts (setup-database))]
            (fact "We can retrieve a list of bookmarks"
                (fetch-tags :dataspec testdbspec) => [{:title "a bookmark"}])))

This will fail quite early: as soon as the with-db-transaction in make-bookmark-table has finished its work, the connection to the database is closed. As a result, when the next with-db-transaction or jdbc/query is run, you'll connect to a fresh in-memory database which doesn't have the tables you just created. My old Python test code didn't have this problem, because the setUp method of the TestCase would create the database connection (via sqlalchemy's create_engine) and keep it alive until the TestCase's tearDown method ran.
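
To see the failure mode in isolation, here is a minimal REPL sketch (assuming the testdbspec defined above; the exact error message depends on the SQLite JDBC driver):

    ;; Each call opens its own connection to ":memory:" and closes it again,
    ;; so every call sees a brand-new, empty database.
    (jdbc/db-do-prepared testdbspec "CREATE TABLE t (id INT)")
    ;; This runs against a fresh in-memory database without the table:
    (jdbc/query testdbspec ["SELECT * FROM t"])
    ;; => throws an SQLException, since table t does not exist here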

I tried returning the database connection from make-bookmark-table, but this just results in a "connection closed" error. Unfortunately, clojure.java.jdbc doesn't support opening and closing the connection yourself. Sure, you can use get-connection, but you can't feed this into either with-db-transaction or query. query uses with-open internally, which will conveniently close the connection for you. In a post on the perils of dynamic scope, Stuart Sierra calls this the Dynamically-Scoped Singleton Resource and files it under 'anti-pattern'. I got bitten by pretty much exactly what Stuart describes: when dealing with sqlite's in-memory feature, we would like to manage the connection ourselves, but we can't.
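
For illustration, this is roughly the shape of what I would have liked to write. It is a purely hypothetical sketch: make-bookmark-table and run-bookmark-tests are stand-ins that would have to operate on the raw java.sql.Connection, which query and with-db-transaction won't accept:

    ;; Hypothetical sketch: hold a single connection open for the whole
    ;; fixture lifetime, like the Python setUp/tearDown pair did.
    (let [conn (jdbc/get-connection testdbspec)]
      (try
        (make-bookmark-table conn)
        (run-bookmark-tests conn)
        (finally (.close conn))))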

After banging my head against this for a while, the only option I could come up with was to extract the relevant with-db-transaction from the setup code. Instead, you wrap the tests in the transaction and then call the setup code inside it, like this:

    (defn make-bookmark-table [db]
      (jdbc/db-do-prepared db
           (ddl/create-table :bookmarks
                      [:id :int :primary :key]
                      [:type :int]
                      [:title "longvarchar"])))

    (defn setup-tables [db]
       (make-bookmark-table db))

    (defn add-bookmark [db]
       (jdbc/db-do-prepared db
           (str 
              "INSERT INTO bookmarks (id, type, title) "
              "VALUES ('12453', '2', 'a bookmark' )")))

    (defn remove-bookmark [db]
       (jdbc/db-do-prepared db
            (str "DELETE FROM bookmarks WHERE id = '12453'")))

    (facts "Testing database access to bookmarks"
       (jdbc/with-db-transaction [db testdbspec]
            (setup-tables db)
            (with-state-changes [(before :facts (add-bookmark db))
                                 (after :facts (remove-bookmark db))]
                 (fact "We can retrieve a list of bookmarks"
                     (fetch-tags :dataspec db) => [{:title "a bookmark"}]))))

This works as expected.


Version control and testing

What do version control and testing have to do with each other? Well, first of all, both are common virtues in the clean code community. What you'll find is that both virtues are important in their own right: version control provides a safety net in that you can roll back to prior versions if you accidentally introduce problems into your code. Testing (automated unit tests) provides a safety net, too, because you can do regression testing as you work on your code. These are both fine goals, but they seemingly have little to do with each other.

But in reality they do. For the sake of argument, let's take a step back and assume that you have to work in an environment of several developers where neither of these things exists. What will you likely see? What we all saw several years ago: commented-out code blocks, redundant and often misleading or outdated comments, timestamps and comments cluttered all over the code. And frightened developers who feared every minor change because of the myriad of subtle side effects it might have, let alone major changes to core components. It's an environment in which refactorings as well as extensions are very hard and expensive, which results in frightened, overworked developers and frustrated managers.

So, what happens when you introduce only one of those virtues? Say we introduce version control. Now every change gets documented, except that documenting every change requires, from the developer's point of view, documentation at the wrong point. They can't see the documented changes and the reasons for these changes in the source; they see them only in the version control system, and then only if they add a change message with every change at all. Much more likely is that you will see commit messages such as "." or "bug fix", and the same old mess of timestamps, outdated comments and commented-out code as before. Why is that? Because your developers are now not as frightened as they used to be (they can rely on the version control system to fall back to older versions), but they still have the same need to understand and document the code. And the commit log is both "too far away" from the code and outside its purpose for this task: the commit log shouldn't document what the code is supposed to do, only when something was implemented to behave in a particular way.

This is where a development (unit) test suite comes into the picture: you document every required behaviour in tests. With every change to the code, you also update the tests. As a developer, you can now look into your test suite to see what the code is supposed to do. Developers will likely become much more confident with their changes, because they can run the tests and see what happens (hopefully almost immediately) without requiring time- and resource-consuming manual tests.

But what about documenting the changes to the code? Well, you should simply document any changes in the commit message of your version control system, because it’s now no longer necessary to keep the entire version history in mind to understand what the current code state is supposed to do. You have the tests that tell you what the code should do. The commit log now only serves the purpose of documenting what has changed over time and is no longer required to understand what the code should do. So you don’t have to keep the clutter in your code, resulting in much cleaner source code files.

Summary: taken together, version control and testing add up to more than the sum of their individual values.


Testing and terminology confusion

I've become quite addicted to writing tests during my development tasks. I had wanted to dig into test-driven development for quite some time, but it was the seamless integration of Test::Unit, Ruby's unit testing module, into Eclipse that got me going initially. I then did some unit testing with Common Lisp packages and am currently heavily using pyunit and Python doctests (mostly in the context of Zope testing). Writing tests has become second nature in my development work: it gives you that warm, fuzzy feeling of having a little safety net while modifying code.

However, there are times when terminology comes along and gives you a headache. One distinction I've learned about during the last year is the difference between unit tests, integration tests and functional tests (for an overview, see Wikipedia on software testing). But as you can see, for instance, in this article on integration tests in Rails, it's not always easy to agree on what means what: Jamis and/or the Rails community seem to have the integration/functional distinction entirely backwards from what, for instance, the Zope community (on testing) thinks.

Now, one might argue that terminology doesn't matter much as long as you write tests at all, but it's not so easy. For instance, if your "unit test" of a given class requires another class, is that still unit testing or is it integration testing? Does it even make sense to talk about unit-testing a class? A class on its own isn't that interesting after all; it's its integration and interoperation with collaborators where the semantics of a class and its methods become interesting. Hence, shouldn't you rather test a specific behaviour, which probably involves those other classes? And what if your code only makes sense when run on top of a specific framework (Zope, Rails, you name it)? Michael Feathers argues convincingly in his set of unit testing rules that any such tests are probably something else.

Ultimately these questions pertain directly to two aspects: code granularity and code dependencies (and remember, test code is code after all). These are directly related, of course: if your code is very fine-grained, it's much more likely that it will also be much more entangled (although the dependency might be abstracted with the help of interfaces or some such, you still have the dependency as such). As a consequence, your test code will have to mimic these dependencies. Conversely, if your code blocks are more coarse-grained (i.e. cover a greater share of functionality), you might have fewer (inter-)dependencies, but you won't be able to test functionality on a more fine-grained level. As Martin Fowler's excellent article Mocks Aren't Stubs discusses in detail, one way to loosen these connections between code and tests is to use mock objects or stubs. Fowler's article also made clear to me that I used the term "mock object" wrongly in my post on mock objects in Common Lisp: dynamically injecting an object/function/method (as a replacement for a collaborator required by the code under test) that returns an expected value means using a stub, not a mock. Another sign of not clearly enough defined terminology (by the way, the terminology Fowler uses is that of Gerard Meszaros's xUnit patterns book).

It’s worth keeping these things apart because of their different impact on test behaviour: mocks will force you to think about behaviour whereas stubs focus on ‘results’ of code calls (or object state if you think in terms of objects being substituted). As a result, when you change the behaviour of the code under test (say you’re changing code paths in order to optimize code blocks) this might (mocks) or might not (stubs) result in changes to the test code.
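
To make the distinction concrete, here is a small Clojure sketch; fetch-price and total are hypothetical functions invented purely for illustration:

    ;; Hypothetical collaborator: imagine it talks to a database or web service.
    (defn fetch-price [item-id]
      (throw (ex-info "real service not available in a unit test" {:id item-id})))

    ;; Hypothetical code under test.
    (defn total [item-ids]
      (reduce + (map fetch-price item-ids)))

    ;; Stub style: substitute canned results and assert on the return value.
    (with-redefs [fetch-price (constantly 10)]
      (assert (= 30 (total [1 2 3]))))

    ;; Mock style: record the interactions and assert on the calls themselves.
    (let [calls (atom [])]
      (with-redefs [fetch-price (fn [id] (swap! calls conj id) 10)]
        (total [1 2 3])
        (assert (= [1 2 3] @calls))))

An internal rewrite of total that still returns 30 keeps the stub-based test green, while the mock-style test also breaks if the set or order of calls changes.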

It's also worth thinking about mocks and stubs because they shed a new light on the question of test granularity: when you substitute real objects in either way, you're on your way to much more fine-grained tests, which implies that you loosen the dependencies of your tests: you can now modify the code of your collaborator class without the test for your code under test breaking. Which brings us back full circle to the distinction between unit tests and integration tests: you might now have perfect unit tests, but you're forced to additionally test the integration of all the bits and pieces. Otherwise all your unit tests might succeed while your integrated code still fails. Given this relationship, it seems immediately clear that 100% test coverage might not be the most important issue with unit tests: you might have 100% unit test success, but 100% integration failure at the same time, if you don't do continuous integration and integration tests, of course. Now, what's interesting is that it might be possible to check test coverage on code paths, but it might not be easy to check integration coverage. I would be interested to learn about tools detailing such information.

Recently I had another aha moment with regard to testing terminology: Kevlin Henney's presentation at this year's German conference on object-oriented programming, OOP 2009, on "Know your units: TDD, DDT, POUTing and GUTs". TDD is test-driven development, of course. The other ones might not be so obvious: GUTs are just good unit tests, and POUT is plain old unit testing. I saw myself doing TDD, but come to think of it, I'm mostly applying a combination of TDD, POUT (after-the-fact testing) and DDT: defect-driven testing. I find the introduction of a term for testing after the code has been written interesting, because it provides a way to talk about how to introduce testing in the first place. Especially defect-driven testing, the idea of writing a test to pinpoint and overcome an erroneous code path, might be a very powerful way to introduce the habit of regularly writing (some) tests for an existing large code base. You avoid the pitfall of never being able to test "all this code because there is never the time for it", and you might also motivate people to try writing tests before code. At this level, it might at first not be that important to draw the distinction between integration and unit tests too sharply: start out with whatever is useful.


Unit tests with mockups in Lisp

One of the bigger practical problems with unit testing is isolating the test coverage. Say you want to test a piece of code from the middle (business) layer. Let's assume further that the piece of code under consideration makes some calls to lower-level code to retrieve some data. The problem of test coverage isolation is that if you "simply" call your function, you are implicitly also testing the lower-level code, which you shouldn't: if that lower-level code gets modified in an incorrect way, you would suddenly see your middle-level code fail although no change was made to it. Let's explore ways to avoid this problem in Common Lisp.

There is a very good reason why you would also want to have such test dependencies: to ensure that your middle-level code still works if the lower-level code is extended or modified. But that is no longer unit testing; you are then doing so-called integration tests, which are related, but still different beasts.

Now, I was facing exactly this typically dreaded situation: I extended an application right above the database access layer, which had not seen many tests yet. And of course, I didn't want to go the long way (which I will eventually have to go anyway) and set up a test database with test data, write setup and tear-down code for the db, etc. The typical suggestion (for the xUnit crowd) is to use mock objects, which finally brings us on topic. I was wondering if there are any frameworks for testing with mock objects in Lisp, but a quick search didn't turn up any results (please correct me if I've missed something). After giving the issue a little thought, it seemed quite clear why there aren't any: probably because it's easy enough to use home-grown solutions such as mine. I'll use xlunit as the test framework, but that's not relevant. Let's look at some sample code we'll want to test:

[geshi lang=lisp]
(defun compare-data (data &optional connection)
  (declare (ignorable connection))
  (let ((dbdata (retrieve-data (id data))))
    (when (equal (some-value data) (some-db-specific-value dbdata))
      t)))
[/geshi]

The issue is with retrieve-data, which is our interface to the lower-level database access. Note that we'll use some special functions on the results, too, even if they may just be accessors. Let's assume the following test code:

[geshi lang=lisp]
(use-package :xlunit)

(defclass comp-data-tc (test-case)
  ((testdata :accessor testdata :initform (make-test-data))))

(def-test-method comp-data-test ((tc comp-data-tc))
  (let ((result (compare-data (testdata tc))))
    (assert-equal result t)))
[/geshi]

Now the trouble is: given the code as it is now, the only way to make the test succeed is to ensure that make-test-data returns an object whose values match values in the database you're going to use when compare-data gets called. You're ultimately tying your test code (especially the result of make-test-data) to a particular state of a particular database, which is clearly unfortunate. To overcome that problem, we'll use mock objects and mock functions. Let's define a mock object mock-data and a mock-retrieve-data function, which will look up a mock object in our test data.

[geshi lang=lisp]
(defclass mock-data ()
  ((id  :accessor id :initarg :id :initform 0)
   (val :accessor some-db-specific-value :initarg :val :initform "foo-0")))

(defun mock-retrieve-data (testcase)
  (format t "Establish mock for retrieve-data")
  (lambda (id)
    (format t "mock retrieve-data id: ~A~%" id)
    (find-if #'(lambda (elem) (when (equal (id elem) id) elem))
             (testdbdata testcase))))
[/geshi]

Why mock-retrieve-data returns a closure will become clear in a second, after we've answered the question of how this differently named class and function can be of any help. The answer lies in CL's facility to assign different values, or rather definitions, to the function slots of symbols. What we'll do is simply assign the function definition we've just created as the function to use when retrieve-data is called. This happens in the setup code of the test case:

[geshi lang=lisp]
(defclass comp-data-tc (test-case)
  ((testdata   :accessor testdata   :initform (make-test-data))
   (testdbdata :accessor testdbdata :initform nil)
   (func       :accessor old-retrieve-func :initform nil)))

(defmethod set-up ((tc comp-data-tc))
  ;; set up some test data
  (dotimes (number 9)
    (setf (testdbdata tc)
          (append (list (make-instance 'mock-data
                                       :id number
                                       :val (concatenate 'string "value-"
                                                         (princ-to-string number))))
                  (testdbdata tc))))
  ;; establish our mock function
  (when (fboundp 'retrieve-data)
    (setf (old-retrieve-func tc) (fdefinition 'retrieve-data)))
  (setf (fdefinition 'retrieve-data) (mock-retrieve-data tc)))

(defmethod tear-down ((tc comp-data-tc))
  ;; After the test has run, re-establish the old definition
  (when (old-retrieve-func tc)
    (setf (fdefinition 'retrieve-data) (old-retrieve-func tc))))
[/geshi]

You can now see why mock-retrieve-data returns a closure: this way, we can hand the data we establish for the test case down to the mock function without resorting to global variables.

Now, the accessor fdefinition comes in extremely handy here: we use it to assign a different function definition to the symbol retrieve-data, which will then be called during the unit test of compare-data. Running the test now produces output like this:

    ..Establish mock for retrieve-data
    mock retrieve-data id: 0
    F
    Time: 0.013

    There was 1 failure: ...

There is also symbol-function which could be applied similarly and which might be used to tackle macros and special operators. However, the nice picture isn’t as complete as one would like it: methods aren’t covered, for instance. And it probably also won’t work if the function to mock is used inside a macro. There are probably many more edge cases not covered by the simple approach outlined above. Perhaps lispers smarter than me have found easy solutions for these, too, in which case I would like to learn more about them.

