Mar 29

This has been a rather unpleasant month (don't ask, I won't tell), but right now I'm looking forward to its end for two reasons. For one, I'll be in Hamburg for the European Lisp Symposium for the next two days; in the meantime, the program for the ELS has also been published, and I'm really looking forward to an interesting set of talks. For another, some patches to CL-SQL which add support for autoincrement behaviour for PostgreSQL are probably going to be released soon. To clarify, "autoincrement" is a column constraint in MySQL (among others) that automatically increments the value of the column when a new row is inserted without a value for that column (cf. MySQL docs on AUTOINCREMENT), a behaviour that PostgreSQL supports with the serial constraint (cf. this wikibook on converting between MySQL and Postgres).

Actually, those patches were my first substantial amount of Common Lisp programming in the last two years, triggered by an upgrade of my Debian system. The upgrade implied that an old application of mine would now use CL-SQL version 5.0, which in turn broke the app: I had simply specified a db-type of "serial" previously, but the new CL-SQL code wouldn't recognize that it had to fetch the automatically generated value from the DB when inserting a new record. More details on the patches can be found on the CL-SQL mailing list.

Developing this addition was also the first time I had a real-world setup for working with git. In my own projects I use mercurial, so I was eager to learn a little bit more about the differences. It's funny that a recent opinionated article, "Why I like mercurial better than git", more or less talks only about the one point that I found confusing: branch handling. For more background information, I suggest reading the article "A guide to branching in mercurial". Basically, in my current projects where I use mercurial, I'm using the "branching with clones" approach Steve describes there. When working on the patches for CL-SQL, I was working on the existing autoincrement branch, but when I was through I wanted to port my patches to the master branch. When using mercurial with the described approach, selecting (pulling or pushing) my patches and only my patches to the master branch is dead easy: you just issue a pull/push command restricted to the "right" changesets. Doing this is even supported by Subversion these days via svn cherry picking. Looking at the docs for git pull, fetch and merge, I wasn't able to figure out what the corresponding "right" incantation for git might look like, if there is one at all. As I didn't want to hose my "working copy" (sorry for the SVN term again), I resorted to git format-patch and git am, respectively, which worked fine. Please note that I'm not suggesting it's not possible with another approach; quite the contrary, I would be happy to learn about it.

One thing that I found rather useful is git's stash command, which lets you safely set aside your current work and fall back to the last committed version, in order to work on something that popped up in between (typically a minor unrelated problem you encounter while working on a larger set of changes). I understand that mercurial's patch queues enable similar functionality, but I haven't used them so far. Another thing that I found very useful is git's very easy way to correct (or in git terminology "amend") a commit by just issuing "git commit --amend". I also like the idea of the "index", or more exactly that you have to explicitly "add" the changes you want to commit. Similar behaviour is possible with SVN's "changelist" command, but the mere existence of a changelist is not automatically honoured by SVN's commit.

Posted by Holger Schauer

Sep 22

For the holidays I finally bought Peter Seibel's Coders at Work, which is a very unusual book about programming: it consists solely of interviews with pretty well known programmers or "coders". It's an interesting constellation: On the one hand, Peter Seibel is well known in the Common Lisp community for his book Practical Common Lisp, which gives a modern view on Lisp: not only is it an introduction to the language but also to several libraries and the general setting of modern lisp programming. On the other (fifteen) hands, there are people like Jamie Zawinski (XEmacs, Netscape), Don Knuth (TeX, The Art of Computer Programming), Guy Steele (Lisp, Scheme, Java), Peter Norvig (PAIP, Google), Brendan Eich (Javascript) and Ken Thompson (Unix) — just to name the ones that are probably the best known.

I had resisted the urge to buy the book because I've always felt that programming is a craft that ultimately forces you to make your own experiences. I mean, you can read all the books you like, but ultimately you have to get your own hands dirty to really learn about the issues involved. So, what could I learn from other people's experiences? On the other hand, as a lightweight (in terms of reading attention) holiday book it seemed about right, so I finally gave in.

Well, the book turned out to be a real page turner for me. It's a fascinating read because of the recurring topics Seibel is addressing and the various opinions he got. He addresses topics you would expect, like preferred tools (e.g. editors), worst bugs, debugging techniques, assertions and verification, literate programming (which surprised me a little), design approaches and team work, but of course the main focus is the personal experiences and how they wound up with whatever made these guys known. One thing that I liked is that Seibel has a way of asking good follow-up questions to the responses he gets, without ever letting his own experiences or opinions get in the way, which I imagine has probably made for pleasant interview situations (at least that's the impression I take away). I wouldn't have imagined beforehand that I would find the different stories of how the guys (and one woman) got into coding so interesting. There are very few people in this book whose experience doesn't go back to teletype and time sharing systems. Of course, as a result these stories tend to be similar, but the details differ enough that it doesn't get too boring. Having started with computers in the early 80s, I don't have any experience with such systems, which, frankly, I don't miss at all after reading more about them. But just getting to this conclusion is interesting: the constant comparison with your own experiences and opinions that you can't help making while reading is alone worth buying the book for.

Over all, it's hard to say which interviews I found the most interesting, as essentially each has some unique point or other. That being said, the interviews with Joe Armstrong and Guy Steele made a lot of impression on me, whereas I'm a little disappointed by the one with Peter Norvig (though he had the funniest quotes), but I can't really nail down why. I didn't particularly like the interview with Brad Fitzpatrick; it didn't seem to contain as much information as the others. And Joshua Bloch seemed to hype Java all the time, which I didn't find very convincing — the idea that today's larger context for programming contains quite a few different languages and approaches seems to elude him.

There are some points I took away from this book: For one, most of the interviewed people seem to be much more concerned with data types than I am, even the ones who have done extensive work on dynamically or weakly typed languages. I guess I should really take a closer look at that topic and, to make it more concrete, play around with e.g. Haskell. Another point is that concurrency or parallel programming is a topic that (IIRC) all of the interviewees have seen as being responsible for the worst bugs they encountered, and as a result they are interested in newer approaches like STM. So, it might be worthwhile to look closer into such developments, for example by playing with Clojure, Erlang, or the transaction monad, if I ever really play around with Haskell. A third point is that I realized that I'm not keeping up with academic research in CS and, not owning TAoCP, might never have been up to date at all. I'm following a few online references like LtU, but not closely, and it's pretty rare these days that I look deeply into some research paper. This is something else I should probably change, if time permits.

Posted by Holger Schauer

Nov 9

Via Lambda the Ultimate I came across an interesting article, "On data abstraction, revisited" by William Cook, written for OOPSLA'09. It carefully dissects abstract data types from objects. All theoretical considerations that distinguish ADTs and objects aside, there is one common characteristic given by Cook: you can't inspect the concrete representation of the data you're abstracting. This is interesting in itself and reminded me of two rather practical things.

First of all, I was reminded of a section in Bob Martin's Clean Code book which discussed the idea that you should on the one hand follow the rule "Tell, don't ask" and on the other hand have data access objects that don't have much, if any, behaviour besides providing data. This is obviously directly related to Cook's article: if you want data abstraction, you shouldn't really provide any way for other objects/methods to access the internal representation. This also more or less forbids getters, as they are likely to lead to leaky abstractions, since more often than not programmers simply return the value of some data field, directly exposing the representation chosen. Now, please note that this does not necessarily follow from Cook's article, as it is possible to design getters in such a way that you can return whatever you want from a getter method, i.e., you can return a desired return type or an object satisfying a particular interface. For me, the relevant point here is the way of thinking about the kind of object at hand: do I want some behaviour (aka Cook's objects) or do I want a data sink. In the former case, and in line with what is suggested in the clean code book, it is arguably best to tell the object to do what is necessary rather than to inspect (get) the data it holds and do it externally in some other object/method. But even in the latter case, I think it is important to pay great attention to hiding the internal representation from external access and to only allow very focussed access to the data itself. It could be and has been argued that restricting access to the stored data via getter methods is tedious (see e.g. the discussion in getters/setters/fuxors) and that allowing public access to members is all right, but looking at the issue from a data abstraction point of view it simply boils down to the question of whether you want or need data abstraction or not.
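
To make the "tell, don't ask" idea concrete in Lisp terms, here is a minimal CLOS sketch; the account example is made up for illustration and taken neither from Cook's paper nor from the clean code book:

;; Hypothetical example (not from the book or the paper):
(defclass account ()
  ((balance :initarg :balance :initform 0)))

;; "Tell" style: the account is told what to do and keeps its
;; representation to itself -- no BALANCE reader that callers could use
;; to inspect the slot and re-implement the rule outside the class.
(defgeneric withdraw (account amount))

(defmethod withdraw ((acc account) amount)
  (with-slots (balance) acc
    (when (< balance amount)
      (error "Insufficient funds"))
    (decf balance amount)))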

Second, I've recently seen these two postings on the merits of the Zope Component Architecture: "The emperor's new clothes" and the reply "The success of the ZCA". Malthe asks why one should use the ZCA to override the use of a particular implementation with another instead of using some kind of reloading (or rather, he says that the latter is the preferable approach). Relating this to Cook's article, Malthe could be paraphrased roughly as: we have ADTs all over the place and we should allow only one implementation per ADT (this is what the type system would guarantee in other systems). If you want another implementation (of some interface, as Cook shows for his objects), you should reload the object definition with the one you want. The use of the ZCA, however, is directly related to the very idea of object oriented programming in the way Cook defines it: you only have interfaces that are the relevant defining characteristics of objects (values), and hence the ZCA is the way to deal with multiple implementations in Zope (or Python). For me, all I can say is that I'm happy that the ZCA, and hence the ability to easily intermingle multiple implementations, is there (then again, as someone who reads computer science theory articles, I'm arguably not the angry web designer type for whose benefit Malthe is arguing).

There is another, more puzzling aspect of the article to me. After some consideration, I have to conclude that of all the OO languages I happen to know, it's really only Java that seems to be object oriented in Cook's view of the world. This is because in Java, you can define a method to return objects satisfying an interface. In dynamically typed languages like Python, Ruby, or CLOS, you could try to get away with duck typing, but it's arguably only Python which tries to take it to heart (for instance, in CLOS most values you're going to deal with are non-CLOS values, and you even have an ETYPECASE statement, which is a switch statement on type distinctions). Funnily enough, Cook finishes his Smalltalk analysis with the statement that "one conclusion you could draw from this analysis is that the untyped lambda calculus was the first object-oriented language". But besides the question of how "more OO" one language is than another, there is also the point that in order to program in a truly object-oriented way, you shouldn't (and in Cook's world really can't) rely on type checks, because the whole point of using objects as data abstraction is to rely on behaviour.

Posted by Holger Schauer

Jul 24

What do version control and testing have to do with each other? Well, first of all, both are common virtues in the clean code community. What you'll find is that both virtues are important on their own ground: version control provides a safety guard in that you can roll back to prior versions if you accidentally introduce problems in your code. Testing (automated unit tests) provides a safety guard, too, because you can do regression testing when you work with your code. These are both fine goals, but seemingly they have little to do with each other.

But in reality they do. For the sake of argument, let's take a step back and assume that you have to work in an environment of several developers where neither of these things exists. What will you likely see? What we all have seen several years ago: commented-out code blocks, redundant and often misleading or outdated comments, timestamps with comments cluttered all over the code. And frightened developers who feared each minor change because of the myriad of subtle side effects it might have, let alone major changes to core components. It's an environment in which refactorings as well as extensions are very hard and expensive, which results in overworked developers and frustrated managers.

So, what happens when you introduce only one of those virtues? Say, we introduce version control. Now every change gets documented, except that documenting every change requires, from the developer's point of view, documentation at the wrong point. They can't see the documented changes and the reason for these changes in the source, they see it only in the version control system — and that only if they add a change message with every change at all. Much more likely is that you will see commit messages such as "." or "bug fix", and the same old mess of timestamps, outdated comments and commented-out code as before. Why is that? Because your developers are now not as frightened as they used to be (they can now rely on the version control system to fall back to older versions), but they still have the same need to understand and document the code. And the commit log is both "too far away" from the code and out of its purpose for this task: the commit log shouldn't document what the code is supposed to do, only when something was implemented to behave in a particular way.

This is where a development (unit) test suite comes into the picture: you document every required behaviour in tests. With every change to the code, you also update the test. As a developer, you can now look into your test suite to see what the code is supposed to do. Now developers will likely become much more confident with their changes, because they can run the tests and see what happens (hopefully next to immediately) without requiring time- and resource-consuming manual tests.

But what about documenting the changes to the code? Well, you should simply document any changes in the commit message of your version control system, because it’s now no longer necessary to keep the entire version history in mind to understand what the current code state is supposed to do. You have the tests that tell you what the code should do. The commit log now only serves the purpose of documenting what has changed over time and is no longer required to understand what the code should do. So you don’t have to keep the clutter in your code, resulting in much cleaner source code files.

Summary: Taken together, version control and testing add up to more than the sum of their individual values.

Posted by Holger Schauer

Feb 6

I've become quite addicted to writing tests during my development tasks. I had wanted to dig into test-driven development for quite some time, but it was the seamless integration of Test::Unit, Ruby's unit testing module, in Eclipse that got me going initially. I then did some unit testing with Common Lisp packages and am currently heavily using pyunit and python doctests (mostly in the context of zope testing). Writing tests has become second nature in my development work: it gives you that warm fuzzy feeling of having a little safety net while modifying code.

However, there are times when terminology comes along and gives you a headache. A piece of terminology I've learned about during the last year is the difference between unit tests, integration tests and functional tests (for an overview see wikipedia on software testing). But as you can see, for instance, in this article on integration tests in Rails, it's not always easy to agree on what means what — Jamis and/or the Rails community seem to have the integration/functional distinction entirely backwards from what, for instance, the Zope community (on testing) thinks.

Now, one might argue that terminology doesn't matter much given that you write tests at all, but it's not so easy. For instance, if your "unit test" of a given class requires another class, is that still unit testing or is it integration testing? Does it even make sense to talk about unit-testing a class? A class on its own isn't that interesting after all; it's its integration and interoperation with collaborators where the semantics of a class and its methods become interesting. Hence, shouldn't you rather test a specific behaviour, which probably involves those other classes? And what now, if your code only makes sense when run on top of a specific framework (Zope, Rails, you name it)? Michael Feathers argues convincingly in his set of unit testing rules that any such tests are probably something else.

Ultimately these questions directly pertain to two aspects: code granularity and code dependencies — and remember, test code is code after all. These are directly related, of course: if your code is very fine-grained, it's much more likely that it will also be much more entangled (although the dependency might be abstracted with the help of interfaces or some such, you still have the dependency as such). And as a consequence, your test code will have to mimic these dependencies. On the contrary, if your code blocks are more coarse-grained (i.e. cover a greater aspect of functionality), you might have fewer (inter-)dependencies, but you won't be able to test functionality on a more fine-grained level. As Martin Fowler's excellent article "Mocks aren't stubs" discusses in detail, one way to loosen these connections between code and tests is to use mock objects or stubs. Fowler's article also made clear to me that I've used the term "mock object" wrongly in my post on mock objects in Common Lisp: dynamically injecting an object/function/method (as a replacement for a collaborator required by the "code under test") that returns an expected value means using a stub, not a mock — another sign of not clearly enough defined terminology (btw, the terminology Fowler is using is that of G. Meszaros' xUnit patterns book).

It’s worth keeping these things apart because of their different impact on test behaviour: mocks will force you to think about behaviour whereas stubs focus on ‘results’ of code calls (or object state if you think in terms of objects being substituted). As a result, when you change the behaviour of the code under test (say you’re changing code paths in order to optimize code blocks) this might (mocks) or might not (stubs) result in changes to the test code.
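
A minimal Common Lisp sketch of that difference, with all names made up for illustration: a stub only hands back a canned result, while a mock additionally records the interaction so the test can verify it afterwards.

;; Stub: the test only cares about the value handed back to the code under test.
(defun stub-retrieve-data (id)
  (declare (ignore id))
  '(:id 0 :value "canned"))

;; Mock: calls are recorded so the test can verify the expected interaction
;; (e.g. "retrieve-data was called exactly once with id 0").
(defvar *recorded-calls* '())

(defun mock-retrieve-data (id)
  (push (list 'retrieve-data id) *recorded-calls*)
  '(:id 0 :value "canned"))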

It's also worth thinking about mocks and stubs because they shed new light on the question of test granularity: when you're substituting real objects either way, you're on your way to much more fine-grained tests, which implies that you loosen the dependencies of your tests: you can now modify the code of your collaborator class without the test for your code under test breaking. Which brings us back full circle to the distinction between unit tests and integration tests: you now might have perfect unit tests, but you're forced to additionally test the integration of all the bits and pieces. Otherwise you might have all unit tests succeed but your integrated code still fail. Given this relationship, it seems immediately clear that 100% test coverage might not be the most important issue with unit tests: you might have 100% unit test success, but 100% integration failure at the same time — if you don't do continuous integration and integration tests, of course. Now what's interesting is that it might be possible to check test coverage on code paths, but it might not be easy to check integration coverage. I would be interested to learn about tools detailing such information.

Recently I had another aha moment with regard to testing terminology: Kevlin Henney's presentation at this year's German conference on object-oriented programming, OOP 2009, on "know your units: TDD, DDT, POUTing and GUTs". TDD is test-driven development, of course. The other ones might not be so obvious: "GUTs" are just good unit tests and "POUT" is "plain old unit testing". I saw myself doing TDD, but come to think of it, I'm mostly applying a combination of TDD, POUT (after-the-fact testing) and DDT: defect-driven testing. I find the introduction of a term for testing after the code has been written interesting because it provides a way to talk about how to introduce testing in the first place. Especially defect-driven testing, the idea of writing a test to pinpoint and overcome an erroneous code path, might be a very powerful way to introduce the habit of regularly writing (some) tests for an existing large code base. That way you avoid the pitfall of never being able to test "all this lot of code because there is never the time for it", and you might also motivate people to try writing tests before code. And on this level, it might at first not be that relevant to make the distinction between integration and unit tests too sharp: start out with whatever is useful.

Posted by Holger Schauer

Dec 10

I've become a python programmer, too, lately, due to a job change. Python is a fine language so far, although to me it's mostly just like Ruby, though with even less functional flavour. However, just as with Ruby, I'm really missing slime, the superior lisp interaction mode for Emacs, when hacking python code. I could now start to write down a list of things I'm missing (which I had intended to do); however, Andy Wingo spares me the hassle, as he has just written an excellent article on slime from a python programmer's view.

However, I would like to elaborate a little on the main difference for me: the client/server socket approach of slime. Let me briefly recapitulate what this implies: slime consists of two parts, a client written in Emacs lisp and a server written in Common Lisp (AFAIK there is at least also an implementation for clojure, maybe also one for some scheme implementation). In order to use slime in its full glory, it's hence required that you have a common lisp process running which in turn runs the slime server part. If you now fire up slime, you'll get an interaction buffer over which you can access the REPL of the lisp process, which in python would be the interpreter prompt. You can then interact with the lisp process, evaluating pieces of code from your lisp source code buffer directly in the connected lisp process. What is incredibly useful for me is that you can not only start a new lisp process but also connect to an already running lisp process, given that it has the slime server started (this is obviously mainly useful if the lisp implementation you use has multi-threading capabilities). I use it to connect to a running web server application, which I can then inspect, debug and modify. Modification includes redefinition of functions, macros and classes, which of course is also a particular highlight of Common Lisp. I would like to quote a comment the reddit user "fionbio" made on the linked article: "In fact, Python language wasn't designed with lisp-style interactive development in mind. In CL, you can redefine (nearly) anything you want in a running program, and it just does the right thing. In Python, there are some problems, e.g. instances aren't updated if you modify their class. Lisp programmers often, though not always, refer to various things (functions, classes, macros, etc.) using symbols, while Python programs usually operate with direct references, so when you update parts of your program you have much higher chances that there will be a lot of references to obsolete stuff around."
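
To make the "connect to a running process" part concrete: the only thing the running lisp needs is a swank server. A minimal sketch, assuming swank is loadable via ASDF in that image (4005 is merely the conventional port):

;; In the running lisp image (e.g. the web server process):
(asdf:oos 'asdf:load-op :swank)
(swank:create-server :port 4005 :dont-close t)

;; From Emacs: M-x slime-connect RET <host> RET 4005 RET
;; (for a remote machine, typically through an ssh tunnel).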

To complement Bill Clementson's excellent article series on slime a little, I'm going to describe how I'm using/configuring python-mode to make it match my expectations a little more closely. Essentially I would like to access my python process just as I would with slime/Common Lisp, but that's not possible. The setting, btw., is nearly the same: I need to work on a web server app (written in Zope) which may not even run on the same machine I'm developing on. Let's first cover the simple stuff: to get a reasonable command interface to the python interpreter, I require the ipython emacs library. If the python interpreter runs locally, I also use py-complete, so that I can complete my code at least a little. Unfortunately, this breaks when the python interpreter doesn't run locally, because py-complete needs to set up some things in the running python process, which it does by writing to a local temp file and feeding it to the python process. Unfortunately, the code in py-complete lacks customizability, i.e., you can't specify where that temp file should be located — I should be able to come up with a small patch in the near future, which I will add below. Finally, I also require doctest-mode as support for writing doctests, but that's not really relevant here.

Now, on to the more involved stuff: I introduce some new variables and a new function py-set-remote-python-environment, which uses those variables to do a remote call (via ssh) to python. This at least allows me to do things like setting py-remote-python-command to "/home/schauer/zope/foo-project/bin/instance" and py-remote-python-command-args to "debug", so that I can access a remote debug shell of my current zope product. That alone will only allow me to fire up and access the remote python, so I could develop the code locally and have it executed remotely. More typical, though, is that you also want to keep the code on the remote machine: for this I use tramp, a package for remotely accessing files/directories from within emacs. In combination, this allows me to edit and execute the code on the remote machine. It is still nowhere near what is possible with slime, but at least it allows me to pursue my habit of incremental and interactive development from within my usual emacs installation (i.e., it doesn't require me to deal with any Emacs-related hassle on the remote machine).


;;; python-stuff.el --- python specific configuration

(when (locate-library "ipython")
  (require 'ipython))

(when (locate-library "doctest-mode")
  (require 'doctest-mode))

(defvar py-remote-connect-command "ssh"
  "*Command for connecting to a remote python, typically \"ssh\".")
(defvar py-remote-connection-args '("user@remotemachine")
  "*List of strings of connection options.")
(defvar py-remote-python-command "python"
  "*Command to execute for python.")
(defvar py-remote-python-command-args '("-i")
  "*List of strings arguments to be passed to `py-remote-python-command`.")
(defvar py-remote-python-used nil
  "Remember if remote python is used.")

(defun py-set-remote-python-environment ()
  (interactive)
  (let ((command-args (append py-remote-connection-args 
                  (list py-remote-python-command)
                  py-remote-python-command-args)))
    (setq py-python-command py-remote-connect-command)
    (setq py-python-command-args command-args))
  (setq py-remote-python-used t))

(when (locate-library "py-complete")
  (autoload 'py-complete-init "py-complete")
  (defun my-py-complete-init ()
    "Init py-complete only if we're not using remote python."
    (if (not py-remote-python-used)
        (py-complete-init)))
  (add-hook 'python-mode-hook 'my-py-complete-init))

Posted by Holger Schauer

Nov 11

One of the bigger practical problems with unit testing is isolating the test coverage. Say, you want to test a piece of code from the middle (business) layer. Let’s assume further the piece of code under consideration makes some calls to lower level code to retrieve some data. The problem of test coverage isolation is now that if you “simply” call your function, you are implicitly also testing the lower level code, which you shouldn’t: if that lower level code gets modified in an incorrect way, you would suddenly see your middle level code fail although there was no change made to it. Let’s explore ways to avoid the problems in Common Lisp.

There is a very good reason why you would also want to have such test dependencies to ensure your middle level code still works if the lower level code is extended or modified. But that is no longer unit testing: you are then doing so-called integration tests which are related, but still different beasts.

Now, I was facing exactly that typical dreaded situation: I extended an application right above the database access layer, which had not seen many tests yet. And of course, I didn't want to go the long way (which I will eventually have to go anyway) and set up a test database with test data, write setup and tear-down code for the db, etc. The typical suggestion (for the xUnit crowd) is to use mock objects, which finally brings us on topic. I was wondering if there are any frameworks for testing with mock objects in Lisp, but a quick search didn't turn up any results (please correct me if I've missed something). After giving the issue a little thought, it seemed quite clear why there aren't any: probably because it's easy enough to use home-grown solutions such as mine. I'll use xlunit as the test framework, but that's not relevant. Let's look at some sample code we'll want to test:

(defun compare-data (data &key connection)
  (declare (ignorable connection))
  (let ((dbdata (retrieve-data (id data))))
    (when (equal (some-value data)
                 (some-db-specific-value dbdata))
      t)))

The issue is with retrieve-data, which is our interface to the lower level database access. And note that we'll use some special functions on the results, too, even if they may just be accessors.
Let's assume the following test code:

(use-package :xlunit)

(defclass comp-data-tc (test-case)
  ((testdata :accessor testdata :initform (make-test-data))))

(def-test-method comp-data-test ((tc comp-data-tc))
  (let ((result (compare-data (testdata tc))))
    (assert-equal result t)))

Now the trouble is: given the code as it is now, the only way to make the test succeed is to make sure that make-test-data returns an object whose values match values in the database you're going to use when compare-data gets called. You're ultimately tying your test code (especially the result of make-test-data) to a particular state of a particular database, which is clearly unfortunate. To overcome that problem, we'll use mock objects and mock functions: let's define a mock-data class and a mock-retrieve-data function that stands in for the real database access.

(defclass mock-data ()
  ((id :accessor id :initarg :id :initform 0)
   (val :accessor some-db-specific-value :initarg :value :initform "foo-0")))

(defun mock-retrieve-data (testcase)
  (format t "Establish mock for retrieve-data")
  (lambda (id)
    (format t "mock retrieve-data id: ~A~%" id)
    (find-if #'(lambda (elem)
                 (when (equal (id elem) id)
                   elem))
             (testdbdata testcase))))

Why mock-retrieve-data returns a closure will become clear in a second, after we've answered the question of how this differently named class and function can be of any help. The answer lies in CL's facility to assign different values (or rather, definitions) to variables (or rather, to the function slots of symbols). What we'll do is simply assign the function we've just created as the definition to use when retrieve-data is called. This happens in the setup code of the test case:

(defclass comp-data-tc (test-case)
  ((testdata :accessor testdata :initform (make-test-data))
   (testdbdata :accessor testdbdata :initform nil)
   (func :accessor old-retrieve-func :initform nil)))

(defmethod set-up ((tc comp-data-tc))
  ;; set up some test data
  (dotimes (number 9)
    (setf (testdbdata tc)
          (append (list (make-instance 'mock-data
                                       :id number
                                       :value (format nil "value-~D" number)))
                  (testdbdata tc))))
  ;; establish our mock function
  (when (fboundp 'retrieve-data)
    (setf (old-retrieve-func tc) (fdefinition 'retrieve-data)))
  (setf (fdefinition 'retrieve-data) (mock-retrieve-data tc)))

(defmethod tear-down ((tc comp-data-tc))
  ;; after the test has run, re-establish the old definition
  (when (old-retrieve-func tc)
    (setf (fdefinition 'retrieve-data) (old-retrieve-func tc))))

You can now see why mock-retrieve-data returns a closure: this way, we can hand the data we set up for the test case down to the mock function without resorting to global variables.

Now, the accessor fdefinition comes in extremely handy here: we use it to assign a different function definition to the symbol retrieve-data which will then be called during the unit-test of compare-data.

..Establish mock for retrieve-data
mock retrieve-data id: 0
F
Time: 0.013

There was 1 failure: ...

There is also symbol-function which could be applied similarly and which might be used to tackle macros and special operators. However, the nice picture isn’t as complete as one would like it: methods aren’t covered, for instance. And it probably also won’t work if the function to mock is used inside a macro. There are probably many more edge cases not covered by the simple approach outlined above. Perhaps lispers smarter than me have found easy solutions for these, too, in which case I would like to learn more about them.
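
For what it's worth, the fdefinition swap from the set-up/tear-down code above can be packaged into a small macro, so that the old definition is restored even if the body exits non-locally. This is only a sketch of the same idiom and doesn't address the method and macro cases just mentioned:

(defmacro with-redefined-function ((name replacement) &body body)
  "Run BODY with REPLACEMENT installed as the definition of NAME,
restoring the previous definition (if any) afterwards."
  (let ((old (gensym "OLD-DEF")))
    `(let ((,old (when (fboundp ',name) (fdefinition ',name))))
       (setf (fdefinition ',name) ,replacement)
       (unwind-protect
            (progn ,@body)
         (if ,old
             (setf (fdefinition ',name) ,old)
             (fmakunbound ',name))))))

;; e.g. (with-redefined-function (retrieve-data (mock-retrieve-data tc))
;;        (compare-data (testdata tc)))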

Posted by Holger Schauer

Feb 22

This post is mainly a reference post about a particular topic whose solution wasn't immediately obvious to me from the CL-SQL docs. Using CL-SQL with (enable-sql-reader-syntax), I had written a routine that looks basically like this:

(defun data-by-some-criteria (criteria &key (dbspec +db-spec+) (dbtype +db-type+))
  (with-database (db dbspec :database-type dbtype :if-exists :old)
    (let (dbresult)
      (if criteria
          (setq dbresult (select 'some-model 'other-model
                                 :where [and [= [some.criteria] criteria]
                                             [= [some.foreignid] [other.id]]]
                                 :order-by '([other.name] [some.foreignid] [year] [some.name])
                                 :database db))
          (setq dbresult (select 'some-model 'other-model
                                 :where [and [null [some.criteria]]
                                             [= [some.foreignid] [other.id]]]
                                 :order-by '([other.name] [some.foreignid] [year] [some.name])
                                 :database db)))
      (when dbresult
        (loop for (some other) in dbresult
              collect some)))))

This is ugly because the only difference between those two select statements is the check for the criteria, but I had no idea how to combine the two select statements into one, because it’s not possible to embed lisp code (apart from symbols) into an sql-expression (i.e. the type of arguments for :where or :order etc.). With the next requirement things would become far worse: The order-by statement needs to get more flexible so that it is possible to sort results by year first. Given the approach shown above this would result in at least four select statements, which is horrible. So, naturally I wanted a single select statement with programmatically obtained :where and :order-by sql expressions.

Step 1: It occurred to me that it should be possible to have the arguments in a variable and simply refer to the variable. E.g., using a simpler example:

(let (where-arg)
  (if (exact-comp-needed)
      (setq where-arg '[= [column] someval])
      (setq where-arg '[like [column] someval]))
  (select 'model :where where-arg))

So I could now have my two different where-args and two different order-args and use a single select statement. Main problem solved.

Step 2: But for the :where arg in my original problem, only a small fraction of the sql-expression differs. So how do I avoid hard-coding the entire value of where-arg? How can I combine some variable part of an sql-expression with some fixed parts? I.e., ultimately I want something like:

(let (comp-op where-arg)
  (if (exact-comp-needed)
      (setq comp-op '=)
      (setq comp-op 'like))
  (setq where-arg '[ <put comp-op here> [column1] someval])
  (select 'model :where where-arg))

But with CL-SQL modifying the reader, there seems to be no way to make <put comp-op here> work. I didn't know how to get the usual variable evaluation into the sql-expression, or how to escape from CL-SQL's sql-reader-syntax back to normal lisp evaluation.

Somewhere in the back of my head there was that itch that CL-SQL might offer some low-level access to sql expressions. And indeed it does. There are two useful functions, sql-expression and sql-operation. sql-operation "returns an SQL expression constructed from the supplied SQL operator or function operator and its arguments args" (from the cl-sql docs), and we can supply the operator and its arguments from lisp — which is exactly what I want.

Now, the nice thing is that it's easy to mix partly handcrafted sql expressions with CL-SQL's special sql syntax constructs that are automatically handled by the reader (provided you have enabled it via enable-sql-reader-syntax, of course). I.e., for <put comp-op here> we can use sql-operation, but the rest stays essentially the same:

(let (where-arg)
  (if (exact-comp-needed)
      (setq where-arg (sql-operation '= [column1] someval))
      (setq where-arg (sql-operation 'like [column1] someval)))
  (select 'model 'other-model :where where-arg))

Now, coming back to my original problem, based on this approach I can split out the common part of the :where and :order arguments and combine those with the varying parts as needed and hand them down to a single select statement. Problem solved.
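
For reference, a sketch of what that combined function might look like, reusing the hypothetical model and column names from above; sort-by-year-p is a made-up stand-in for whatever decides the ordering requirement, and the exact set of operator names accepted by sql-operation is worth double-checking against your CL-SQL version:

(defun data-by-some-criteria (criteria &key (dbspec +db-spec+) (dbtype +db-type+))
  (with-database (db dbspec :database-type dbtype :if-exists :old)
    (let ((where-arg (sql-operation 'and
                                    (if criteria
                                        (sql-operation '= [some.criteria] criteria)
                                        [null [some.criteria]])
                                    [= [some.foreignid] [other.id]]))
          ;; sort-by-year-p is hypothetical, standing in for the new requirement
          (order-arg (if (sort-by-year-p)
                         '([year] [other.name] [some.foreignid] [some.name])
                         '([other.name] [some.foreignid] [year] [some.name]))))
      (let ((dbresult (select 'some-model 'other-model
                              :where where-arg
                              :order-by order-arg
                              :database db)))
        (when dbresult
          (loop for (some other) in dbresult
                collect some))))))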

Posted by Holger Schauer

Nov 9

Some time ago, I was looking at splitting text with Elisp, Perl, Ruby and Common Lisp. Yesterday, when I again had to do much the same thing, it occurred to me that the Common Lisp solution was unnecessarily complex and long. I'm not a Perl guru, but I believe the following is probably hard to beat even with Perl:


CL-USER> (format t "~{<li>~A</li>~%~}" (cl-ppcre:split "\\|" "Kim Wilde|Transvision Vamp|Ideal|Siouxsie and the Banshees|Nena|Iggy Pop"))
<li>Kim Wilde</li>
<li>Transvision Vamp</li>
<li>Ideal</li>
<li>Siouxsie and the Banshees</li>
<li>Nena</li>
<li>Iggy Pop</li>
NIL

For the uninitiated, it's not the cl-ppcre library which is interesting here, but the built-in iteration facilities of format. See the Hyperspec on the control-flow features of format for details. Now, I usually tend to avoid the mini-languages that come with Common Lisp, like those of format or loop, when writing real programs, but when using Lisp as a glorified shell they come in very handy.
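
Just for comparison (a sketch of the obvious alternative, nothing more), the same output without format's iteration directive needs an explicit loop:

;; dolist instead of format's ~{...~} iteration directive
(dolist (artist (cl-ppcre:split "\\|" "Kim Wilde|Transvision Vamp|Ideal"))
  (format t "<li>~A</li>~%" artist))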

Posted by Holger Schauer

Oct 17

For a long time I hadn't looked closely at modern distributed revision control systems like Git, Darcs or Mercurial. This was mainly due to two facts: I'm currently neither involved in any major open source project which uses these systems nor in a project at work which requires the facilities they offer, and there was no easy access to them in XEmacs, so the more traditional systems like Subversion, CVS and RCS have been fine for me. However, there was this nagging feeling that I might be missing something, and as revision systems have always been somewhat of a pet peeve of mine, I eventually spent some time reading up on them. I've read quite a lot of discussions on the web and gathered that mercurial might be worth a closer look, as it claims to be quite easy to handle, comparably well documented and quite fast. And then, finally, I read on xemacs-beta that the new vc package (in pre-release) would support mercurial as well.

Well, that's where I am now: I have several pieces of code lying around which I sometimes develop on my main machine and sometimes on my laptop when moving around. This is the scenario where a server-based approach to revision control is not what you want: you won't be able to access your server while you're on the road and hence you can't commit. Now, with RCS that's not a problem, as there is no server involved. But of course, since RCS is a file-system-local revision system, syncing is a major problem and you have to take great pains to ensure you don't overwrite changes you made locally in between syncs. I hope that a distributed version control system like mercurial will solve the problem, as I no longer have to decide which version is the current head version and can instead cherry-pick change sets at will.

But of course, for this to happen, I have to convert my RCS repositories to Mercurial. This doesn't seem to be a common problem: there are a lot of tools for conversion from CVS or Subversion (see the Mercurial wiki, e.g. Tailor), but not from RCS. I ended up following the instructions given on the TWiki Mercurial Contribution page. I have some minor corrections, though, so here we go:

-1. (Step 6 in the TWiki docs) Ensure all your files are checked into RCS. I won't copy the advice from the TWiki page here, because I believe in meaningful commit messages and would urge you to do a manual check.

0. You'll need cvs20hg and rcsparse, which you will find here. You'll need to have the Python development libraries installed, i.e. Python.h; on Debian systems, this is in the package python-dev. Installation is as simple as two "./setup.py install" runs as root, which will install the relevant libraries and Python scripts.

1. Create a new directory for your new mercurial repository (named REPO-HG below; replace that name):

    mkdir REPO-HG
2. Initialize the repository:
   hg init REPO-HG
3. (Step 4 in the TWiki document) Create a new copy of your old RCS repository (named REPO here; replace that with the name of the directory containing your old RCS files), and add a CVSROOT directory with a config file (mistake one in the TWiki docs: as with all CVS data, the "config" file needs to go into CVSROOT, not into CVSROOT/..). Of course, if you're no longer interested in your old data, you may omit the initial copy.
    mkdir tmp
    cp -ar REPO tmp/REPO-old
    mkdir tmp/CVSROOT
    touch tmp/CVSROOT/config
4. Inside your directory with the old RCS data, move everything out of the RCS subdirectories (mistake two in the TWiki docs: the double quotes need to go before the asterisk):
   find tmp/REPO-old -type d -name RCS -prune | while read r; do mv -i "$r"/* "$r/.."; rmdir "$r"; done
5. Run cvs20hg to copy your old repository to mercurial. If you don’t follow the directory scheme shown below, you’ll end up with your new mercurial repository missing the initial letter of the name of all top-level files and directories.
   cvs20hg tmp/REPO-old `basename tmp/REPO-old` REPO-HG
6. Check that everything looks like you would expect:
   cd REPO-HG
   hg log
7. If you had files in your old directory not under version control that you'd like to keep, copy them over. This might be a good time to think about whether they are worth having under revision control. Afterwards, throw away any old directories you no longer need (i.e., your original REPO, tmp/*).

Posted by Holger Schauer

Aug 8

For a review, I needed to get the track list of a given CD. As the track list wasn't available via CDDB, I went to some large online store and found it there. I needed to convert it to XML, though. The original data I fetched looked like this:

1. Fox In A Box
2. Loaded Heart
3. All Grown Up
4. Pleasure Unit
...

whereas I need:

<li id="1">Fox In A Box</li>
<li id="2">Loaded Heart</li>
<li id="3">All Grown Up</li>
...

Having pasted the original data into my Emacs, writing it out to a file and using Perl for that simple transformation seemed just gross. In the past, I used to be an Emacs hacker. But no more, or so it seems, since it took me nearly half an hour just to come up with this simple function:

(defun tracklist-to-li (point mark)
  "Generate a string with <li>-elements containing tracks.
Assumes that on every line of the region, a track position
and the track name are given."
  (interactive "r")
  (save-excursion
    (goto-char point)
    (let ((current-pos (point))
          (result ""))
      (while (re-search-forward "^\\([0123456789]+?\\).[ \t]+\\(.*\\)$"
                                mark t)
        (setq result
              (concat result "<li id=\""
                      (match-string 1) "\">"
                      (match-string 2) "</li>\n"))
        (setq current-pos (point)))
      (message result))))

What took the most time was that I had forgotten to escape the grouping parentheses in the regular expression, and that it took me a little while to accept that there is really no \d or equivalent character class in Emacs regexps. Which probably means that I've been doing too much in Perl, sed and the like. OTOH, it may just hint at the horror of regular expression handling in Emacs. What I also dislike is that whenever you want some result in Emacs and want to see it, too, you have to invoke an interactive operation like message. Of course, there is IELM, but that doesn't really help you with interactive functions operating on regions.

And five minutes later, I realized I needed to convert some string like "The (International) Noise Conspiracy|The Hi-Fives|Elastica" into a similar list structure. With a simple cut & paste, roughly 30 seconds later I have

[bauhaus->~]perl -e '$a="The (International) Noise Conspiracy|The Hi-Fives|Elastica"; @a=split("\|",$a); foreach $b  (sort @a) { print "<li>$b</li>\n"; }'
<li>"The (International) Noise Conspiracy"</li><li>"The Hi-Fives"</li><li>"Elastica"</li>

Hmm. Perhaps I’ve come quite a long way on the dark side already … On the other hand, in Ruby, this is just as simple (I’m using irb, the interactive ruby shell here):

irb(main):008:0> a="The (International) Noise Conspiracy|The Hi-Fives|Elastica"
=>"The (International) Noise Conspiracy|The Hi-Fives|Elastica"
irb(main):009:0> a.split("|").each {|string|
irb(main):010:1* print "<li>"
irb(main):011:1> print string
irb(main):012:1> print "</li>\n"
irb(main):013:1> }
<li>The (International) Noise Conspiracy</li>
<li>The Hi-Fives</li>
<li>Elastica</li>
=> ["The (International) Noise Conspiracy", "The Hi-Fives", "Elastica"]

The difference here is the implicit array Ruby generates, which of course in Perl you could hide in the array position of the foreach loop. Note the annoying misfeature of irb to always show the prompt even when you're still continuing your current input line.

In Common Lisp we can do it just as short:

CL-USER> (let* ((a "The (International) Noise Conspiracy|The Hi-Fives|Elastica")
                  (splits (ppcre:split "\\|" a)))
               (loop for string in splits
                  do 
                      (format t "<li>~S</li>~%" string)))
<li>"The (International) Noise Conspiracy"</li><li>"The Hi-Fives"</li><li>"Elastica"</li>
NIL

The same thing here: The result of the split could have been easily embedded in the loop.

The lesson, of course, is that in the end this example only serves to show that things that are easy to achieve in a high-level language are indeed easy to achieve. Or, to put it differently, the use of regular expressions is no longer a discriminating feature between programming languages.

Posted by Holger Schauer

Jul 4
Recently, there was a discussion about "the rise of functional languages" over on Linux Weekly News (http://lwn.net/), in which one of the participants claimed that one of the major reasons why nobody uses functional languages in industrial settings is the lack of explicit resource handling (where a resource is some supposedly "alien" object in the system, say a database handle or something like that). What he was referring to was the inability to run code on allocating/deallocating a piece of resource. Of course, some people pointed him to various solutions; in particular, I referred to the usual WITH-* style macros, in which one nests the access to the resource while hiding whatever has to be done on allocation/deallocation. His reply went something along the lines that such objects may need to be long-lived (so a WITH-macro is inappropriate), that the only resort would then be the garbage collector, and that there simply is no way of running code at a guaranteed (deallocation) time. I have to admit that I have no idea how I could code around that problem in Common Lisp (garbage collection isn't even a defined term in the ANSI specification of CL, and I'm very sure I haven't seen any mention of allocation/deallocation in it).
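
For reference, the WITH-* pattern mentioned above is essentially unwind-protect wrapped in a macro; a minimal sketch, where open-db-handle, close-db-handle and run-query are made-up stand-ins for whatever actually manages the resource:

(defmacro with-db-handle ((var spec) &body body)
  "Bind VAR to a freshly opened handle for SPEC while BODY runs;
the handle is closed even on a non-local exit, but only lives for
the dynamic extent of BODY."
  `(let ((,var (open-db-handle ,spec)))    ; hypothetical constructor
     (unwind-protect
          (progn ,@body)
       (close-db-handle ,var))))           ; hypothetical cleanup

;; usage:
;; (with-db-handle (db "some-connection-spec")
;;   (run-query db "select 1"))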

Now, some months later, there is a discussion in comp.lang.lisp on the topic of "portable finalizers", and Rainer Joswig pointed to the chapter in the Lisp machine manual which talks about explicit resource handling on the Lisp machine. From the excerpt, I can't judge whether resources are first-class CLOS objects and hence the functions to handle them are generic functions, but if so, that would actually allow running code on deallocating a resource, of course at the price of having to handle allocation/deallocation manually. I really wonder whether any of today's CL implementations offer the same or at least similar functionality.

Posted by Holger Schauer

