Coding katas Clojure -- Anagrams

Kata 6 is concerned with anagrams. An anagram is a word formed by rearranging the characters of another word. Now, when I started out with this kata, I was sitting on a train without an internet connection, so I just went ahead with what I remembered from a quick glance over the kata description I had done the week before. I thought that all that needed solving was to determine whether two words were anagrams of each other (the complete code, by the way, can be found in my github repository for the Clojure codekatas). My initial idea for solving this was to generate the sets of the characters of both words and compare those:

;; assumes (:require [clojure.string :as str]) in the ns declaration
(defn remove-blanks [word]
  (str/replace word " " ""))

(defn anagram-set? [word1 word2]
  (let [w1 (remove-blanks word1)
        w2 (remove-blanks word2)]
    (= (set w1) (set w2))))

This time, I opted for using midje for running the tests, in particular because of the possibility to run tests continuously via lein-midje. Midje takes a slightly different approach / syntax to writing tests, adding the notion of facts that are then verified. I.e., tests with midje look like this:

(facts "Testing the set implementation for checking anagrams"
      (fact "Set anagram can find anagrams"
         (anagram-set? "the law" "wealth") => true)
     (fact "Set anagram is too simplistic"
         (anagram-set? "the lalalaw" "wealth") => true))

You can already see from the latter fact what is wrong with the initial solution: it’s too simplistic with regard to handling the number of occurrences of each character. (Some might say the introduction of remove-blanks is also too complicated, but I wanted to handle Anne Clark’s “The law is an anagram of wealth”.)
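A straightforward way to fix this would be to compare character counts instead of character sets. A minimal sketch using clojure.core/frequencies (anagram-frequencies? is just an illustrative name, not part of the kata code):

;; count-aware check: equal multisets of characters, not just equal sets
(defn anagram-frequencies? [word1 word2]
  (= (frequencies (remove-blanks word1))
     (frequencies (remove-blanks word2))))

(anagram-frequencies? "the law" "wealth")     ;=> true
(anagram-frequencies? "the lalalaw" "wealth") ;=> false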

Traditional solution

When I finally had some more time to read the kata description more carefully, I recognized that the task actually is to find all anagrams of a given word, checking back against a given wordlist. So the kata consists of two tasks: generate all possible combinations for a given character sequence and check in this wordlist whether some candidate character sequence amounts to a known word. Now, if you take a step back, it’s easy to see that anagrams are nothing other than permutations of the elements of a given (character) sequence, with the additional restriction that all such permutations must be (known) words again. So, we end up with a skeleton which looks like this:

(defn generate-anagrams
  "Generate all anagrams of word"
  [word]
  (generate-permutations word))

(defn find-anagrams
  "Find all anagrams of word in (the sequence of) words"
  [word words]
  (let [anagrams (generate-anagrams word)
        wordset (set words)]
    (loop [candidates anagrams
           result []]
      (if (empty? candidates)
        result
        (recur (rest candidates)
               (if (and (not= (first candidates) word)
                        (contains? wordset (first candidates)))
                 (concat result (list (first candidates)))
                 result))))))
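As an aside, the explicit loop/recur is not essential here; the same filtering can be expressed with filter (a sketch, returning a lazy sequence instead of a vector):

;; alternative formulation of find-anagrams using filter
(defn find-anagrams [word words]
  (let [wordset (set words)]
    (filter #(and (not= % word) (contains? wordset %))
            (generate-anagrams word))))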

This now, of course, leaves us with the task of implementing a permutation algorithm. I must admit I had a pretty hard time coming up with something on my own without resorting to looking at other people’s code. Given that the task of code katas is not primarily to invent algorithms on the fly, but to practice coding, I finally read the Wikipedia paragraph on computing permutations in lexicographic order, which contains a blueprint of an algorithm attributed to Narayana Pandita, who described it as early as the 14th century. I took a very verbatim and top-down approach this time and ended up with this as the next piece of code:

(defn next-permutation [squence]
  (when-let [k (find-largest-index-with-bigger-successor squence)]
    (let [l (find-largest-index-bigger-value squence k)
          swapped (swap-positions squence k l)
          current-perm (reverse-tail swapped (inc k))]
      current-perm)))

(defn generate-permutations [squence]
  (let [start-perm (sort squence)]
    (loop [permutation (next-permutation start-perm)
           result (list start-perm)]
      (if (or (not permutation)
              (empty? permutation))
        result
        (recur (next-permutation permutation)
               (concat result (list permutation)))))))

I went back to writing tests:

   (fact "finding the largest index with bigger successor"
         (find-largest-index-with-bigger-successor [1 2]) => 0
         (find-largest-index-with-bigger-successor [1 2 3 4]) => 2
         (find-largest-index-with-bigger-successor [1 2 4 3]) => 1
         (find-largest-index-with-bigger-successor [1 3 4 2]) => 1
         (find-largest-index-with-bigger-successor [1 4 3 2]) => 0)

The test results are taken straight out of the Wikipedia article. Implementing this is pretty straightforward: we just iterate through the list, keeping track of the current position, and check whether the following element is bigger than the current one. If so, we keep the current position, otherwise we keep what we had so far as the result. When we reach the end of the sequence (or there is no successor to compare to), we have found the largest position (index) that has a successor with a bigger value. One thing is worth pointing out: the usage of (comp pos? compare) is necessary because > only works on numbers, but not on characters (or keywords). Why Clojure does not follow Python (which provides general-purpose comparison operators that use something like compare under the hood and which you can override for your data types) in this respect is beyond me.
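To illustrate the point about compare, a quick REPL check (a sketch):

(compare \b \a)              ;=> 1
((comp pos? compare) \b \a)  ;=> true
;; (> \b \a) throws a ClassCastException, as > only accepts numbers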

(defn find-largest-index-with-bigger-successor [squence]
  (loop [restsq (seq squence)
         curpos 0
         curresult nil]
    (cond (or (empty? restsq)
              (empty? (rest restsq)))
          curresult
          ((comp pos? compare) (second restsq) (first restsq))
          (recur (rest restsq) (inc curpos) curpos)
          :else
          (recur (rest restsq) (inc curpos) curresult))))

The next step is finding the position of some value that is bigger than the value at the position we just determined. Again, this is straightforward:

   (fact "finding the largest index that has a bigger value than some other position"
         (find-largest-index-bigger-value [1 2 3 4] 2) => 3
         (find-largest-index-bigger-value [1 2 4 3] 1) => 3
         (find-largest-index-bigger-value [1 3 4 2] 1) => 2
         (find-largest-index-bigger-value [1 4 3 2] 0) => 3)

(defn find-largest-index-bigger-value [squence index]
  (let [compval (nth (vec squence) index)]
    (loop [restsq (seq squence)
           curpos 0
           curresult nil]
      (cond (empty? restsq)
            curresult
            ((comp pos? compare) (first restsq) compval)
            (recur (rest restsq) (inc curpos) curpos)
            :else
            (recur (rest restsq) (inc curpos) curresult)))))

We now have to swap these two elements which is easy enough to do with vectors:

   (fact "swapping two positions in a sequence"
         (swap-positions [1 2 3 4] 2 3) => [1 2 4 3]
         (swap-positions [1 2 4 3] 1 3) => [1 3 4 2]
         (swap-positions [1 3 4 2] 1 2) => [1 4 3 2]
         (swap-positions [1 4 3 2] 0 3) => [2 4 3 1])

(defn swap-positions [squence k l]
  (let [seqvec (vec squence)]
    (assoc (assoc seqvec k (nth seqvec l))
           l (nth seqvec k))))

I first fiddled around with take and drop to avoid converting the input sequence to a vector, but this made the code much more complex. Why there is no general-purpose position-based replace is, again, beyond me — there are a number of discussions around (the lack of) a general-purpose subsequence function which point out issues with complexity (code- and performance-wise), but I doubt that most manually-crafted workarounds lead to any better solutions. Maybe I’m missing something obvious here.

Next, we need to reverse the rest of the sequence behind the position which we just swapped. The example in the Wikipedia article is not entirely clear for longer remainders, but some tests revealed that the right position is really the one we just used, like this:

   (fact "reverse the tail of a sequence"
         (reverse-tail [1 2 4 3] 2) => [1 2 3 4]
         (reverse-tail [1 3 4 2] 1) => [1 2 4 3]
         (reverse-tail [1 4 3 2] 0) => [2 3 4 1])

(defn reverse-tail [squence tail-position]
  (let [prefix (take tail-position squence)
        tail (drop tail-position squence)
        revtail (reverse tail)]
    (concat prefix revtail)))

So, with this we now have all pieces in our hands and can test the entire algorithm:

   (fact "finding the next permutation"
         (next-permutation [1 2 3 4]) => [1 2 4 3]
         (next-permutation [1 2 4 3]) => [1 3 2 4])

Which will, surprise, surprise, give the expected results. So with this we are able to generate all 24 permutations of [1 2 3 4], and we can go back to our anagram task. It turns out that the tests would fail: I hadn’t thought about the fact that taking the word apart into a character sequence requires recombining the permutation results into strings afterwards. That’s easy enough to correct by applying str to all permutation results.

(defn generate-anagrams [word]
    (map (partial apply str) (generate-permutations word)))

Now, when you run this code with the test data given in the original kata:

   (facts "Testing the anagram implementation"
       (fact "Generating all anagrams"
         (generate-anagrams "ftw") => '("ftw" "fwt" "tfw" "twf" "wft" "wtf"))
          (let [words (split-lines (slurp "wordlist.txt"))]
            (find-anagrams "kinship" words) => '("pinkish")
            (find-anagrams "enlist" words) => '("inlets" "listen" "silent")
            (find-anagrams "boaster" words) => '("boaters" "borates")
            (find-anagrams "sinks" words) => '("skins")
            (find-anagrams "knits" words) => '("stink")
            (find-anagrams "rots" words) => '("sort")
            (find-anagrams "thelaw" words) => '("wealth")))

I ran into a StackOverflowError for “boaster”, though. Looking at the code, it’s immediately obvious that the only possible cause for this is in generate-permutations, which generates the result eagerly. So, let’s change that to a lazy variant.

(defn- gen-perms [squenze]
  (lazy-seq
   (when-let [permutation (next-permutation squenze)]
     (cons permutation (gen-perms permutation)))))

(defn generate-permutations [squence]
  (let [start-perm (sort squence)]
    (cons start-perm (gen-perms start-perm))))
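Since the sequence is now produced lazily, consumers only pay for the permutations they actually take; a quick REPL check (a sketch):

(take 2 (generate-permutations "abc"))
;;=> ((\a \b \c) (\a \c \b))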

I use an external helper here because we need to add the start permutation to the final result up-front and that doesn’t lend itself to a self-recursive function. Anyway, this concludes the first solution using a rather traditional algorithm.

Declarative solution

For the next solution, I intended to use something else. Last year, I had the chance to hear David Nolen talk about core.logic which reminded me a lot of the old days in which I was using Prolog for computational linguistics and logic programming. In particular I was thinking of a permutation implementation in Prolog described in Richard O’Keefe’s Craft of Prolog, which I briefly discuss below:

permutation(Xs, Ys) :-
    permutation(Xs, Ys, Ys).

permutation([],[],[]).
permutation([X|Xs], Ys1, [_|Bound]) :-
    permutation(Xs, Ys, Bound),
    insert(Ys, X, Ys1).

insert(L, X, [X|L]).
insert([H|T], X, [H|L]) :-
    insert(T,X,L).

If you want to generate all permutations of the list [1,2,3], you call permutation([1,2,3],Q) and your Prolog interpreter of choice (e.g. SWI-Prolog) will generate the first possible result for Q and, via backtracking, generate all other possible permutations.

?- permutation([1,2,3],Q).
Q = [1, 2, 3] ;
Q = [2, 1, 3] ;
Q = [2, 3, 1] ;
Q = [1, 3, 2] ;
Q = [3, 1, 2] ;
Q = [3, 2, 1].

Let’s briefly discuss the Prolog solution; this will make it easier to discuss some issues when translating it to core.logic later on. Prolog uses facts and rules to prove some query. E.g., permutation([],[],[]). is a fact asserting that the permutation of an empty list is the empty list. Anything involving :- is a rule. First, Prolog uses unification — hang on, you’ll see in a second what this is. Next, you see all those [X|Xs] constructions. These are basically list (de-)construction operations: they split off the first element or add an element (head) and some rest (tail) to form a new list. The point here is that if you’re calling permutation([1,2,3],Q,Q), Prolog will try to unify [1,2,3] with [X|Xs], which is possible when X=1 and Xs=[2,3]; i.e. Prolog automatically tries argument unification. The _ construct means “ignore”, “don’t care”. If we consider only the insert fact (i.e. the first statement), this fact can be used by Prolog via unification to answer queries about any value of the predicate:

?- insert([2,3],1,Q).
Q = [1, 2, 3] 
?- insert([2,3],Q,[1,2,3]).
Q = 1 
?- insert(Q,1,[1,2,3]).
Q = [2, 3] 

The key to understanding how permutation works is to consider how insert works: the insert rule will deconstruct the first argument (assuming it’s a list) and insert the second argument into it. This way, X will be inserted at all possible positions of the list:

?- insert([2,3],1,Q).
Q = [1, 2, 3] ;
Q = [2, 1, 3] ;
Q = [2, 3, 1].

Now, if you take a closer look at the permutation/3 rule, you’ll recognize that it first of all contains a recursive call to itself. This will basically decompose the first argument (if given) until it reaches the permutation fact governing the base case, i.e. the empty list. It will then insert the elements according to the behavior discussed above. You can think of each comma (,) as an and that also implies an order, i.e. the insert clause will only be used after having processed the recursive call to permutation on each level, respectively. This basically implies a depth-first search — i.e. for generating the multiple values for Q, Prolog will try to find different possible combinations by retrying parts of the computation. This will in particular trigger the computation of the different results of insert/3.

Now let’s come back to Clojure’s core.logic, which provides an implementation of many useful things for logic programming based on miniKanren. However, as an add-on to a functional programming language, we will have to use some special operators to translate the Prolog code. The first thing needed is the declaration of the query variables (e.g. Q) within the call to run*; without it you would never see any results (besides run* causing the inference machinery to, well, run). The next operator is ==, which is used for unification, just as = would be used inside some rule in Prolog. Sometimes you need temporary logic variables, which you can introduce with fresh. There is also an explicit operator conde (similar to cond) which can be thought of as providing disjunction (or). You need this to be able to mimic Prolog’s multiple facts/rules with the same predicate, e.g. having a simple fact and a rule for permutation/3. There are also further predicates, e.g. conso, which can be used to splice/construct lists. This is all nicely explained in the core.logic Primer. I actually started out trying to convert the Prolog code with not much else, like this:

(defn insert-broken [x l nl]
  (conde
   [(conso x l nl)]
   [(fresh [h t]
      (conso h t l)
      (insert-broken x t l)
      (conso h l nl))]))

You’ll note that I exchanged the positions of the single element and the list in order to match the usual argument positions of conso (or conj). Otherwise this looks like a pretty straight translation of the Prolog rules above: either we can directly (via conso) (de)construct the list, or we recurse. This version is broken in multiple ways, though. First of all, when you test it, the recursive call to insert-broken is not constrained enough wrt. the value of l, which triggers an infinite recursion. You need to put the recursive call behind the second call to conso (cf. the discussion of my inquiry on StackOverflow). However, there is another issue lurking, which you can see when comparing the results:

(defn insert-still-broken [x l nl]
  (conde
   [(conso x l nl)]
   [(fresh [h t]
      (conso h t l)
      (conso h l nl)
      (insert-still-broken x t l))]))

FAIL "Checking insert - Simple insert" at (logic_test.clj:11)
Expected: ((1 2) (2 1))
Actual: ((1 2))

FAIL "Checking insert - Simple insert" at (logic_test.clj:12)
Expected: ((1 2 3) (2 1 3) (2 3 1))
Actual: ((1 2 3))

As you can see, this version generates only a single result, inserting the element just in the first position, not in the other positions of l. The reason for this is that we are constraining the solution too much: by using l in the recursive call, we’re constraining the “result” (the value of the third argument) to the initial value of l. This is not what we are doing in the Prolog version, where l is just a temporary value generated in the recursive call. I.e., I fooled myself by basically running into a variable capture problem. So, the correct version of insert looks like this, introducing another fresh variable l1:

(defn insert [x l nl]
  (conde
   [(conso x l nl)]
   [(fresh [h t l1]
      (conso h t l)
      (conso h l1 nl)
      (insert x t l1))]))
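With the fix in place, the relation behaves like its Prolog counterpart; a quick REPL check (this assumes clojure.core.logic is referred, and the result order may differ):

(run* [q] (insert 1 [2 3] q))
;;=> ((1 2 3) (2 1 3) (2 3 1))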

However, the discussion on StackOverflow also pointed me to the matching predicates which are also shown, but not explained at all in the examples section of the core.logic wiki. In particular, core.logic offers a defne macro which basically provides a pattern matching facility which is remarkably close to what Prolog provides wrt. argument matching. Consider the following version of the same predicate using defne:

(defne inserto [L X NL]
  ([L X (X . L)])
  ([(H . T) X (H . L1)]
   (inserto T X L1)))

defne will basically expand into a set of conde expressions, but will also generate fresh variables and matching/unification expressions as appropriate. If you compare this version with the Prolog version, it’s easy to see the parallels: in the second rule, the arguments given for the parameter list L X NL will be unified with [(H . T) X (H . L1)] (note that inserto uses the same parameter order as the Prolog version), thereby decomposing any sequence given as L into head H and tail T — this is basically the same as (conso H T L).
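A quick sanity check that inserto behaves the same way (a sketch; note the L X NL argument order, and the result order may differ):

(run* [q] (inserto [2 3] 1 q))
;;=> ((1 2 3) (2 1 3) (2 3 1))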

Having covered all those nitty-gritty details of insert, understanding permuto/3 should be straight-forward:

(defne permuto3 [I O L*]
  ([nil nil nil])
  ([() () ()])
  ([(X . Xs) Ys1 (_ . Bound)]
   (fresh [Ys]
     (permuto3 Xs Ys Bound)
     (inserto Ys X Ys1))))

We have two (empty) base cases and a recursive clause again. We’re decomposing the input I into (X . Xs) and unifying O (typically the query variable) with Ys1. Using a fresh new variable, we recurse with the sublist Xs to produce the permutations of the sublist, eventually inserting X into them. For reference, this is what the non-matching version looks like, which makes the argument unification and the value decomposition much more obvious:

(defn permutation
  ([xs ys] (permutation xs ys ys))
  ([xl yl res]
   (conde
    [(== xl '()) (== yl '()) (== res '())]
    [(== xl nil) (== yl nil) (== res nil)]
    [(fresh [_ x xs ys bound]
       (conso x xs xl)
       (permutation xs ys bound)
       (conso _ bound res)
       (insert x ys yl))])))

We can finally wrap this permutation into the same surrounding code we used for the traditional solution to this anagram kata to compute nearly the same results (the order will differ).
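For example (a sketch; core.logic’s interleaving search may yield the results in a different order than SWI-Prolog does):

(run* [q] (permutation [1 2 3] q))
;;=> ((1 2 3) (2 1 3) (2 3 1) (1 3 2) (3 1 2) (3 2 1))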

Wrapping up, this kata was actually quite hard to solve and took quite a while. I spent too much time trying to find the traditional solution myself before focusing on translating it to Clojure. And then it also took me way more time than I had imagined to get into core.logic, which could use quite a bit more documentation besides the primer on the basics. Anyway, core.logic looks like a very nice addition to the Clojure universe.


Tiny cities made of ashes -- stories of agile maturity

Tom Boulet has recently triggered a discussion in the Agile+ community on Google+ about user stories, cf. the following quote from his inquiry:

A couple weeks ago, I heard someone say “The stories weren’t detailed enough for the development team..” I thought, Wha? Stories are meant to be high level and if you use acceptance criteria this comes after the team starts looking at them, doesn’t it? It sounded like the person felt that Stories had to be “fleshed out” somehow before development could get started. […] Also, I’ve recently seen talk of how stories should be small enough to code in a day or two (I’d call that a task) and another reference to “codable stories.” None of this sounds right to me. It sounds like a large number of Agilists are seeing stories as part of the design pre-development. Has the definition of Story shifted on me?

The resulting discussion is highly insightful. I particularly like one reply by Alan Dayley, which I quote in some parts:

It is hard to describe in brief sentences all the forces that pull organizations to keep them thinking in lots of details. A few might be: - A culture and history of Big Design Up Front that makes it hard for feature definition to happen just in time. - A continued need for long range planning requiring estimates for work that will possibly take months before it actually starts. - Skilled technical people who are accustomed to talking about technical details instead of the needs of the user. And more…

The most powerful and subtle force to cause “user stories” to grow is a continued lack of focus for the people defining features and those creating the features.

For example, if I am Product Owner of 5-12 products working with 3 teams who are all working on multiple products, I am not able to think of a feature, collaborate on it’s definition as it is built, see it created and then think about the next feature. I have to document my thinking of every feature because one hour from now I have to think about other features that also might not be built right away.

The key to being truly Agile is to finish stuff. The more inventory of ideas, features, undeployed code, etc. that we have, the less Agile we can be.

This highly resonates with my thinking. But it also got me thinking and reminded me of another discussion I participated in in one of Xing’s agile groups: for many companies, being able to keep stories high-level, have the Product Owner engage with the development team just in time and clarify all those nitty-gritty details in short enough time that the team can implement them, sounds like the wet dream of agile heaven. I, too, have mostly seen teams in which substantial time was invested by the Product Owner (at least), often accompanied by the Scrum Master and some development team members, to “groom the backlog” and to prepare the stories for the next sprint. This preparation, which more often than not involves adding lots of details according to some definition of ready, typically also includes story splitting. I’ve seen development teams outright refuse to work on stories “too big” according to some arbitrarily set size limit. I guess the reasons behind this basically boil down to fighting FUD (fear, uncertainty and doubt), cf. this article listing potential benefits of small stories:

There are lots of benefits from having smaller stories.

From a user and usage perspective, reducing the story size forces me to understand users and usage better. It forces me to separate what really is valuable or useful to the users from what’s merely interesting. From a release management perspective having lots of smaller stories allows us more flexibility with planning, prioritizing, and cutting scope. From a development and detailed design perspective, a smaller story often implies we know more about what to build - there’s less unknown, less risk – less risk of the development work mushrooming into something much larger.

There are two obvious problems with this:

  • The development and release management perspectives and their respective goals are not a good match for the business value that is driving a story. As a result, when you decompose stories into slices too thin, the business value proposition, i.e. the “in order to” part of Mike Cohn’s classical user story template, often becomes very awkward and only understandable in the direct context of some user interaction. Quick, tell me, what is the real business value of “As an order maker, I need to enter my credentials to access my orders” (cf. Gojko Adzic’s rant that writing “As a user” doesn’t make it a user story)?

  • The other problem is that often “understand users and usage better” is actually design work, as also asked by Tom Boulet in the discussion: “Can requirements really be separated from design? As requirements become more detailed don’t they really become design? Does this constrain the design part of development? […] And ultimately, can good design really be done without coding trial and error? Does a customer and PO really know what they want up front?” Here, the article by Mary Poppendieck on the Product Owner problem is helpful: according to her, the most important role of the development team is product design:

    The entire team needs to be part of the design decision process. Team members should have the level of knowledge of the problems and opportunities being addressed necessary for them to contribute their unique perspective to the product design. Only when decisions cannot be made by the development team would they be resolved by a product leader. The main team-facing responsibility of the product leader is to ensure the people doing the detailed design have a clear understanding of the overall product direction.

It would seem a development team with a “definition of ready” that requires lots of details and splitting stories into thin slices is lacking this “clear understanding” and arguably refusing to take responsibility for designing the product. Again, my gut feeling is that this is a sign of the maturity of an organization wrt. agility: e.g., building this clear understanding typically requires a longevity of development teams which often clashes with the “project-based” approach taken by many organizations. And if development teams don’t know the market and the customers they are building the product for, where should this understanding come from?

In the discussion, Heiko Stapf also referred to the agile principle of “individuals and interactions over processes and tools”, noting that “Having PO/Tester/Techguys sitting together finding a common understanding and language” is more about favoring individuals and interactions, whereas “having the PO writing down requirements and passing them (in written form)” is more about process/tools. I find this to be indeed the case, where sometimes the underlying cause is that it’s difficult to arrange for enough room (time) to have the necessary interaction. But it might also be simply a lack of collaboration, i.e. when developers just feel it’s not their job to work on (the formulation and details of) user stories.

I’ve also seen teams run into problems with getting the right information in time. IMHO these are either due to a lack of knowledge on the Product Owner’s side or a lack of empowerment. In the former case, when some detail needs to be clarified, the Product Owner might not have the necessary domain expertise required to answer the question right away, and she needs to go back to some domain expert (who might not be immediately available, delays occur etc.). The latter case is where the Product Owner has an opinion on some matter (e.g., whether to use this or that design) but feels she needs to go back to some major stakeholder (potentially multiple). Also, when the team (PO + dev team) hasn’t yet found the path to close collaboration, the resolution of some issue might require too much time to allow for implementation. In such scenarios, the idea of using a definition of ready with small stories is actually aimed at ensuring that some work can be finished in some time box at all (this is linked to ensuring a story is “feasible”, cf. Roman Pichler’s take on the “definition of ready”).

Of course, there are also legitimate reasons to split stories which are linked to other organizational issues: e.g., when the story is so big it can’t possibly be implemented by a single team in a given small time box and hence needs to be split over multiple teams or multiple time boxes. Or when the requirement is still more an epic than a “ready” story and, when split, business value can still be obvious for the smaller stories. And then there is also the process where you start out with a vague “feature idea” and need to understand it more before it would be ready for implementation, as I’ve described in my post on agile conception.

In summary, I would say that a large backlog with lots of tiny stories is a clear indicator of a lack of maturity of the team and / or organization. In an ideal world, all our teams would be mature enough that nobody would ever want to have too many detailed written specifications upfront, but alas, in our reality we have to work hard to get there. Unfortunately, there is no easier answer than to try and persist: gain the knowledge and the confidence, build the tools and the trust you need.

ObTitle: Modest Mouse, from “The Moon & Antarctica”


Fresh strawberries -- web programming in Clojure

I recently started working on a web application in Clojure. This being my first contact with quite a lot of tools / libraries for web programming in Clojure, I ran into quite some stuff that was not obvious. First, some background: it’s basically a very small application which will have only a very small number of users, so we don’t need a highly interactive, reactive UI. So, no need for a single-page application; a classical web application where the server serves up a small set of pages and has all the logic is just fine.

Starting out with ring, lein-ring and compojure was pretty straightforward; going beyond the easy stuff, not so much. One such problematic issue I ran into was feeding some configuration to Jetty (JNDI resources), as lein-ring does not seem to offer any way of reading an existing jetty-web.xml file. This implies that you’re basically down to using and configuring Jetty as described in the embedding Jetty examples. This snippet is what I’m using to declare my database connection (you’ll need dependencies for clj-dbcp, org.mortbay.jetty/jetty, org.mortbay.jetty/jetty-plus, org.mortbay.jetty/jetty-naming and javax.servlet in your project.clj):

(ns myapp.jetty-config
  (:require [clj-dbcp.core :refer [make-datasource]])
  (:import (org.mortbay.jetty.plus.naming Resource)
           (java.util Hashtable)
           (javax.naming InitialContext Context)))

(defn make-my-datasource []
  (make-datasource
   {:adapter :mysql :host 'localhost :database 'mydb
    :username "myuser" :password "mypw"}))

(defn setup-jetty-context []
  (let [ht (Hashtable.)]
    (.put ht Context/INITIAL_CONTEXT_FACTORY "org.mortbay.naming.InitialContextFactory")
    (.put ht Context/PROVIDER_URL "org.mortbay.naming")
    (InitialContext. ht)
    (Resource. "java:comp/env/jdbc/etrans" (make-my-datasource))))

This being solved, during the course of the project I aimed for running the application on Tomcat. Figuring out how to develop with Jetty and make it run on Tomcat also took quite a while. First of all, you don’t want to have the above Jetty configuration and dependencies dragged in when running lein ring war, but instead want to rely on the “normal” persistence.xml mechanism to define persistence units. The key to the first issue is making use of leiningen’s profiles in combination with the :init keyword of the configuration options for lein ring. I.e., I removed all Jetty configuration and dependencies from the top-level ring configuration in my project.clj and added a :dev profile which has the required additional dependencies mentioned above plus a :ring section with an :init key pointing to a setup function which calls the required Jetty configuration functions:

(defproject myproject "0.1.0-SNAPSHOT"
  :description "A small web application"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [org.clojure/tools.logging "0.2.6"]
                 ;; .. other dependencies, but nothing Jetty related
                 ]
  :plugins [[lein-ring "0.8.8"]]
  :ring {:handler myapp.handler/app
         :web-xml "war-resources/web.xml"}
  :profiles
  {:dev {:dependencies [[javax.servlet/servlet-api "2.5"]
                        [org.mortbay.jetty/jetty "6.1.23"]
                        ;; etc.
                        ]
         :ring {:handler myapp.handler/app
                :nrepl {:start? true}
                :init myapp.jetty-config/setup-jetty}}})

Making use of the persistence.xml file turned out to be pretty easy: you can place “normal” Tomcat configuration files in the directory specified by the :war-resources key, e.g. you can put your persistence.xml in war-resources/META-INF/. You can also specify the location of your web.xml explicitly. I did an initial lein ring war run, had it generate an initial web.xml and modified that later on as needed. Another issue was that I wanted to run the application under Tomcat in parallel with other applications, i.e. with its own named web application context, which is simple enough. But running under Tomcat as some non-exclusive webapp (i.e. not installing it as ROOT.war) forces you to use relative links. But how do you find out the right prefix for servlet-global resources? Useful info gets added to your request under the :context, :path-info, :servlet-context and :servlet-context-path keys, according to ring.util.servlet. I used this wrapper to figure out what I needed:

;; assumes (:require [clojure.tools.logging :as logging])
(defn wrap-show-request-context [handler]
  (fn [request]
    (when-let [context (:context request)]
      (logging/info (str "Request with context " context)))
    (when-let [pathinfo (:path-info request)]
      (logging/info (str "Request with path-info " pathinfo)))
    (when-let [servlet-context (:servlet-context request)]
      (logging/info (str "Request with servlet-context " servlet-context)))
    (when-let [servlet-context-path (:servlet-context-path request)]
      (logging/info (str "Request with servlet-context-path " servlet-context-path)))
    (-> request
        handler)))

Finally, I had a dependency on a local library which I resolve via the localrepo extension to leiningen. However, lein-ring knows nothing about the localrepo extension, hence lein ring uberwar will not include the dependencies that only exist in your ~/.m2/repository/. The workaround is to make a WEB-INF/lib directory and place all required jars in it, although figuring out all the jars you need is a very tedious process.

Given the small scale of the web application, I decided very early on that hiccup should fit my small requirements, although I wanted to use bootstrap, too. The one problem I ran into is a typical functional programming issue: information becomes available at some point which is required exclusively in some other function way down the chain of callers. The concrete problem was using the right servlet context in the main-page layout template to request the right resources. Handing down the parameter (or the request itself) to the view seems wrong, because then I would basically make use of internal knowledge about the inner workings of my pages. I finally extended the wrapper above to bind a dynamic variable:

(def ^:dynamic *app-context* nil)

(defn wrap-context [handler]
  (fn [request]
    ;; ... logging code elided ...
    (binding [*app-context* (str (:context request) "/")]
      (-> request
          handler))))

I then refer/use this variable during the page setup. I decided against using an atom or some such because I specifically don’t want other requests/threads to see or modify the current value — this is intended to be a request-only accessible (global) variable. Not sure whether this is the best way to go, though; I guess there are more elegant solutions.
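For illustration, a rough sketch of how this can look during page setup (layout and the CSS path are made-up names; hiccup.page is assumed to be required):

;; hypothetical layout fn using the dynamic var for context-relative resources
;; (assumes (:require [hiccup.page :refer [html5 include-css]]))
(defn layout [title & body]
  (html5
   [:head
    [:title title]
    (include-css (str *app-context* "css/bootstrap.min.css"))]
   [:body body]))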

I had seen a talk on liberator at EuroClojure 2013, which promises a pretty declarative way of sorting out how to react to web requests, and it looked nice enough to give it a try. Although things can get somewhat complex rather quickly, I still like it. One such issue is that the interaction between liberator’s context argument and compojure parameters isn’t always obvious. Basically all handlers need to take a context argument (possibly ignored), as is detailed somewhat more in the documentation of the execution model. Resources (i.e. their definitions) can have arguments as well, however, which are not related to the context — they need to be provided when your handler calls the resource. I.e., check the following resource definition:

(defresource someresource [someid]
  :available-media-types ["text/html" "application/json"]
  :method-allowed? (request-method-in :get)
  :exists? (fn [context]
             {:something (find-something someid)})
  :handle-ok (fn [context]
               (myview (get context :something))))

(defroutes myapp
  (GET "/something/:someid" [someid] (someresource someid)))

The route definition uses compojure’s parameter extraction to hand someid over to the resource. The context argument to the decision function :exists? and to the handler :handle-ok doesn’t contain it, but you can access it via the parameter someid of the resource. Another thing which had me scratching my head was how to do a redirect on a GET request, which I wanted as a result of logging out of the application. You have to combine liberator.representation/ring-response with ring.util.response/redirect. moved-permanently? and handle-moved-permanently or handle-see-other seem not to be intended for this; at least I was not able to use them for this purpose.

(defresource logout
  :available-media-types ["text/html"]
  :method-allowed? (request-method-in :get)
  :handle-ok (fn [context]
               (ring-response
                (response/redirect
                 (str (get-in context [:request :context]) "/")))))

I also have resources which handle GET and POST/PUT requests (the same resource). You can combine this, and in principle it’s as simple as it sounds, but figuring out which handler gets called when is not: for example, is handle-ok called after put! created something? Liberator does come with quite some documentation and its tracing facility is really helpful, but it really takes some time getting into it. For instance, I don’t think it’s documented that the list of available media types determines the default media type generated: if your request doesn’t specify that you prefer to accept application/json and your available media types have text/html as the first element, you’ll get text/html. I ran into this with my unit tests: you may need to use ring.mock.request/header to set the “accept” header (and don’t get fooled by ring.mock.request/content-type). I think, in the middle to long term, my biggest concern is the question how stable this API is. It’s currently at 0.10, which leaves quite a lot of numbers before implying any notion of stability.
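To make this concrete, a hypothetical midje test pinning down the negotiated media type (this assumes (:require [ring.mock.request :as mock]) and the someresource route shown above):

(fact "asking explicitly for JSON"
  (let [response (myapp (-> (mock/request :get "/something/42")
                            (mock/header "accept" "application/json")))]
    (:status response) => 200))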

Then, I also used friend for the authentication, which is an interesting library whose current version of 0.2 again makes me wonder about the stability of the API. I’m currently staying very close to the bare minimum of features and so far haven’t really run into any bigger technical issues. There is one thing that is quite apparent though: while it provides role-based authorization, friend is currently missing any idea of access rights, i.e. it’s lacking a connection of roles to rights. Hence you guard functionality with calls to authorize, not with a declaration of the required rights. Friend is also seriously lacking documentation, e.g. about *identity* or current-authentication, which you might want to use to determine data about the currently logged-in user. I also had issues with testing my code after introducing friend, because it’s not at all obvious how to provide the needed authorization from test code. I ended up using midje’s mocking machinery (i.e. provided) to mock authorized?, which underlies friend’s authorize macro. Note that provided assumes that the one checkable directly above it is actually calling the mocked code; you can’t have provided mock for two or more checkables or in some let construct (background prerequisites can help with this).
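For reference, the mocking looked roughly like this (a hypothetical sketch; the /admin route is made up, cemerick.friend is assumed to be aliased as friend and ring.mock.request as mock):

(fact "the admin page is served when authorization succeeds"
  (:status (myapp (mock/request :get "/admin"))) => 200
  (provided
    (friend/authorized? anything anything) => true))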

Finally, I used java-jdbc for accessing the database. Again, I’m wondering how stable this API is: there were large changes with 0.3, which broke compatibility with at least one of the SQL DSLs mentioned on the project page. This includes splitting out the old SQL DSL into its own module java-jdbc/dsl. And alas, documentation and examples are also somewhat lacking; at least the unit tests were useful to figure things out.

So, what have I seen so far? I guess a lot of interesting technology, a notable lack of satisfying documentation, and APIs in widely varying degrees of stability.

ObTitle: Franz Ferdinand, from “Right thoughts, right words, right action”


Moving targets -- on agile conception

Two things are clear for every project and every product: there is always a start and an end. I’ll ignore the end for now and take a closer look at the start. One has to wonder: where do all these projects or new products come from? Obviously, somebody finds out at some point that there is a need for something, and this something is the nucleus of the prospective development. If this idea requires significant effort, it’s typically necessary to convince the people responsible for the money that it is worthwhile to pursue the idea. Let’s take a closer look at what this something actually is and how it changes and needs to change before the developers can finally go make it real.

To use a more descriptive tag, I will use the term feature idea for now, although we could just as well be talking about ideas for a new product or product line. Feature ideas in a very early stage are rarely more than a single, simple statement or even just a few descriptive keywords like “I would like a button here where I can save my settings” or “remember me functionality”. Of course, whoever came up with the idea typically has second thoughts, more or less fuzzy, but these are neither communicated nor discussed or finalized.

A feature idea

Quite often these second thoughts are related to some existing systems or solutions, even if it’s just to describe what the respective solutions do not provide (note that these solutions might not be your own). Typically, feature ideas are about some specific expected behavior of some system and basically boil down to what needs to be built. This is all good: feature ideas don’t need further elaboration or lengthy descriptions in an early state.

How do you proceed from here to the nitty-gritty details that finally need to be done? In traditional settings, to get from the idea for a larger feature to development, you need to jump over the budget hurdle, which usually requires detailed analysis results, including business cases, return-on-investment calculations and implementation plans. The latter consist of a solution outline, milestones, resource plans and cost requirements. It is important to note that substantial effort needs to be invested already just to come up with all this. The bigger the need for precise information, the bigger the required investment. And then at some point you hopefully get the okay to move ahead. But in case the idea is turned down, all this investment in detailed plans was just waste. In agile setups, we would like to avoid that waste. Still, of course, the need for getting a decision on where the business wants to invest its money does not go away. But in order to do so, does senior management really need all those details? They don’t. There is a need for accuracy, not for precision (and the two are not the same thing). Asking for all the details is a sure sign of a lack of trust, but I have never seen any senior management trying to verify that all those precise details were actually accurate, which probably explains why the demand for documents and spreadsheets is all there is. This is then typically combined with taking all plans as promises that need to be kept and not as means to enable successful investments (command-and-control vs. plan-to-replan).

What is surely needed boils down to just one thing: understanding the value of an investment. It’s often much easier to determine what you want or can invest than to precisely estimate upfront what you need to invest (cf. this discussion of the no estimates idea, which shows the wonderfully relevant Dilbert strip on the same issue). At the top level, investment is essentially tied to strategy: e.g. in the Scaled Agile Framework (SAFe, cf. Leffingwell et al.), investment themes determine the relation between budget (resources) and the program portfolio of business epics. This essentially requires an understanding of how each possible business epic or feature idea contributes to the overall company strategy (this is basically applying the Lean principle of “optimize the whole” to feature ideas).

We essentially danced around the big elephant in the room now: feature ideas often describe expected behavior, but we need to know about the value of an idea. But if all we have is a simple statement of some idea (and not the full-fledged analysis we would like to avoid), how can we know its value? The essential insight is that it might not be necessary to understand the precise value yet. The earlier in the process we are, the less necessary it is to perform lengthy and detailed analysis: determining the value of some idea is only a means to further a decision on where to invest next. However, it should at least be possible to judge the value of one feature idea in relation, or better said in comparison, to other ideas, as Johanna Rothman discusses in her article on why cost is the wrong question for evaluating projects: at least in an early stage, it might be enough to use a very simple kind of rough value sizing, e.g. putting an “M” on one feature idea in comparison to an “L” for another. It is important to recognize that the value of ideas will change over time: in particular when looked at in relation to other things your business needs to think about, as new ideas come up, some ideas get implemented and others discarded. Accounting for these changes over time is why planning to replan is important in agile methodologies. An important concept in agile methodologies that helps making this clear is the cost of delay: the idea is that you try to understand or predict how the value contribution of some feature will change over time, in particular what impact not pursuing the idea will have. This is most useful for prioritizing what to invest in by cost of delay — obviously, as CoD is all about loss of value over time, as time moves on you want to reconsider your priorities (where you want to drill down further with an analysis etc.). (An alternative coming from the lean camp is to use an impact-effort matrix, but such a matrix needs an estimate of the effort required and also treats value as an absolute over time, which it usually isn’t. Cf. this article about cost of delay vs. WSJF in SAFe for a discussion of why cost of delay and value considerations are not the same.)

Inevitably, when you try to think about the value of some feature idea, you will think about what simple statements like “remember me functionality” should actually mean. The way in which we name or describe a potential feature is often a confusion (or at best a conflation) of problem and solution space, and that can make it quite difficult to judge its value. As such, naive feature descriptions can quickly become a big impediment. For one thing, they are — due to being explicitly simple — usually ambiguous and / or often hard to understand. Does “remember me functionality” describe that I stay logged in on the website, or does it mean that the website remembers the visitor and shows specific ads related to his interests, or both? This is often the reason why some people attribute completely different value to an idea. Worse, such descriptions typically don’t give any clue about the need behind the wanted behavior nor about the value this behavior brings. Just as bad is that stating the feature idea in terms of expected behaviour (“I want a button that …”) limits creativity by already pointing in the direction of a possible solution: maybe there are other options which are much better suited, both from a usability point of view and with regard to the required functionality. So the conflation of problem and solution is problematic both for understanding the problem and for designing the solution.

So what is the process to move from a feature idea to the solution design? I think it’s useful to see feature idea statements as promises for analysis. The analysis of a feature idea should essentially focus on understanding the user and / or business needs the idea is related to. The stereotypical user story template “As a <type of persona>, I want to <perform X> in order <to achieve Y>” is a useful tool in this respect, as it is explicitly constructed to include and name the need (cf. Gojko Adzic’s rant Writing “As a user” does not make it a user story for how this can go wrong when taken too lightly). Now you might not have such a story or might not have it yet, and maybe it’s not clear what this simple feature idea is about. That’s the time to engage with whoever seems able to contribute, but first and foremost it should probably be the primary beneficiaries of the feature idea. Traditional requirements engineering practice would suggest involving all relevant stakeholders (prospective users, management, marketing, legal department, …) and employing whatever RE tool seems reasonable (e.g. brainstorming, workshops, open interviews, etc.). In agile setups, determining who actually is a stakeholder that needs to be heard is the responsibility of the product owner. I find that it is usually also a good idea to involve the team, because they will typically have a good understanding of the solution space and should also know about the envisioned user needs. Don’t take it too far, though: if you are still trying to analyze in order to understand the value of some idea, i.e. you are still at an early stage (the inception phase, as the disciplined agile delivery folks would have it) of your planned feature, you should probably try to keep it light-weight.

You will likely get a lot of input on your feature idea, much of it in the form of comparably simple statements to what you originally started out with. Understanding the needs requires going beyond the level of simple statements or simple suggestions of simple solutions (Laura Klein has a nice description of how difficult that can be: “Sometimes users will tell you that they want a toaster in their car, when what they really mean is that they don’t have time to make breakfast in the morning.”). And although the famous 5 Whys might come across as somewhat strange or invasive in the course of an interview, it might take that many attempts to actually understand the problem that needs to be solved. If you’re lucky, a potential user will offer some simple insight into the problem with simple statements such as “I would like a button here so I can save my settings, so I don’t have to adjust them again and again”. Further analysis might reveal that the real problem is that there is a usability problem and the user forgets to adjust the settings too often, or that the user is a regular user who would like to spend less time on repetitive tasks. The main point here is that this process is about understanding the needs, which typically requires challenging any superficial and obvious ideas: this is both about maximizing the amount of work not done (simplicity) and about building solutions that satisfy the user.

Change in understanding over time

This is now where the team basically takes over: the team is responsible for working out the solution, not only constructing / implementing it, but essentially designing the solution. This is not to say that the product owner or other stakeholders cannot or should not contribute ideas on how to solve it, quite the contrary. It is quite essential to have stakeholders around to enable the knowledge transfer of needs from the users or stakeholders to the team. So while the separate paragraphs of this text might suggest a handover from product owner to development team, from needs to solutions, ideally this should be a smooth transition with lots of participation of the entire team and face-to-face communication. Tools like Roman Pichler’s product canvas are useful, but there is a reason why the agile manifesto recommends people interactions over tool usage. Again, for working out the solutions, many ideas and tools can and should be exploited, from brainstorming via UI scribbles to models and prototypes. Typically, when designing the solution, many alternatives come to mind, and different people from different backgrounds will have different opinions and favorites. This is actually good, because you can embrace this diversity to think about the best possible solution. However, the best solution might not actually be the solution you want to implement; often the simplest thing that could possibly work is a reasonable way to move forward: this minimizes the effort required, thereby enabling faster time to market and earlier feedback on whether this solution actually works (this is the tension between trying to build the best thing vs. learning fast from feedback about what actually proves to work). People with lots of experience in designing solutions will typically also come up with all kinds of challenging questions which might require drilling further down into the needs (e.g. questioning whether the feature is targeted at the right people or what the legal department has to say about some feature). This will trigger more learning about the feature under consideration. But this should not be taken as an excuse to run endless analysis or design meetings: it is often a better approach to end a discussion with a decision, move ahead and find out later whether this decision was actually flawed and needs correction. It is often a good idea to consider the risks associated with some idea: the more risk, the closer you want to consider your options. In general, when in doubt about which option to choose, think about what data would be helpful to decide: performing light-weight experiments might be useful for minimizing these risks (e.g. technical spikes or usability tests with paper-and-pen sketches).

A brief terminology recap: an epic is just some requirement (feature idea) that is so “large” and / or uncertain that it needs to be broken down, i.e. nobody could take an epic and just implement it; the developers would have to come back and ask lots of questions. If you wanted to, you could say that after breaking down an epic, all you end up with is features, but in agile circles these are usually split up into even smaller stories (cf. the image below). Of course, this is a completely arbitrary and often also highly context-dependent distinction. It’s also fairly common to use the user story template for describing epics. I used the term feature idea instead of epic to bring in some notion of time: epics start out as (feature or other) ideas. If you think about the timeline again, it’s clear that the closer you are to implementing some feature, the more details you need to know, i.e. the higher your level of understanding of the feature needs to be. And this is where the idea comes into play that you need to INVEST in your user stories, so that you can derive SMART tasks. This is obviously needed for the very next things you want to implement and, as any risk-aware tech guy or project lead will know, probably also a good idea for the next most things to do afterwards. Business cases are mainly needed when you are going for big investments and when it’s critical to understand the return on investment. But this brings us back to the discussion of value vs. cost and the discussion of precision vs. accuracy. Use cases, finally, are a useful tool when it comes to understanding some set of requirements in the context of a larger system. They can also be useful for driving discussions with stakeholders and are often a useful starting point if you like to model your problem / design before implementation. I see use cases as a useful tool on the feature level, covering a broader range of more detailed stories. As such it does not matter much when to describe a use case: if it helps you drive your understanding of some problem, use it before drilling down to more fine-grained descriptions. If you already have a lot of details and you fear losing the overview, it can also make sense to reconstruct some use case from more detailed stories.

Uncertainty vs. time

Finally, let’s step back briefly and re-consider who is part of this solution team. It’s quite common that for feature ideas of different sizes, different people will make up the solution team. I.e. when your feature idea is concerned with some detail of a larger system, the team might just be the developer team (in the Scrum sense). For bigger features, projects or programs, it’s quite common that lead developers, design and usability specialists, solution and / or enterprise architects and maybe operational staff will play an important part in working out the solution. Whether these people work in specialist teams (as e.g. SAFe assumes) or are part of the development teams is not of much importance, but having members of those teams work on the solution design is quite essential: if this is not the case, in the best case the final developers will have to rediscover and understand all the good reasons why you came up with the solution in the first place, in order to get it right. In the worst case, you might end up with frustrated developers who don’t take responsibility and don’t care about the built solution, because they feel they had no say in its design.


Coding katas Clojure -- Bloom filters

Kata5 is an interesting one to do in Clojure as Bloom filters, which are a probabilistic data structure for determining set membership, are all about spending as few bits as possible (or required) to store data. When bit manipulations are required, not many programmers would jump to Lisp or Java, and indeed most descriptions are about implementations in C or one of its derivatives. This probably does not come as a surprise, but as we will see it is not entirely justified (with regard to Lisp or Java).

But before discussing this in detail, let’s dive in with this kata. The description of the kata already tells us pretty precisely what we’re supposed to build: a bunch of hash functions and an array of bits which are then set or checked. We’ll start with the hash functions first. Hash functions are a dime a dozen: Java provides one, Clojure, too. Still, it is interesting to go beyond the readily provided functions and implement some hashing functions. The goal here is not to build perfect hash functions, but to get a feel for what an implementation of one looks like in Clojure (as this blog post series is about Clojure, not about Computer Science — the code is also available in my github repository for Clojure codekatas).

As the task here is to build a Bloom filter for strings, all hash functions basically boil down to iterating over a sequence of characters, converting each character into a numerical value (i.e. applying int) and then using this in an accumulating computation of the total hash value. We start out with more or less the simplest approach possible: we simply sum up the integer values of the characters (also known as the Kernighan & Ritchie “lose-lose” hash algorithm).

  (defn sum-chars 
    "Sum up the chars of a given string"
    [charseq]
    (reduce + (map int charseq)))

Nothing interesting to see but a straight-forward map/reduce, so let’s move on and take a stab at what Dan Bernstein (djb) suggested. In its most innocent version this looks pretty similar, but uses bit-shifting and has some magic numbers thrown in for good measure.

(defn djb-string-hash
  "Use djb's method for hashing a string"
  [charseq]
  (reduce (fn [curhash charval]
            (+ (+ (bit-shift-left curhash 5) curhash)
               charval))
          (cons 5381 (map int charseq))))

This is already where it gets interesting, because if you run this with the simple string “foobar”, you will run directly into an overflow. The clojure documentation on unchecked-add could have told me so directly, of course. I had to use quite a bit of websearch-fu, wild testing and knocking my head on the table to come up with this version:

(defn djb-string-hash
  "Use djb's method for hashing a string"
  [charseq]
  (reduce (fn [curhash charval]
            ^long (unchecked-add
                    (unchecked-add
                      (bit-shift-left curhash 5)
                      curhash)
                    charval))
          (cons 5381 (map int charseq))))

This looks quite good, but is still not bulletproof, as testing it with a longer string (> 11 characters) will show. The real issue is actually due to the JVM: when the unchecked operations receive boxed arguments, as they do here inside reduce, they apparently fall back to checked behavior, which results in an ArithmeticException on overflow. Java does not have unsigned numeric types, Clojure’s default math operations throw exceptions on overflow, and Clojure, leveraging the JVM, is directly affected by that issue.

      kata5-bloom-filters.core> (djb-string-hash "foobar")
      6953516687550
      kata5-bloom-filters.core> (djb-string-hash "foobarfoobar")
      ArithmeticException integer overflow  
      clojure.lang.Numbers.throwIntOverflow (Numbers.java:1388)

Compare for instance the behavior of Common Lisp: the straight-forward translation of the exact same naive implementation will not overflow (unless aggressively optimizing against safety or promising to the compiler that only certain results will occur), due to automatic promotion to bignums.

(defun djb-string-hash (charseq)
  "Use djb's method for hashing a string"
  (reduce #'(lambda (curhash charval)
          (+ (ash curhash 5) curhash charval))
       (cons 5381 (map 'list #'char-code charseq))))

Clojure does provide similar behavior with e.g. the BigInt type; however, this is of no use, as Clojure’s bit operations don’t know how to handle BigInt data — they are only defined for the primitive data types that Java provides. Worse, converting back from a BigInt to a primitive data type will not behave as one might naively hope: the checked coercion (int) throws an exception, and the unchecked coercion truncates to the low bits, which in two’s complement reads as -2:

kata5-bloom-filters.core> (bit-shift-left (bigint 20) 2)
IllegalArgumentException bit operation not supported for: 
       class clojure.lang.BigInt  
       clojure.lang.Numbers.bitOpsCast (Numbers.java:1008)
kata5-bloom-filters.core> (* 2 (bigint Integer/MAX_VALUE))
4294967294N
kata5-bloom-filters.core> (int (* 2 (bigint Integer/MAX_VALUE)))
IllegalArgumentException Value out of range for int: 4294967294
       clojure.lang.RT.intCast (RT.java:1115)
kata5-bloom-filters.core> (unchecked-int (* 2 (bigint Integer/MAX_VALUE)))
-2

Similar issues arose with all of the other hash functions (i.e. sdbm and fnv), cf. the code on github — nothing interesting here, so let’s move on to the bloom filter itself.

Multiple options come to mind when thinking about the data structure to use. Let’s start out with the simplest possible option: just use a simple number. This defaults to a Java long, i.e. a 64-bit number. This of course implies that we already have a rather arbitrary upper limit on the size of the bloom filter, which influences the number of possible entries and the number of false positives, cf. this overview article about the garden variety of bloom filters.
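As a quick reminder of why that size matters: for a filter of m bits holding n entries and using k hash functions, the false-positive rate is roughly (1 - e^(-kn/m))^k, so a fixed m of 64 bits saturates rather quickly.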

(defn bloom-add [bloom charseq & {:keys [hashfns]
                                  :or {hashfns *hash-functions*}}]
  (reduce #(bit-set %1 %2)
          (conj (map #(% charseq) hashfns)
                bloom)))

(defn bloom-contains? [bloom charseq & {:keys [hashfns]
                                        :or {hashfns *hash-functions*}}]
  (every? #(bit-test bloom %)
          (map #(% charseq) hashfns)))

(defn build-bloom [wordfile & {:keys [hashfns]
                               :or {hashfns *hash-functions*}}]
  (reduce #(bloom-add %1 %2 :hashfns hashfns)
          (cons 0 (string/split-lines (slurp wordfile)))))

The code here is pretty straight-forward, with the possible exception that we’re mapping over a list of functions in bloom-add and bloom-contains?. We could extract this part into a simple function which makes the code a little more readable.

(defn hash-string [charseq & {:keys [hashfns]
                              :or {hashfns *hash-functions*}}]
  (map #(% charseq) hashfns))

This very naive implementation will run into problems right away: the hash functions will yield hash values that are themselves 64 bits in size, whereas the highest bit position that can be set is 63. A straight-forward fix is to consider the size of the ‘bloom filter’ (i.e. 64 bits) and truncate the hash values accordingly via the modulo operation. I.e., instead of calling (bit-set bloom value) we do (bit-set bloom (mod value 64)).
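Spelled out, a minimal sketch of the adjusted function could look like this (bloom-add-64 is an illustrative name, and the *hash-functions* var is assumed as before):

(defn bloom-add-64 [bloom charseq & {:keys [hashfns]
                                     :or {hashfns *hash-functions*}}]
  ;; truncate each hash value to the 64 available bit positions
  (reduce (fn [b hashval]
            (bit-set b (mod hashval 64)))
          bloom
          (map #(% charseq) hashfns)))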

Now, if you think about it, using a simple number is probably not the optimal data structure: for one, we just limited ourselves to bit arrays that are 64 bits in size (which for instance implies that with the /usr/share/dict/words file you’ll end up with practically all bits set to 1), but due to the immutable nature of the mathematical operations we actually require a lot more space than just one long, thereby very much defeating an important characteristic of Bloom filters.

So let’s use a completely different idea and use one of Java’s mutable data structures: BitSets. This leads to the following naive, non-thread-safe implementation:

(defn bloom-add [bloom charseq & {:keys [hashfns]
                                  :or {hashfns *hash-functions*}}]
  (let [size (.size bloom)]
    (doseq [hashval (hash-string charseq :hashfns hashfns)]
      (.set bloom (Math/abs (mod hashval size)) true))
    bloom))

(defn bloom-contains? [bloom charseq & {:keys [hashfns]
                                        :or {hashfns *hash-functions*}}]
  (let [size (.size bloom)
        hashvals (hash-string charseq :hashfns hashfns)]
    (every? #(= (.get bloom (Math/abs (mod % size))) true) hashvals)))

(defn build-bloom [wordfile & {:keys [bloom-filter size hashfns]
                               :or {size 1024
                                    hashfns *hash-functions*}}]
  (let [bloom (or bloom-filter (BitSet. size))]
    (reduce #(bloom-add %1 %2 :hashfns hashfns)
            (cons bloom (string/split-lines (slurp wordfile))))
    bloom))

We basically just exchanged the bit-set/bit-test functions with the respective BitSet methods and use a dynamic size. This hints at a possible generalization: we could consider multiple bloom filter implementations (types, if you want to) that need to support some sort of bit-setting and getting operation plus size. This would be the internal protocol, while bloom-add/bloom-contains? (and maybe build-bloom) form the external API.
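As a quick usage sketch of that external API (the words file path and the results are illustrative; BitSet comes from java.util):

(import 'java.util.BitSet)

(def bloom (build-bloom "/usr/share/dict/words"))

(bloom-contains? bloom "clojure")  ; => true, if the word is in the file
(bloom-contains? bloom "zxqwv")    ; => false, barring a false positive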

Now, of course, we would like to fix the problem that this code is not thread-safe. As is made pretty clear in Fogus et al.’s book “Joy of Clojure”, Clojure’s reference types are of no use here:

“Wrapping a mutable object in a Clojure reference type provides absolutely no guarantees for safe concurrent modification. Doing this will at best explode immediately or, worse, provide inaccurate results.”

The advice Fogus et al. offer is to use the locking macro. If we combine this with the above idea of an internal protocol, we can at least apply the locking only where it is necessary, i.e. around the calls to .get/.set.

(defprotocol BloomFilterImpl
    (bloom-size [filter])
    (bloom-bit-get [filter position])
    (bloom-bit-set [filter position value]))

(extend-type BitSet
    BloomFilterImpl
    (bloom-size [filter]
        (.size filter))
    (bloom-bit-get [filter position]
        (locking filter
            (.get filter position)))
    (bloom-bit-set [filter position value]
        (if (< position (bloom-size filter))
            (locking filter
                (.set filter position value))
            (throw (IllegalArgumentException. 
                                    "position outside of bloom filter size")))))

If you wonder why the protocol does not have the add or contains? functions, this is because these operations would be part of some dictionary protocol or some such (although it is somewhat debatable if dictionaries should guarantee the absence of false-positives).
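Since the monitor locks taken by locking are reentrant, a caller can also wrap a whole batch of additions in one outer lock, so that no other thread locking on the same filter can interleave. A hedged sketch (bloom-add-all! is my name, not something from the repository):

(defn bloom-add-all!
  "Add all words to the bloom filter as a single locked batch."
  [bloom words]
  (locking bloom
    (doseq [word words]
      (bloom-add bloom word))))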

Let’s dig some more into the concurrency issue: it’s surprisingly hard to come up with a scenario where the mutability of the BitSet could be problematic. For one, we are always only adding entries by manipulating a single bit and do that in an atomic fashion that does not rely on the previous value of the BitSet in any way. For another, we don’t have any delete operation, so we can’t possibly run into the situation where some bit / some dictionary entry goes missing — assuming, of course, that all modifications to the bloom filter happen through the functions we supplied only and not by some other means directly on the BitSet outside our control. The only scenario that comes to my mind would be where one wants to keep the state of the bloom filter fixed in one thread, i.e. for some time we want to be able to deny having seen some value / word (which another thread just tried to sneak in while we were not looking). I can’t imagine a real world usage for this scenario, but that probably says more about my creativity than about anything else.

Let’s briefly discuss the options to account for this scenario: the locking approach above is not enough, as the locking occurs as part of the getting/setting operations — there is no way in which one thread could prohibit modifications to the Bloom filter for a specified amount of time with this. Of course, as locks nest, you could add locking outside the calls that add new elements to the bloom filter (as sketched above). The other option would be to use one of Clojure’s reference types. But as discussed above, these are not useful for protecting mutable data structures, so we would need to go back to using one of Clojure’s persistent data structures. So, let’s briefly step aside and compare the speed of Java arrays generated via Clojure and of Clojure vectors, both on booleans:

(defn make-random-boolean-array [size]
  (boolean-array
   (take size (repeatedly #(rand-nth [true false])))))

(defn make-random-boolean-vector [size]
  (into [] (take size (repeatedly #(rand-nth [true false])))))

(defn print-flipped-boolean-array [ba]
  (let [size (count ba)]
    (loop [indx 0
           result ""]
      (if (= indx size)
        result
        (do
          (aset ba indx (not (aget ba indx)))
          (recur (inc indx)
                 (string/join
                  [result
                   (if (aget ba indx) 1 0)])))))))

(defn print-flipped-boolean-vector [bv]
  (let [size (count bv)]
    (loop [indx 0
           vec bv
           result ""]
      (if (= indx size)
        result
        (let [newvec (assoc vec indx (not (get vec indx)))]
          (recur (inc indx)
                 newvec
                 (string/join
                  [result
                   (if (get newvec indx) 1 0)])))))))

kata5-bloom-filters.core> (time (do
                                  (print-flipped-boolean-array
                                    (make-random-boolean-array 10000))
                                  true))
"Elapsed time: 587.247368 msecs"
true
kata5-bloom-filters.core> (time (do
                                  (print-flipped-boolean-array
                                    (make-random-boolean-array 10000))
                                  true))
"Elapsed time: 592.598888 msecs"
true
kata5-bloom-filters.core> (time (do
                                  (print-flipped-boolean-vector
                                    (make-random-boolean-vector 10000))
                                  true))
"Elapsed time: 76.657272 msecs"
true
kata5-bloom-filters.core> (time (do
                                  (print-flipped-boolean-vector
                                    (make-random-boolean-vector 10000))
                                  true))
"Elapsed time: 69.666769 msecs"
true
kata5-bloom-filters.core> (time (do
                                  (print-flipped-boolean-vector
                                    (make-random-boolean-vector 10000))
                                  true))
"Elapsed time: 71.897087 msecs"
true

Now, if I run this a reasonable number of times, it appears that for simple element access the boolean vector outperforms the boolean array, even though I’m basically building up a new partial copy of the vector for each element. That was a welcome surprise for me. So, using a vector of booleans would be a viable next possibility. The road ahead, however, is hindered by the fact that Clojure does not derive a distinct type for such vectors, not even if you explicitly declare the vector to be of a certain element type. You’ll always end up with a plain vector or a Vec:

kata5-bloom-filters.core> (into (vector-of :boolean) [true false])
[true false]
kata5-bloom-filters.core> (type *1)
clojure.core.Vec

This is problematic as we don’t want to just extend-type the general type. The way out is the use of reify, which is just like extend-type for objects (that description is actually quite misleading; look at the documentation of reify and check for yourself — I would also recommend reading up on this in “Joy of Clojure”).

(defn make-bloom-vector [size]
  (let [bloom-vect
        (ref (into (vector-of :boolean)
                   (take size (repeatedly #(identity false)))))]
    (reify BloomFilterImpl
      (bloom-size [_]
        size)
      (bloom-bit-get [_ position]
        (nth @bloom-vect position))
      (bloom-bit-set [_ position value]
        (alter bloom-vect assoc position value)))))

As you can see, I’m using a closure to store the vector of booleans in a ref. We then deref the vector on get and alter it on set. This is deliberately set up such that any transaction handling (i.e. dosync) occurs outside of the implementation details, to account for the scenario discussed above. Of course, whether refs are really best suited for handling any concurrency situation will ultimately depend on the application-specific context and concurrency requirements.
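A short usage sketch, to show that the transaction boundary is indeed in the hands of the caller (sizes and positions made up):

(def bv (make-bloom-vector 64))

(dosync
 (bloom-bit-set bv 3 true)
 (bloom-bit-set bv 42 true))

(bloom-bit-get bv 3)  ; => true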

I will leave it at that. This time, we have seen quite a few more features that are pretty much unique to Clojure: explicit protocols, reification of instances and references, i.e. using Clojure’s STM. There was also quite a bit of discussion of some consequences of Clojure’s decision to leverage the JVM, some bad (overflows), some nice (plugging in a readily available data structure). Overall, this kata was quite an interesting exercise.


Coding katas Clojure -- Data munging

Kata4 is concerned with data munging: basically reading some file, pulling out some data and comparing the data in order to determine some result. Actually, it is even simpler than that, as in both cases we are asked to determine the minimum of the data set. Given a weather task and a soccer task, we are asked to fuse the resulting solutions, extract the commonalities and minimize both solutions, basically allowing for as much code reuse as possible. Unfortunately, I had not read the opening sentences of the kata closely but had skimmed over the entire description. Hence I missed the call to solve each part separately without reading ahead, and directly started with this idea of code reuse in mind. However, as we will see, I didn’t really notice one aspect of code reuse until I solved the second task (soccer).

The first thing I did was take a closer look at the data file. Probably like everybody else who has had some exposure to Perl, I was initially tempted to approach the data extraction part with regular expressions. But as JWZ said, “Some people, when confronted with a problem, think ‘I know, I’ll use regular expressions.’ Now they have two problems.”, and indeed this is the case here. This is not to say that the particular problem, fetching the date, minimum and maximum temperature from the provided data file, would not be solvable with regular expressions, but rather that the data file is really of a fixed-width nature. Unfortunately, there are some irregular elements, e.g. the missing values in the WxType column or the postfix ‘*’ added to exactly one value in each of the MnT and MxT columns, likely to mark the absolute minimum and maximum temperature of the month, respectively, which causes the data in the corresponding “cell” to hang over into the whitespace prefix area of the next column. The last line is also of interest, as it starts out with “mo” in contrast to the numeric day-of-month values. It also has rational values in some columns (e.g. the temperature columns) where all other lines have integer values. This looks like a computed result line, showing the averages of the respective values for the entire month, and should hence not be part of the computation. And finally, of course, the data file has some completely irrelevant lines, which we need to skip over.

Having come to some idea of how the data file was structured, I came to the conclusion to parse each line using exact positions, e.g. to determine the MnT value by extracting the substring of each line from position 9 to 14. So, the top-down approach is like this: we will open a file, and for each line in it we will try to parse it according to some pattern specification, where the pattern specification gives the positions of some part and a parsing function.

Clojure’s approach to file handling is a little bit surprising for somebody coming from Common Lisp, which has three main concepts you need to grasp: filenames aka pathnames, files and streams. Typically, you open a file with with-open-file, which guarantees that the file will be closed after you leave the block. Clojure’s with-open macro abstracts this idea of safe file handling to the next level in that it is not restricted to files. However, with-open does not simply work with a filename as an argument; you need to pass in a resource which follows the open/close protocol, which a simple filename string does not. That Clojure leverages the Java IO library for this is not surprising, but that it leaks this Java dependency to the users is. I assume this implies that (with-open [rdr (clojure.java.io/reader "/some/filename.txt")] ...) will not work on ClojureCLR (apparently slurp does). read-line is also a false friend for a Common Lisp programmer, as it can only be used for reading a line from the REPL, but not for reading from some stream like in CL. While I appreciate the possibility to call Java methods from Clojure, I prefer using any abstraction that Clojure provides, so I went with line-seq instead of calling .read. Finally, as Clojure does not allow re-assignment, we need a recursive approach with an accumulator to hold intermediate results while looping over the lines. So, the skeleton looks something like this in Clojure:

(defn read-some-file
  "Skeleton for reading some file"
  [filename]
  (with-open [rdr (io/reader filename)]
    (loop [lines (line-seq rdr)
           result nil]
      (if (empty? lines)
        result
        ; todo for the weather task:
        ; compare the result of parsing a line with the current 'best',
        ; i.e. minimum spread value so far, and recur, possibly with different data
        (recur (next lines) result)))))

This is more or less the equivalent of Perl’s -n command-line switch or while (<>) {...} construct.

The comparison mentioned in the code’s comment is actually very easy, and what needs to be put in place for the result is also obvious. But in order to do that, we have to parse the lines first. Following up on the idea of how to parse a line, the relevant function is subs, which returns a substring from start to end. Now, we basically need to parse different substrings for the various data fields, quite often returning just an integer. Again, the simplest way of parsing a string to an integer is provided by a thin layer on top of a Java library. Neither the re-find nor the exception handling would be necessary if we could guarantee that the data would always consist only of correct data; this way we just silently skip over invalid data.

(defn string-to-int
  "Parses a consecutive set of numbers into an integer or return nil"
  [string]
  (try
    (Integer/parseInt (re-find #"\d+" string))
    (catch Exception e nil)))

parse-line is fairly straight-forward: the basic idea is that we have some specification of how a line is structured. This specification is a map of start and end positions plus parsing functions. So for the weather data, the pattern looks like this:

(def day-pattern
  ;this pattern is not complete and could be extended
  (hash-map :day [1 4 #(first-word %)]
            :MxT [5 8 #(string-to-int %)]
            :MnT [9 14 #(string-to-int %)]
            :AvT [15 20 #(string-to-int %)]))

first-word is another small helper function which basically just retrieves the first consecutive non-whitespace characters of a string:

(defn first-word
  "Returns first consecutive non-whitespace chars from string"
  [string]
  (re-find #"\S+" string))

parse-line then just loops over all parts of a pattern, extracts the substrings and calls the parsing function. It recursively conjures up a hash-map with the extracted data or returns nil if some parsing error occurs.

(defn parse-line
  "Parse a line with data in fixed positions using pattern.
Pattern should be a map consisting of a key for the data to return,
a start and end position and a parsing function for each data element.

Returns a map with all extracted data or nil for unparsable lines."
  [line pattern]
  ; loop solution with accumulator for results
  (loop [remkeys (keys pattern) linemap {}]
    (if (empty? remkeys)
      linemap
      (let [key (first remkeys)
            [start end parsefn] (get pattern key)
            value (parsefn (try
                             (subs line start end)
                             (catch Exception e nil)))]
        (if value
          (recur (rest remkeys)
                 (conj linemap
                       (hash-map key value)))
          ; silently skip any parsing errors
          nil)))))

This then allows us to put things together: we only need to compare the difference between MxT and MnT of the current line with the previous smallest temperature spread. We will use destructuring of the parse-day results to retrieve the needed data, sprinkle in some sanity checks, and we’re done.

(defn parse-day
  "Parse a day from a line"
  [line]
  (parse-line line day-pattern))

(defn find-lowest-temperature
  "Return day in weatherfile with the smallest temperature spread"
  [weatherfile]
  (with-open [rdr (io/reader weatherfile)]
    (loop [lines (line-seq rdr) minday 0 minspread 0]
      (if (empty? lines)
        minday
        (let [{mnt :MnT mxt :MxT curday :day} (parse-day (first lines))            
              curspread (when (and mnt mxt) (- mxt mnt))]
          (if (and curday curspread
                   (or (= minspread 0)
                       (< curspread minspread)))
            (recur (next lines) curday curspread)
            (recur (next lines) minday minspread)))))))

When I started working on the second task, solving the soccer issue, I did a simple copy and paste of find-lowest-temperature, added a new pattern for extracting the data and made the small changes needed to adapt to the different fields. I also understood the comparison requirement to mean the absolute difference. This leads to the following functions:

(defn abs 
  "Returns the absolute value of x" 
  [x]
  (if (pos? x) x (- x)))

(def soccer-team-pattern
  ; this pattern is not complete
  (hash-map :pos [1 5 #(first-word %)]
            :team [7 22 #(first-word %)]
            :fval [43 45 #(string-to-int %)]
            :aval [50 52 #(string-to-int %)]))

(defn parse-soccer-team
  "Parse a soccer-team from a line"
  [line]
  (parse-line line soccer-team-pattern))

(defn find-minimum-goal-difference 
  "Return team in soccerfile with the smallest difference in for and against goals"
  [soccerfile]
  (with-open [rdr (io/reader soccerfile)]
    (loop [lines (line-seq rdr) minteam 0 mindiff 0]
      (if (empty? lines)
        minteam
        (let [{aval :aval fval :fval curteam :team} 
              (parse-soccer-team (first lines))            
              curdiff (when (and aval fval) (abs (- fval aval)))]
          (if (and curteam curdiff
                   (or (= mindiff 0)
                       (< curdiff mindiff)))
            (recur (next lines) curteam curdiff)
            (recur (next lines) minteam mindiff)))))))

This, of course, led straight to the insight that it should be simple to extract the slight differences and make them parameters to some find-*-difference function. The following things differ: the parsing pattern, the extraction function for the result value and the function used to compute the difference between values. If you wanted to, you could also make the comparison function configurable.

(defn find-some-difference 
  "Return some result from a data file which has some lowest difference"
  [filename parse-pattern resultkey diffn]
  (with-open [rdr (io/reader filename)]
    (loop [lines (line-seq rdr)
           result nil
           mindiff 0]
      (if (empty? lines)
        result
        (let [data-map (parse-line-map (first lines) parse-pattern)
              curresult (get data-map resultkey)
              curdiff (diffn data-map)]
          (if (and curresult curdiff
                   (or (= mindiff 0)
                       (< curdiff mindiff)))
            (recur (next lines) curresult curdiff)
            (recur (next lines) result mindiff)))))))

(defn find-mingoal-diff-fusion
  "Return team in soccerfile with the smallest goal difference, using the fusion fn."
  [soccerfile]
  (find-some-difference soccerfile soccer-team-pattern :team
                        (fn [{aval :aval fval :fval curteam :team}]
                          (when (and aval fval)
                            (abs (- fval aval))))))

Then there was another itch I wanted to scratch: the parse-line function has some ugliness to it. For starters, it is handling possible exceptions from subs directly. It is also checking return values for nil. Both cases are what Common Lisp would see as conditions rather than real exceptions — it is rather unfortunate that Clojure opted for the simpler, though more traditional, exception concept from Java. To remedy the ugliness of parse-line, we can simply replace the direct call to subs with a small handcrafted wrapper which handles any exceptions, and also change the behavior of parse-line to simply return nil for all unparsable elements. But there is more that makes parse-line ugly: I dislike the recursive nature of the solution and the linear result handover in the let declaration (well, this handover was intentional, to not have a functional train-wreck of calls). I wanted to see whether I couldn’t come up with a more elegant map/reduce solution. Here you go:

(defn substring
  "Returns the substring of string from start to end, or an empty string
if the positions are out of range"
  [string start end]
  (try
    (subs string start end)
    (catch Exception e "")))

(defn parse-line-reduce
  "Parse a line with data in fixed positions using pattern.
Pattern should be a map consisting of a key for the data to return,
a start and end position and a parsing function for each data element.

Returns a map with all extracted data, which may be empty."
  [line pattern]
  ; map-reduce version
  (reduce #(conj %1 %2)
          (concat [{}]
                  (map
                   (fn [[key [start end parsefn]]]
                     {key (parsefn (substring line start end))})
                   (seq pattern)))))

We are simply mapping over the entire pattern and use argument destructuring again to extract the relevant parts of it, but this time, due to the call to seq, a pattern part will be a sequence, not a map. The map call with the anonymous function will produce a sequence of hash-maps with key and parsing results, which then gets reduced into a single map. In order to use reduce, you have to provide a function taking two arguments: the first will consume the intermediate result, the second will be the next value of the sequence to reduce. This is the reason why we have this ugly concat [{}] ... in front of the call to map: we need to provide the initial value for reduce, which in this case is an empty hash-map. An even more concise version replaces the call to reduce with into, resulting in a version which looks pretty idiomatic to me and is also way easier to understand than the lengthy recursive version above.

(defn parse-line-map
  "Parse a line with data in fixed positions using pattern.
Pattern should be a map consisting of a key for the data to return,
a start and end position and a parsing function for each data element.

Returns a map with all extracted data, which may be empty."
  [line pattern]
  (into {}
        (map
         (fn [[key [start end parsefn]]]
           {key (parsefn (substring line start end))})
         (seq pattern))))
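To see parse-line-map in action, here is a quick usage sketch with a made-up line whose fixed-width positions happen to match day-pattern (the REPL prompt is illustrative, and the key order of the printed map may differ):

kata4-data-munging.core> (parse-line-map "  1   88  59    74  " day-pattern)
{:day "1", :MxT 88, :MnT 59, :AvT 74}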

Of course, we can use a similar approach for the find-*-difference function. We could also take a slightly different approach and sort the parsing results and then treat the minimal value as the result. If we combine this with slurp the resulting code also becomes way more compact. In order to filter out only partially parseable lines, we need to supply a list of keys that the parsing result must have values for.

(defn sort-diff-map
  "Return the parsed lines of a data file, sorted by the difference computed by diffn"
  [filename parse-pattern desiredkeys diffn]
  ; map-filter version
  (sort-by diffn
           (filter #(every? (partial get %) desiredkeys)
                   (map #(parse-line-map % parse-pattern)
                        (string/split-lines (slurp filename))))))


(defn find-mingoal-map
  "Return team in soccerfile with the smallest goal difference, using the sort-map fn."
  [soccer-file]
  (get (first (sort-diff-map soccer-file soccer-team-pattern
                             [:team :fval :aval]
                             (fn [{aval :aval fval :fval curteam :team}]
                               (when (and aval fval)
                                 (abs (- fval aval))))))
       :team))

Summing up, what have we seen during this kata? Clojure’s platform-dependent approach to reading files, usage of regular expressions, destructuring (again), anonymous functions (again) and map/reduce. Overall not very exciting. I know it’s not a fair comparison, but I would always opt for solving such tasks with Perl, especially when they are as trivial as in this case, where you can basically solve each task with a one-liner. There is room for more elaborate languages (e.g. Python, Ruby, Clojure) when parsing and processing become so elaborate that it makes sense to have more structure in the code. But for a task of the size of this kata, the amount of code required is usually not worth the effort.


Coding katas Clojure -- introduction and overview

I wanted to learn Clojure for a while but never made much progress beyond the point where I had a running slime connection to Clojure and a simple initial leiningen project file for a web project with ring. I just got distracted by other stuff, but more importantly, I had no goal for a web project, so I already failed at thinking about what I would build. Consequently, I stopped reading Fogus and Houser’s nice book Joy of Clojure. After quite a while (close to a year), I stumbled upon some posts reminding me of the code katas on the pragmatic programmers web site (cf. code katas). Somehow this triggered the idea that doing these katas might finally be a good way to learn Clojure.

In the linked blog posts I’ll describe the various solutions I worked out. This post here will serve as the main overview and get updates as I move along.

The code (as well as the original markdown for these articles) can be found on github.

Kata 14 concludes my little journey into the original katas. It took far too much time to write these up, as I didn’t have enough time to work on them continuously. Besides, many of the katas I’ve left out seemed rather uninteresting, and some are thought experiments.


Goto conference Zürich 2013

Last week, I visited this year’s Goto Zürich 2013, a two-day conference with tutorials wrapped around it. Topic areas range from lean startup over technology to a leaders track. I spent most of my first day on the so-called leaders track, as most talks there revolved around adopting agile and lean methods, but I also went to some more technical talks in other tracks. I’ll summarize most of the talks and my impressions below. Btw., the slides to most of the talks can be found on the conference website, more precisely on the schedule overview for the two days.

The conference started out with a keynote by Scott Ambler on his work on Disciplined agile delivery, which provides a framework for where decisions are needed when implementing agile methods in a larger setup, drawing from many different methods like Scrum, XP, Kanban, the Scaled Agile Framework and many others. The first part of his talk was mainly concerned with the question whether agile holds up to its promise of delivering ‘better’, which he tries to answer with regular surveys. These seem to confirm that team size, location, complexity and methods used do have an impact. Not surprisingly, smaller teams are more successful than larger ones, co-located teams more than distributed ones, and simple projects are much more likely to succeed than complex ones. And of course, there are still projects using waterfall-like approaches that succeed, while the difference between an iterative and an agile approach is rather minimal. Still, the number of project failures is always high; even for simple projects there are more failures than one might expect.

The first talk here was about Spinning by Ralph Westphal, who is well known e.g. for his association with the Clean Code Developer initiative. His premise was that today’s business reality is a constant change of priorities, which is in conflict with assuming bigger time boxes during which developers can focus on some goal. His answer is that we should ‘seize the day’ (as Dan North might have put it) and deliver ‘value’ daily. It’s important that this value should be something worthwhile to the customer, and the customer should be able to give feedback daily as well. I’m wondering whether it is really always possible to accomplish this, e.g. fixing a bug might require analysis well beyond a day. But even if you can’t always live up to the idea, it might still be a worthwhile guideline for organizing work. Another question to which I don’t have an obvious answer is who should be in charge of deciding what should be worked on on any given day. Ralph called for a thorough triage to be carried out to avoid wasting time, but it’s unclear whether this job belongs e.g. in the hands of the product owner.

Dominik Maximi spoke next about ‘Hostile waters’, i.e. how company culture might influence your chances of and approaches to introducing agile in an organization. He made the important point that every company culture exists because it is (or was in the past) successful to work like that. This needs to be respected when you want to change something fundamentally. He then gave a nice overview of the Schneider model for classifying company culture. Non-representative survey results indicate that ‘agile’ has similar characteristics to a ‘collaborative’ or ‘cultivation’ culture, but doesn’t fit in so nicely with a ‘competence’ or ‘control’ culture (no surprises here). Changing the mindset of a company might take up to 7-10 years. Dominik finally discussed John P. Kotter’s work on the change steps needed to implement agile.


Coding katas Clojure -- Karate chop / binary search

The first real programming task on CodeKata is Kata 2, Karate Chop. Or as the introduction says:

A binary chop (sometimes called the more prosaic binary search) finds the position of value in a sorted array of values.

I started out with the simplest approach I could think of: there just might be a function readily available in Clojure which solves the problem. After all, Common Lisp has position, whereas Pythonistas would use the index method on lists. I couldn’t really find a function on Clojure sequences which would immediately take care of the issue, but keep-indexed seems close enough — taking a function and a collection, it calls the function, which ought to take an index and a value, and keeps the function’s non-nil return values. This led to the following code:

(defn chop [x coll]
    (let [result
            (keep-indexed
               (fn [idx item]
                 (when (= item x) idx))
               coll)]
        (if (empty? result)
            -1
          (first result))))

Some points may be of interest here:

  • Looking over some code, I ran into the usage of %1 etc. to refer to implicit arguments, which I didn’t know about when I started out. This version screams ‘Common Lisp’ pretty much all over.
  • I also ran into the issue that an empty sequence is not false, of course, and had to look up empty? as well.
  • Using when instead of if when you only care about the true case is an idiom I know and love from CL. The if part at the end is still ugly. We could get rid of it by relying on first on an empty sequence to return nil and using this in a boolean comparison.

This leads to the following much shorter version:

(defn chop [x coll]
    (let [result (keep-indexed #(when (= %2 x) %1) coll)]
        (or (first result) -1)))
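A quick check at the REPL (the namespace prompt is illustrative):

kata2-karate-chop.core> (chop 3 [1 3 5 7])
1
kata2-karate-chop.core> (chop 42 [1 3 5 7])
-1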

The next idea is to use a multi-method approach, dispatching on values, possibly empty ones, and on type, of course. This is an approach which I think should be possible with CLOS, but is quite outside of mainstream object-oriented languages like Java or Python. We could combine this with a recursive approach. One base case of the recursion would be the empty collection, of course, with the other one being finding the searched value, returning the current index in the sequence, which we have to carry around (straightforward recursion). ClojureDocs example 683 has a nice blueprint (http://clojuredocs.org/clojure_core/clojure.core/defmulti#example_683). The result has a nice declarative touch to it, which reminds me of my old Prolog days:
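A minimal sketch of how such a multimethod solution might look, dispatching on whether the collection is empty and carrying the index along (my reconstruction of the idea described above, not necessarily the code from the repository):

(defmulti chop-rec
  ;; dispatch on the base case: empty collection vs. more elements
  (fn [x coll idx]
    (if (empty? coll) ::empty ::more)))

(defmethod chop-rec ::empty [_ _ _]
  -1)

(defmethod chop-rec ::more [x coll idx]
  (if (= (first coll) x)
    idx
    (chop-rec x (rest coll) (inc idx))))

(defn chop [x coll]
  (chop-rec x coll 0))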


Coding katas Clojure -- setup

While not a kata itself, setting up an environment in which it’s possible to do the programming for the katas is a task that needs to be fulfilled anyway. I hence see it as some sort of separate kata, to familiarize oneself with various development environments.

The task is to set up a Clojure development environment on Windows, using a portable-apps approach. In more detail, these are the exact requirements:

  • All portable applications are available on drive J: (a USB stick in my case).
  • We’ll need git, emacs and clojure of course.
  • We’ll also need leiningen at some later stage.
  • Configuration of the application will be stored on the external drive (e.g. J:) as well where possible.
  • We’ll use nrepl instead of slime/swank. I never got the latter set up to work correctly as portable apps.

This describes the setup I’m currently using. First of all, the downloads. For the stuff that has repositories on github, I would suggest using git clone <repositoryURI> (after you’ve installed git in the first place, of course).

  • Emacs (24.2 at the time of writing) can be downloaded from the FSF Emacs server
  • git (1.7.6 at the time of writing) can be downloaded from msys
  • leiningen requires wget, which can be installed e.g. from here. Another option would be to install MinGW with git and wget, cf. MinGW
  • clojure (1.5.0 at the time of writing) can be downloaded from Clojure.org, of course.
  • leiningen version 2 can be downloaded from the leiningen github repo
  • clojure-mode version 2 can be installed via Marmalade/ELPA or manually from its github repository
  • nrepl.el can also be installed via Marmalade/ELPA or manually from its github repository
  • I’ll throw in magit for smooth Emacs interaction with git, to be fetched via Marmalade/ELPA or manually from magit’s repository

I’ll use the following directory layout: All applications are stored under J:\progs\, e.g. Emacs 24.2 will end up as J:\progs\emacs-24.2\. I put the clojure.jar into J:\progs\clojure\ and will put lein.bat along with it into the clojure directory. The following shows the resulting directory as shown by dired:

  J:
  insgesamt 5124
  drwx------  3 schauer schauer   16384 Nov  1  2011 emacs
  drwx------ 12 schauer schauer   16384 Okt 16  2012 progs

  J:\progs:
  drwx------  8 schauer schauer     16384 Nov  1  2011 clojure
  drwx------  8 schauer schauer     16384 Okt  7  2012 emacs-24.2
  drwx------ 11 schauer schauer     16384 Nov  1  2011 git
  drwx------  8 schauer schauer     16384 Nov  1  2011 wget

My Emacs configuration resides in a separate directory on J:, namely in J:\emacs\. As I already have quite a lot of Emacs configuration, I’m going to put all configuration options into separate files, which are placed in J:\emacs\elisp\config\. Code from other people will go into separate directories as well, with J:\emacs\elisp\others\ as the top-level folder. clojure-mode hence goes to J:\emacs\elisp\others\clojure-mode. nrepl.el is a mode but comes as a single file and goes straight into J:\emacs\elisp\others\. During startup, Emacs looks for default.el or site-start.el for personal or site-wide configuration. Both files can be placed in the site-lisp directory, i.e. in J:\progs\emacs-24.2\site-lisp\

  J:\emacs:
  drwx------  6 schauer schauer   16384 Nov  1  2011 elisp

  J:\emacs\elisp:
  drwx------ 3 schauer schauer 16384 Nov  1  2011 config
  drwx------ 4 schauer schauer 16384 Nov  1  2011 development
  drwx------ 6 schauer schauer 16384 Nov  1  2011 others

Next we need to adapt the load-path, i.e. where Emacs looks for libraries. This means we need to put some content into J:\progs\emacs-24.2\site-lisp\default.el that takes care of figuring out the drive letter and sets the paths correctly:

(defun get-drive-from-filename (filename)
  "Returns a windows drive letter if filename contains a drive letter."
  (if (string-match "^\\(.:\\)/" filename)
      (match-string 1 filename)))

(defun get-drive-for-emacspath ()
  "Returns windows drive letter for the drive emacs can be found on."
  (get-drive-from-filename (getenv "EMACSPATH")))

(let ((emacsdrive (get-drive-for-emacspath))
       loadpath-additions)
  (dolist (dirname
       '("/emacs/elisp/"
         "/emacs/elisp/config/" 
         "/emacs/elisp/others/"
         "/emacs/elisp/others/clojure-mode/"))
    (setq loadpath-additions
      (cons (concat emacsdrive dirname) loadpath-additions)))
  (setq load-path
    (append loadpath-additions load-path)))

(require 'nrepl)        
(require 'clojure-mode)
(setq clojure-mode-inf-lisp-command 
      (concat (get-drive-for-emacspath)
           "/progs/clojure/lein.bat repl"))


(require 'magit)
(setq magit-git-executable
      (concat (get-drive-for-emacspath)
           "/progs/git/bin/git"))

The next step is to install leiningen. There are two ways: either downloading lein.bat and running it from cmd or downloading lein, the shell script and running it via the git bash prompt. I chose the latter. You will probably need to adjust your path to where you put the lein shell script, e.g. (bash syntax):

export PATH=$PATH:/j/progs/clojure/

To install leiningen locally (i.e. not in your %HOME%), you have to set the LEIN_HOME environment variable, i.e. like this (bash syntax):

export LEIN_HOME=/j/progs/clojure

Remember to always set this variable afterwards before running leiningen commands. Point your classpath to where you installed clojure:

export CLASSPATH=/j/progs/clojure/clojure-1.5.0/clojure-1.5.0

If you don’t want to set all these variables all the time, you can put them either in a .profile file in your %HOME% or in the global profile file that comes with git which resides in /j/progs/git/etc/. I added the following lines:

CLOJUREPATH=/j/progs/clojure 
if test -x $CLOJUREPATH
then 
     export PATH=$PATH:$CLOJUREPATH
     export LEIN_HOME=$CLOJUREPATH
     export CLASSPATH=$CLOJUREPATH/clojure-1.5.0/clojure-1.5.0
else
     echo "Can not access /j/progs/clojure"
     exit 1
fi

To figure out how to get rid of the hardcoded drive letter in bash is left as an exercise to the reader.

If you also want to keep the files / jars which leiningen retrieves in a local, non-standard maven repository, you need to set a variable in your $LEIN_HOME/profiles.clj file, like this:

{:user {:local-repo "j://progs/clojure/.m2/"
        :repositories  {"local" {:url "file://j/progs/clojure/.m2"
                                  :releases {:checksum :ignore}}}
        :plugins [[lein-localrepo "0.5.2"]]}}

Then run lein self-install. Afterwards, a lein repl should give you a Clojure read-eval-print-loop.

Now if you want to use nrepl and would like to use the support for nrepl/inferior-lisp which comes with clojure-mode you need to add a corresponding dependency to your project.clj for each project, cf. nrepl installation

