Make it so! Decision making in software architecture

In discussions with developers, I find it interesting how many have a poor picture of the job of a software architect. Thinking back to my first encounter with a software architect, who performed his document-reviewing job in a dedicated architecture department, I can totally relate, but then again it is 2021 and things should have changed [1]. After all, most developers care a great deal that the architecture of the system they are working on is good, but some developers still do not seem to be aware how much their work impacts the architecture (and vice versa). “Every interesting software-intensive system has an architecture. While some of these architectures are intentional, most appear to be accidental”, as stated in Grady Booch’s article on “Accidental Architecture”. Some people now seem to believe that accidental architecture is the new norm, as “the best architectures, requirements, and designs emerge from self-organizing teams” (from the principles behind the Agile Manifesto). In contrast, in organizations which have dedicated software architects, some people seem to think that it is the architect’s responsibility to ensure that there is a well-informed design and to make the hard decisions as necessary. These hard decisions would probably be about “the important stuff (whatever that is)” that is called out in Martin Fowler’s article “Who needs an architect?”.

But what does that mean for the daily work? Should developers send all questions to the architect for her to decide, and should architects sit in their ivory tower pondering them and ordering decisions by importance? Not in my view of the world. I’m a big fan of the role model that Fowler describes as Architectus Oryzus, or the primus inter pares from Gregor Hohpe’s article “Agile and Architecture: Friend, not Foe”: an architect who works closely with the developers, tries to identify important topics, and makes sure they are addressed at the right time.

A Design Thinking workshop. Image by Jose Berengueres, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons.

Stefan Toth has a fantastic (German-only) book on agile processes for software architecture, Vorgehensmuster für Softwarearchitektur, which describes patterns for making this happen in daily practice. The core point of the book is that architecture should be a shared responsibility: everybody should try to raise and address important concerns — whether there is a dedicated architect role or not is a separate issue. Toth lists patterns like identifying quality scenarios, listing architecture topics in the backlog, ad-hoc architecture meetings, common decision making and testing for architectural concerns (and many more), many of which are very team-centric. What makes the book so great is that it shows how easy it is to make architecture work a natural part of the development workflow rather than some high-ceremony activity.

In my experience, technical discussions often touch architectural topics anyway, so you might not even need to bring people together for formal architecture meetings — nevertheless, technical design review meetings can be beneficial if they involve people who can and do share new perspectives but would otherwise not work so closely with the team. Similarly, if your team already adds technical debt and other non-functional items to the backlog, you are probably already following two of the recommended practices.

The role of the architect is then not so much that of a decision maker, but rather that of the person taking responsibility that architecture aspects are considered by the team. In bigger organizations this implies that the architect should work closely with the development team, but not only with them. While ideally the developers are also collaborating closely with all other important stakeholders (product owner, devops / operations, other dev teams), it is vital for an architect to understand the different (and potentially conflicting) quality needs of these parties. And again, rather than e.g. holding 1:1 meetings between architect and security officer and then relaying the resulting constraints to the development team, it is probably even better if the architect makes sure that a security expert is involved in the design and requirements discussions directly.

Hippo with its tongue stuck out: the highest paid person’s opinion. Image via Pixabay.

However, the job of an architect does not only consist of organizing meetings with the right people. In the end, intentional architecture needs decisions. “Common decision making”, as listed above, means two things to me: first, people who are affected by a decision (e.g. the developer who has to implement the result) should be directly involved in the decision-making process. In teams that do not have a dedicated architect this naturally seems to be the case, but it is worth pointing out that it can easily degrade into single-programmer decisions. The other anti-pattern here would be that the architect, likely the highest paid person in the room, makes a HiPPO decision or specifies a design while not being close enough to the problem and / or the existing code base. [2] The effect can be devastating if the decision turns out to be wrong: not only will people think that the architect did not listen to the team, but also that the architect might lack the necessary competence. This can result in serious psychological-safety and trust issues. A good working approach to avoid such problems is running small experiments (spikes) to validate options.

The second, even more important aspect is that a decision is made at all. The idea of deciding at the last responsible moment notwithstanding, I find it surprising how hard it can be for organizations / groups of people to make decisions. It is important for an architect to be able to guide a group towards making a decision, but also to be comfortable making a decision themselves if the group cannot reach an agreement. The biggest issue here are the trade-offs of potential solutions: e.g., one solution might have benefit A (e.g. scalability) while being weaker in B (e.g. ease of use), while another solution has the opposite characteristics. Often different people will have different opinions on whether A or B is more important in the selection process. It is very helpful for teams to come to an understanding (not an agreement) about the different weights that people give to these characteristics; then, hopefully, the architect can explain why she thinks that one argument is more important than the other. Coming to a decision is more important than convincing everybody, though: aim for solutions that everybody in the team can live with, even if they would still prefer a different one. Also, it is a good idea to document decisions in a light-weight fashion (e.g. in a small Architecture Decision Record).
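To make that concrete, here is a minimal sketch of what such a record could look like, loosely following Michael Nygard’s well-known ADR template (the decision and its wording are made up for illustration):

    # ADR 12: Use the existing message broker for order events

    ## Status
    Accepted

    ## Context
    Order events need to reach billing. Scalability favors asynchronous
    messaging, ease of use favors simple REST polling; the team weighted
    scalability higher for this integration.

    ## Decision
    We publish order events to the existing message broker.

    ## Consequences
    Consumers must tolerate duplicate deliveries; local development
    setups need a broker instance.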

If the developers run into obstacles during implementation, it is quite likely that they will figure out a different solution than the one that was decided on. This is another case where the “architect as lonely decision maker” can lead to a lot of unnecessary friction. In practice, choosing a different approach is not a problem as long as it is handled in a similarly responsible way, where the developers take a look at the implications and trade-offs of their choice and involve others (and, please, update that ADR).

Footnotes:

[1] I am obviously not talking about so-called Enterprise Architects, who are mostly concerned with governing how the many systems at a company interact to provide the most business value. Instead, the focus here is on software architecture for a single system (which, of course, is likely to interact with many others).

[2] Sometimes the opposite scenario might be true: the team consists mostly of people who do not have enough expertise, whereas the architect has long experience with the code base. In this situation, the architect can help the team by sketching a design approach and sharing his knowledge. Still, there should be room for questions and further ideas from the team.


File download with ClojureScript

As I couldn’t find a recipe on how to provide some data from a ClojureScript application for download, here’s how. If you already know how to do this in JavaScript and have done any CLJS-JavaScript interop, there’s nothing new for you to learn here, as this is a pretty straight-forward translation of how to use the Blob API and click a temporary link in JavaScript:

;; assumes (:require [cljs.pprint :as pp]) in the ns declaration
(defn file-blob [datamap mimetype]
  ;; the Blob options need to be a JS object, hence #js
  (js/Blob. [(with-out-str (pp/pprint datamap))] #js {:type mimetype}))

(defn link-for-blob [blob filename]
  ;; set! needs the property-access form, so we can't thread it through doto
  (let [link (.createElement js/document "a")]
    (set! (.-download link) filename)
    (set! (.-href link) (.createObjectURL js/URL blob))
    link))

(defn click-and-remove-link [link]
  (let [click-remove-callback
        (fn []
          (.dispatchEvent link (js/MouseEvent. "click"))
          (.removeChild (.-body js/document) link))]
    (.requestAnimationFrame js/window click-remove-callback)))

(defn add-link [link]
  (.appendChild (.-body js/document) link))

(defn download-data [data filename mimetype]
  (-> data
      (file-blob mimetype)
      (link-for-blob filename)
      add-link
      click-and-remove-link))

(defn export-data []
  (download-data (:data @some-state) "exported-data.txt" "text/plain"))

I’ve tried to break it down into pretty self-explanatory pieces, but here is a bit of explanation: export-data would be used as an on-click handler on some UI element and would be expected to gather the data for export in some way. Here, we’re just assuming the data is already stored in some state atom and is a map. file-blob pretty-prints the data, declares the content to be of the given MIME type (e.g. text/plain) and returns the newly created blob. Of course, you might want to change the pretty-printing or the MIME type, depending on your data.

We’re not doing anything fancy with the created Blob object: we simply hand it over to URL.createObjectURL and use the result as the href attribute of a newly created anchor element. Setting the download attribute tells the browser not to navigate to the URL but to download it instead. After this we simply add the new link to the DOM and trigger the download by dispatching a MouseEvent on said link. The browser will then open a file save dialog with the suggested name, so the only thing left to do is to remove the link from the DOM.
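One detail the code above glosses over: the object URL created for the blob keeps the data referenced until the page is unloaded. If that matters in your application, you can release the URL explicitly once the download has been triggered, e.g. at the end of the callback in click-and-remove-link; a small sketch (the function name is made up):

(defn revoke-blob-url [link]
  ;; releases the object URL so the blob can be garbage-collected
  (.revokeObjectURL js/URL (.-href link)))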


Fun with function signatures

In a blog post on dependency inversion in Clojure I discussed what this DI principle is actually about and the solutions Clojure offers to support it. One aspect bugged me a little: for me, a fundamental challenge with DI in a language like Clojure is that you often have simple functions depending on simple other functions (simple here in contrast to protocols). In the article linked above I discussed one way of resolving function dependencies by using an indirection over a service locator. In practice, however, writing service locator lookups by hand soon got tiresome. So instead I decided to have some fun throwing together some macros that handle function signatures, which resulted in a small library.

Enter funsig

funsig shoots lower than Clojure protocols: it provides dependency management on a per-function level. What this means is simply that you can define a function signature with defsig and then provide implementations with defimpl. Implementations will depend on the signature. Let’s say we have some application code that depends on a printer function:

(ns my.onion)

(defn printer [string]
    (println string))

(defn print-account-multiplied [account multiplier]
    (let [result (* account multiplier)]
        (printer result)))

One might want more flexibility on how and where to print. In other words, one might want the application code (print-account-multiplied) to depend on an abstraction (printer) only and not on the concrete implementation as in this example.

Funsig allows you to invert the dependency on the printer implementation. You would define the signature with defsig and have the application code depend on the signature like this:

(ns my.onion
    (:require [de.find-method.funsig :as di :refer [defsig defimpl]]))

(defsig printer [string])

(defn print-account-multiplied [account multiplier]
    (let [result (* account multiplier)]
        (printer result)))

You can then provide the implementation with defimpl:

(ns my.onion.simple-printer
    (:require [de.find-method.funsig :as di :refer [defimpl]]
              [my.onion :as mo :refer [printer]]))

(defimpl printer [string]
    (println string))

Note that the implementation has a dependency on the signature, not the other way around. Also, application code (print-account-multiplied) simply depends on the signature — here the signature is in the same file, but referring to the var in another namespace (i.e. using require / :refer) works as usual. For application code, this looks like dependency injection.
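With both namespaces loaded, calling the application code then just works; assuming funsig’s default setup, in which the locator resolves the single registered implementation (see the README for the details), a REPL session would look like this:

my.onion> (print-account-multiplied 120 2)
240
nil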

funsig will also allow you to have multiple implementations and then select the one you want.

Have fun!

You’ll find all the gory details in the README and / or in the intro document. The latter also explains the relationship to the service locator pattern mentioned above.

Feedback welcome!


Technical debt equals missed quality requirements

Talking about technical debt assumes that you have an existing system and that you already know the areas where the duct tape is wearing thin. Let’s discuss the conflict around technical debt a little. It’s basically always the same battle: the people maintaining the system usually have good reasons why they want to pay back some of the accumulated technical debt, whereas the product owner believes more functionality is more important.

This is a typical clash of interests between people from different tribes and usually from different departments: the developers report to a technical manager, the product owner to a business guy. The IT manager is typically under pressure to minimize costs, in particular costs for support and maintenance, so he’s interested in paying back technical debt during projects. The business manager, however, is under pressure to win new clients with new features. So it doesn’t come as a surprise that technical and non-technical people prioritize technical debt and new features differently: naturally, to the business side this is all about technical issues which they don’t consider to be their concern. It’s only natural that they believe such technical issues should be solved by the technical folks “somehow”. After all, it was these guys who built the system with all these problems in the first place, and now they want to spend more time and money on it? Although I may have exaggerated a little and this line of argument is way too simplistic (given that debt often accumulates over time), it’s still quite popular. Such product owners seem to believe that they are only responsible for developing new functionality. Corporate culture can contribute to this: some companies have a culture where every new glitter is taken for a star and gets way more attention than the cash-cow functionality that keeps existing clients happy.

So some product owners will see technical debt as a separate issue which needs to go on a maintenance budget, a budget that somebody else is responsible for, e.g. the IT department. The problem with that idea is that in a culture where building new features is the only thing that counts, typically no one ever gets around to cleaning up the technical debt. All that will happen is that some people will fix bugs with this maintenance budget. So you don’t actually ever spend the money on technical debt, only on the results of it, which of course doesn’t address the root of the technical issues. Also, if you finally do get approval for a refactoring project, these are really the most boring and horrible types of projects to work on. So you’re not likely to see a lot of happy, motivated people on such projects — and sure enough, the most competent people are probably assigned to more valuable projects anyway. As a result, and because there is no business pressure behind such refactoring projects, as soon as something else comes up, these initiatives are abandoned or at least downsized to a minimum. Peter Seibel describes in a recent, nicely written article the need for an Engineering Effectiveness team at Twitter, in which he lays out in some detail how hard it is to really deal with technical debt.

Let’s drill down on technical debt from the point of view of requirements. Wait a minute, this doesn’t sound right — how could technical debt be a requirement? Obviously, it isn’t, but there is a close relation. Technical debt comes in different flavors, e.g. “you really should install a second machine and a load balancer so you’re prepared for failure”, “we really need to rewrite module foobar.clx to finally get rid of all the spaghetti code that’s slowing us down”, or “oh, we really need to support responsive design, we’re getting more mobile users every day”. Now, if you take a look at these three simple examples, they are all related to quality requirements: the first is about availability, the second about maintainability and the last about usability. Take a look at the ISO 25000 standard for software product quality, which lists all the quality attributes one might want to consider. Technical debt is always tied to some quality that your system should offer, but doesn’t.

A reason for this is that quality attributes are often not made explicit during requirements gathering, regardless of whether you’re following an agile approach or not. The less technical the product owner, the more often they seem to assume that performance, scalability, security and the other -ilities will come out just right by magic. Technicians, on the other hand, know perfectly well that they will not. It’s worse if they don’t know that: they will build something that might fulfill the functional requirements but not the non-functional ones. “Works on my machine” might be fine for a naive developer, but nobody will be happy if it takes 5 minutes to load the page under the gazillions of parallel requests seen in production. There is a reason why software architecture is mostly concerned with quality: if you don’t plan and build for scalability, it’s unlikely you’ll end up with a highly performant system. Take a look at the picture: this bridge wasn’t planned to cope with the amount of traffic it sees nowadays; some time in the 80s somebody decided it would be okay to have three lanes instead of two and didn’t think through the long-term consequences.

Rheinbrücke A40 Duisburg-Neuenkamp
The bridge of the highway A40 over the Rhine in Duisburg. It is so badly damaged that trucks are no longer allowed to cross it.
Picture by kaʁstn Disk/Cat, stitched by Daniel Schwen, licensed under CC BY-SA 3.0 de, via [Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Rheinbruecke_Neuenkamp_pano.jpg)

Of course, technical debt can also accumulate over time, as one might rightly point out: sure, one machine was enough for the requirements at the start, but nobody followed up on the increase in users, and nobody cared (or had time) to clean up the code in module foobar.clx back then (nor in the following six years of quick bug fixes and minor one-off patches that have grown on it like leeches). This is a sign that nobody actively kept an eye on how the world in and around the system changed, and that nobody took action early on. For code quality, Uncle Bob points to the Boy Scout rule, which says that you should always leave the code cleaner than you found it (regardless of who messed it up) — another way of easing maintainability by paying back a little technical debt every day.

The overarching point here is that you, as a technician, and you, too, as a product owner, need to think about the quality requirements of your system, and to do so over the entire life cycle. What was a fitting solution at one time might turn into technical debt over time. This means your system no longer holds up to the quality requirements you and your clients need, and no matter what caused it, you’re better off fixing it now than accumulating even more technical debt.

Isaac Sacolick describes in his article on How to get an agile product owner to pay for technical debt how to address the common problem of technical debt head on, mostly from a manager’s perspective, via process and by making people (mostly the technical lead) responsible. However, I think it makes a lot more sense to try to ensure a common understanding between the developers and the product owner. One way to go about this is to use quality scenarios: as a member of the dev team, ask your product owner how many users the system needs to handle. Ask her if she thinks it’s okay if somebody else finds a security issue before you do. Or whether she thinks it’s okay for a seemingly small change to take three weeks because the code is an impenetrable mess that nobody understands since Dieter left. These sorts of questions hopefully open up discussions based on business value, ROI and cost of delay. You are then using the language product owners understand, and you might also learn something along the way (no, we only expect so many users to do X in the future, so investing in module Z doesn’t make sense).
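Such quality scenarios don’t need to be heavyweight; a one-liner in the usual stimulus / environment / response form is often enough (the numbers here are invented for illustration):

    Stimulus:    500 users submit orders within one minute
    Environment: normal operation during the Christmas campaign
    Response:    every order is confirmed within 2 seconds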

Now, granted, that doesn’t always work. An old colleague of mine always denounced the old system he worked on as being exactly the way it was ordered. Isaac’s article might offer some advice here, of which the most important bit is probably that paying back technical debt is way easier if you have a CIO or someone with similar power supporting you.


In-memory database fixtures with Clojure and sqlite

For a recent project, I needed to extract data from a sqlite3 database. Writing the Clojure code to retrieve data was very straight-forward with clojure.java.jdbc and java-jdbc/dsl. Naturally, I wanted to have some tests for this code as well. In a previous Python project, I had a lot of fun using sqlite’s in-memory feature to run very speedy database tests, so of course I wanted this for my current Clojure project, too. This turned out not to be as easy as I had expected, though, so I’m documenting it here for the next naive soul. My initial attempt with clojure.java.jdbc, java-jdbc/dsl and midje looked basically like this:

    (def testdbspec
      {:subprotocol "sqlite"
       :subname ":memory:"})

    (defn make-bookmark-table []
      (jdbc/with-transaction [db testdbspec]
         (jdbc/db-do-prepared db
           (ddl/create-table :bookmarks
                      [:id :int :primary :key]
                      [:type :int]
                      [:title "longvarchar"]))))

    (defn add-bookmark []
      (jdbc/with-transaction [db testdbspec]
         (jdbc/db-do-prepared db
           (str 
              "INSERT INTO bookmarks (id, type, title) "
              "VALUES ('12453', '2', 'a bookmark')"))))

    (defn setup-database []
       (make-bookmark-table)
       (add-bookmark))

    (facts "Testing database access to bookmarks"
       (with-state-changes [(before :facts (setup-database))]
            (fact "We can retrieve a list of bookmarks"
                (fetch-tags :dataspec testdbspec) => [{:title "a bookmark"}])))

This will fail quite early, because basically as soon as the with-transaction in make-bookmark-table has finished its work, the connection to the database will be closed. As a result, when the next with-transaction or jdbc/query is run, you’ll connect to a fresh in-memory database which doesn’t have the tables you just created. My old Python test code didn’t have this problem, because the setUp method of the TestCase would create the database connection (via sqlalchemy’s create_engine) and keep it alive until the TestCase’s tearDown method ran.

I tried giving back the database connection from make-bookmark-table, but this just results in a “connection closed” error. Unfortunately, clojure.java.jdbc doesn’t support opening and closing the connection yourself. Sure, you can use get-connection, but you can’t feed this into either with-transaction or query. query uses with-open internally, which will conveniently close the connection for you. In a post on the perils of dynamic scope, Stuart Sierra calls this the Dynamically-Scoped Singleton Resource and files it under ‘anti-pattern’. I got bitten by pretty much exactly what Stuart describes: when dealing with sqlite’s in-memory feature, we would like to manage the connection ourselves, but we can’t.
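Just to illustrate what managing the connection ourselves would look like if we bypassed clojure.java.jdbc entirely, here is a plain JDBC interop sketch (assuming the sqlite-jdbc driver is on the classpath):

    ;; hold on to a single connection so the in-memory database
    ;; survives across several statements
    (def conn (java.sql.DriverManager/getConnection "jdbc:sqlite::memory:"))
    (-> conn .createStatement (.execute "CREATE TABLE bookmarks (id int)"))
    (-> conn .createStatement (.execute "INSERT INTO bookmarks VALUES (1)"))
    ;; ... run tests against conn ...
    (.close conn)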

After banging my head against this for a while, the only option I could come up with was to extract the relevant with-transaction from the setup code. Instead, you have to wrap the tests in the transaction and then call the setup code, like this:

    (defn make-bookmark-table [db]
      (jdbc/db-do-prepared db
           (ddl/create-table :bookmarks
                      [:id :int :primary :key]
                      [:type :int]
                      [:title "longvarchar"])))

    (defn setup-tables [db]
       (make-bookmark-table db))

    (defn add-bookmark [db]
       (jdbc/db-do-prepared db
           (str 
              "INSERT INTO bookmarks (id, type, title) "
              "VALUES ('12453', '2', 'a bookmark' )")))

    (defn remove-bookmark [db]
       (jdbc/db-do-prepared db
            (str "DELETE FROM bookmarks WHERE id = '12453'")))

    (facts "Testing database access to bookmarks"
       (jdbc/with-db-transaction [db testdbspec]
            (setup-tables db)
            (with-state-changes [(before :facts (add-boomark db))
                                          (after :facts (remove-boomark db))]
                 (fact "We can retrieve a list of bookmarks"
                     (fetch-tags :dataspec db) => [{:title "a bookmark"}]))))

This works as expected.


Dependency inversion in Clojure

The problem

I was recently reading a nice German book on Effective Software Architecture by Gernot Starke and stumbled upon a discussion of the dependency inversion principle, which got me thinking. Gernot Starke first discusses the problem with an allusion to traditional procedural programming (translation mine):

Classical designs for procedural languages show a very characteristic structure of dependencies. As (the following figure) illustrates, these dependencies typically start from a central point, for instance the main program or the central module of an application. At this high level, these modules typically implement abstract processes. However, they depend directly on the implementation of the concrete units, i.e. the very functions. These dependencies cause big problems in practice: they inhibit changing the implementation of concrete functions without causing impacts on the overall system.

Classical dependencies in procedural systems

He then goes on to introduce the idea of programming against abstractions and the dependency inversion principle, first coined in Bob Martin’s DIP article (see also the thorough discussion in Brett Schuchert’s article on DIP). Basically, the idea is that the integrating process refers only to abstractions (i.e. interfaces) which are then implemented by concrete elements (classes), cf. the next figure.

Integrate with abstractions

When I take a look at some of my recent Clojure code, or at some older code I’ve written in Common Lisp, I immediately recognize dependencies that correspond to those in a classical procedural system. Let’s go for an example and take a look at one specific function in kata 4, data munging:

(ns kata4-data-munging.core
  (:require [kata4-data-munging.parse :refer [parse-day]]
            [clojure.java.io :as io]))

(defn find-lowest-temperature
  "Return day in weatherfile with the smallest temperature spread"
  [weatherfile]
  (with-open [rdr (io/reader weatherfile)]
    (loop [lines (line-seq rdr) minday 0 minspread 0]
      (if (empty? lines)
        minday
        (let [{mnt :MnT mxt :MxT curday :day} (parse-day (first lines)) ; <-- dependency!
              curspread (when (and mnt mxt) (- mxt mnt))]
          (if (and curday curspread
                   (or (= minspread 0)
                       (< curspread minspread)))
            (recur (next lines) curday curspread)
            (recur (next lines) minday minspread)))))))

The dependency here is on the concrete implementation of parse-day, you can basically ignore the rest for the argument here. Given that this was a small coding kata, this is not unreasonable (and in the course of the kata, the code changes to be more general), but the issues here are obvious:

  • if we would like to parse a weather-file with a different structure, we have to change find-lowest-temperature to call out to a different function,
  • if the result of the new function differs, again we have to change the implementation of find-lowest-temperature,
  • we also have to change the namespace declaration, i.e. we probably want to require a different module.

Clojure’s built-in solutions

The application of the dependency inversion principle is typically shown in the context of object-oriented programming languages, like Java where you use interfaces and classes implementing those interfaces for breaking the dependency on concrete implementations, cf. the figure above again. But as we’ll see the principle can be applied independently of object-orientation. I’ll discuss higher-order functions, protocols and multimethods as potential solutions.

Higher order functions

A first and probably painfully obvious option is to make use of the fact that Clojure treats functions as first-class objects and supports higher-order functions. This simply means that we can pass the parsing function as an argument to find-lowest-temperature.

(defn find-lowest-temperature
  "Return day in weatherfile with the smallest temperature spread"
  [weatherfile parsefn] ; <-- function as parameter
  (with-open [rdr (io/reader weatherfile)]
    (loop [lines (line-seq rdr) minday 0 minspread 0]
      (if (empty? lines)
        minday
        (let [{mnt :MnT mxt :MxT curday :day} (parsefn (first lines))
              curspread (when (and mnt mxt) (- mxt mnt))]
          (if (and curday curspread
                   (or (= minspread 0)
                       (< curspread minspread)))
            (recur (next lines) curday curspread)
            (recur (next lines) minday minspread)))))))

This way, we can simply call (find-lowest-temperature "myweatherfile" parse-day) and freely substitute whatever file format and accompanying parse function we need. What does this buy us?

  • We no longer have to modify find-lowest-temperature when we want to use a different parse-day function.
  • The namespace containing find-lowest-temperature also no longer requires the (namespace containing the) parse function.

But there is also a downside: find-lowest-temperature assumes that any parsing function it gets fed adheres to a signature that is entirely implicit: parsefn needs to take exactly one line and needs to return a map with given key names. Higher-order functions don’t provide a solution for this per se, so in order to solve the implicit-signature issue we need to look elsewhere. This is nothing Clojure-specific: in Java, you might pass in an object as a method parameter, via setter methods or via constructor injection (cf. dependency injection), and Python’s or Ruby’s duck typing basically works the same way: the caller of a method simply assumes that the callee offers a method with the right signature. It is the responsibility of the caller (of find-lowest-temperature) to provide a matching function for parsefn.

However, this actually amounts to just moving the problem from one level to the next: now some other level has to decide which concrete parse function to use. This next level will again have the exact same problems: it will depend on the concrete implementations of both find-lowest-temperature and parse-day (or any other parse function). If you think this through, it’s obvious that in general, at one point or another, you have code that determines which function to call and which parameters to use. The question is only whether we can use abstractions there or whether we have to use concrete implementations. We’ll return later to this issue that some other level now has to handle the problem.
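To make that concrete, here is a minimal sketch of such a “next level”: a hypothetical main namespace acting as the composition root (the namespace and the wiring are made up for illustration):

(ns kata4-data-munging.main
  (:require [kata4-data-munging.core :refer [find-lowest-temperature]]
            [kata4-data-munging.parse :refer [parse-day]]))

;; the only place that knows both the concrete parse function
;; and the concrete data file
(defn -main [& args]
  (println (find-lowest-temperature "weather.dat" parse-day)))

The point is that this knowledge now lives at the outermost layer, whose whole job is wiring up concrete implementations.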


Coding katas Clojure -- Trigrams

Kata 14 is a seemingly simple one that is concerned with, as Dave Thomas puts it, “the heuristics of processing” texts, using trigrams to produce (more or less) random new texts.

Trigrams are not a new concept for me. Although the underlying concept is simple, they can be used for many interesting applications. Trigrams are a special case of N-grams, where N=3 turns out to be especially useful (as in “giving better results than other values for N”) for natural language processing, at least for western languages. Nearly a decade ago, I had the pleasure to collaborate with some rather smart people who used trigrams to identify “matching” text snippets between dictionary entries. The idea was similar to what is described in this article on using trigram vectors and nearest-neighborhood calculation for document classification. As I’m generally interested in NLP and not only in doing coding katas, I will mainly focus on the trigram aspects in this kata, not so much on the random text generation part.

If you followed the link to the Wikipedia article, it’s clear from the kata description that we need word-level trigrams, not character-level trigrams. The kata description also already reveals the data structure to use for solving the task, a HashMap, and the algorithm is described in enough detail to be straight-forward.

Let’s augment the kata a little and decompose the tasks:

  1. split some string into n-grams with default n=3, where we might want to apply different criteria to apply on where we can split the string (e.g. after each character or after each word)
  2. parse a file into n-grams, where we need to consider sentence boundaries
  3. parse a collection of files into n-grams concurrently (just to speed up parsing of a larger file collection and also to introduce another possibility to learn a little more about Clojure’s specific tools to handle concurrency)
  4. do some analysis on the trigrams found in the recommended Tom Swift and his aircraft text
  5. modify the n-gram computation to yield the “first two words as key / list of all third words value” map described in the kata
  6. build a lazy-seq version of the text generation algorithm (because, as the example in the description already shows, there might be circles which could lead to infinite results)
  7. maybe implement the nearest neighborhood classification scheme described in the paper linked just for fun

But first things first: let’s parse some string into trigrams. This, first of all, requires tokenization. As a first obvious naive idea, we start out with simple string splitting, using clojure.string. First let’s split on all whitespace #"\s", using the first sentence in the Tom Swift text:

kata14-trigrams.core> (str/split "Are you all ready, Tom?" #"\s")
["Are" "you" "all" "ready," "Tom?"]

This already shows the issues surrounding punctuation that Dave Thomas mentions in the kata description. Basically, we have to consider what we want to do with sentence boundaries. Fortunately, we’re ultimately using Java’s Pattern class, so we can also match (or split) on punctuation, although probably not on all punctuation, but only on those which signify a sentence boundary (i.e. the charset [.!?] followed by either whitespace #"\s+" or end of line $):

kata14-trigrams.core> (str/split "Are you all ready - Tom?" #"\s*\p{Punct}\s*")
["Are you all ready" "Tom"]
kata14-trigrams.core> (str/split "Are you all ready, Tom?" #"\s*[.?!](\s+|$")
["Are you all ready, Tom"]
kata14-trigrams.core> (map #(str/split %1 #"\s+")
                      (str/split "Are you all ready, Tom? I want to go."
                        #"\s*[.!?](\s+|$)"))
(["Are" "you" "all" "ready," "Tom"] ["I" "want" "to" "go"])

This still leaves the question open of what we want to do with the comma or any other interleaving punctuation. It’s clear that we want to get rid of it somehow, but it’s not too clear whether we would like to see “Tom” as a valid consecutive element in the text generation part. Probably not, so an idea here would be to try to make the remaining punctuation elements visible as separate tokens.

Let’s put this issue aside and move on to the actual n-gram generation. Quite obviously, “computing” an n-gram is really just a simple sequence operation: you move through the sequence, always taking n elements as needed, until you’re done. This is completely straight-forward to accomplish with a simple accumulator (acc) to collect the results that we take while looping through the sequence. (Code is on github, as always.)

(defn ngram
   "Given a sequence sq and a number n, return a lazy sequence of contiguous subsequences of n items that appear in sq."
   ([sq n]
     ;; wrap the eager accumulator version in a lazy sequence
     (lazy-seq (ngram sq n [])))
   ([sq n acc]
     (if-let [s (seq sq)]
       (recur (rest s) n (conj acc (take n s)))
       acc)))

Given that we might want to run this on longer strings (texts, books), it makes sense to make this lazy by wrapping the call to the accumulator version in a lazy sequence.

 kata14-trigrams.core> (ngram [1 2 3 4 5 6] 3)
 ((1 2 3) (2 3 4) (3 4 5) (4 5 6) (5 6) (6))
 kata14-trigrams.core> (realized? (ngram [1 2 3 4 5 6] 3))
 false
 kata14-trigrams.core> (take 2 (ngram [1 2 3 4 5 6] 3))
 ((1 2 3) (2 3 4))

Okay, let’s combine this with our clojure.string/split experiments:

kata14-trigrams.core> (map #(str/split %1 #"\s+")
                      (str/split "Are you all ready, Tom? I want to go."
                        #"\s*[.!?](\s+|$)"))
(["Are" "you" "all" "ready," "Tom"] ["I" "want" "to" "go"])
kata14-trigrams.core> (map #(ngram %1 3) *1)
((("Are" "you" "all") ("you" "all" "ready,") ("all" "ready," "Tom") ("ready," "Tom") ("Tom"))
 (("I" "want" "to") ("want" "to" "go") ("to" "go") ("go")))

Okay, it looks like we basically have everything we need at hand; now let’s make it a little more formal. First of all, the tokenization step. So far, we have done two things in one step: sentence boundary detection and in-sentence tokenization. I already hinted at the need to do further work on the in-sentence tokenization wrt. punctuation, and there might be other steps that we want to add in the future, for instance stemming or further morphological analysis. I’ll not go in the direction of a more thorough tokenization method, which would require going beyond regular expressions for many languages, but let’s at least communicate clearly how the tokenize function is intended to work.

(fact "Tokenize an input string, splitting sentences along the way"
      (tokenize "Are you ready, Tom? I want to go.") => '(("Are" "you" "ready" "," "Tom")
                                                                  ("I" "want" "to" "go")))

(defn tokenize
  "Tokenize a string"
  [string]
  (-> string
     (split-sentences)
     (tokenize-sentences)))

This is basically the top-level function for tokenizing an incoming string, threading the result of splitting sentences into a tokenization function. Let’s take a look at the details, which shouldn’t be surprising at all. First we have split-sentences, which is basically str/split on sentence end markers. Then we have split-on-whitespace, which we’ve also already seen. split-off-punctuation is basically handling all punctuation not used up during sentence boundary detection, which we will want to keep. And then we have two wrappers tokenize-sentence(s) which do nothing more than handling mapping over the various bits and pieces. This concludes the tokenization step, phew.

(fact "Split sentences in a string"
      (split-sentences "Are you ready, Tom? I want to go.") => ["Are you ready, Tom" "I want to go"])

 (defn split-sentences
   "Split a string into a sequence of sentences"
   [string]
   (str/split string #"\s*[.!?](\s+|$)"))

    (fact "Splitting on whitespace"
     (split-on-whitespace "Are you  ready") => ["Are" "you" "ready"]
     (split-on-whitespace "Are") => ["Are"])

(defn split-on-whitespace
  "Take a string and split it's content on whitespace, removing the whitespace"
  [string]
      (str/split string #"\s+"))

(fact "Splitting but keeping punctuation if any"
     (split-off-punctuation "ready,") => ["ready" ","]
     (split-off-punctuation "ready") => ["ready"]
     (split-off-punctuation "!+#?") => [""])

(defn split-off-punctuation
   "Take a string and split it's content, keeping punctuation as new tokens"
  [string]
  (let [match (re-find #"(\w+)(\p{Punct})?" string)
            result (rest (keep identity match))]
     (if (seq result)
    result
    (vector ""))))

(fact "Tokenize a sentence"
      (tokenize-sentence "Are you ready, Tom?") => '("Are" "you" "ready" "," "Tom" "?"))

(defn tokenize-sentence
  "Take a single sentence and return a sequence of tokens for it"
  [sentence]
  (flatten (map split-off-punctuation
                (split-on-whitespace sentence))))

(fact "Tokenize some sentences"
       (tokenize-sentences ["Are you ready, Tom?" "I want to go."]) => '(("Are" "you" "ready" "," "Tom" "?")
                                                                         ("I" "want" "to" "go" ".")))

(defn tokenize-sentences
   "Take a sequence of sentences and return a sequence of tokens for  each sentence"
  [sentences]
  (map tokenize-sentence sentences))

When combining this with the ngram function, this is already pretty close to what we’ll need to solve the original kata, although we will need some further adjustment to the data structure, which I’m going to tackle later.

kata14-trigrams.core> (map #(ngram %1 3)
                       (tokenize "Are you ready, Tom? I want to go."))
((("Are" "you" "ready") ("you" "ready" ",") ("ready" "," "Tom") ("," "Tom") ("Tom"))
(("I" "want" "to") ("want" "to" "go") ("to" "go") ("go")))

So, let’s move to the next part, which is generating ngrams for an entire file. First of all, I think that again we’d better do this in a lazy fashion; no need to do lots of processing on huge files if it might not be necessary. Looking back, we can see that for any given string, tokenize processes the same string at least thrice: we first split on sentence boundaries, then handle punctuation and finally split on whitespace. If you think about reading files, it’s quite obvious that the readLine method of java.io.BufferedReader, which is behind Clojure’s line-seq, is also processing buffers quite similarly, looking for line ends to split on. Maybe we can combine some of the work? Let’s start out with figuring out how to process a file char by char lazily. An answer to a stackoverflow question on processing files per character in Clojure closely follows line-seq:

(defn char-seq
  [^java.io.Reader rdr]
  (let [chr (.read rdr)]
    (when (>= chr 0)
      (cons (char chr) (lazy-seq (char-seq rdr))))))

This is a start, but not too helpful, as discussed in this other stackoverflow thread on processing large text files: the result of line-seq and char-seq is a cons, and the lazy part of it doesn’t help you much when you’re not processing the file right away. Instead, one might want to return a lazy sequence of results, closing the file only afterwards. This could look like this:

; cf. https://stackoverflow.com/questions/4118123/read-a-very-large-text-file-into-a-list-in-clojure/10462159#10462159
(defn lazy-file-chars [file]
   (letfn [(lfc-helper [rdr]
              (lazy-seq
               ;; .read returns -1 at EOF (never nil), so test the int value
               (let [chr (.read rdr)]
                 (if (>= chr 0)
                   (cons (char chr) (lfc-helper rdr))
                   (do (.close rdr) nil)))))]
      (lfc-helper (clojure.java.io/reader file))))

When you look at this simple piece of code, besides reading characters from disc and building up a lazy-seq, it’s also a) doing a sanity check on the input and b) building up a particular structure to return. Sounds exactly like the hooks we might want to consider for parsing sentences on read. Let’s rip the code apart and combine it with the guts of split-sentences (matching explicitly on characters instead of using regular expression character classes):

(defn read-next-sentence [rdr aux]
  (let [chr (.read rdr)]
    (if (>= chr 0)
      (let [character (char chr)]
        (cond (= \. character) aux
              (= \? character) aux
              (= \! character) aux
              (= \tab character) (recur rdr (conj aux \space))
              :else (recur rdr (conj aux character))))
      aux)))

(defn file-sentences [file]
  (letfn [(lfs-helper [rdr]
            (lazy-seq
             (if-let [sentence (seq (read-next-sentence rdr (vector)))]
               (cons (apply str sentence) (lfs-helper rdr))
               (do (.close rdr) nil))))]
    (lfs-helper (clojure.java.io/reader file))))

read-next-sentence has some obvious deficiencies: it now splits sentences on every occurrence of .?!, not only on those occurrences which are followed by whitespace. Second, it should also handle (only) multiple occurrences of \return\newline characters (CRLF) as sentence delimiters. Solving both of these issues requires going in the direction of real parsers, where we would have to treat aux as a stack of previously read characters. And we might not only want to deal with tabs specially (turning them into a space); e.g., we might also want to collapse multiple spaces/tabs into a single space, etc. I’ll just draw a sketch here that we might want to elaborate further:

(fact "Test for sentence end"
      (sentence-end-p \space   [\g \o \.])                    => true
      (sentence-end-p \space   [\r \e \a \d \y \?])           => true
      (sentence-end-p \newline [\y \return \newline \return]) => true
      (sentence-end-p \newline [\y \newline])                 => true
      (sentence-end-p "B"      [\.])                          => false
      (sentence-end-p \newline [\y \return])                  => false
      (sentence-end-p \newline [\y])                          => false)

   (fact "Parse result for characters depends on previous reads"
      (next-char-result \space   [\g \o \space])  => [\g \o \space]
      (next-char-result \tab     [\g \o])         => [\g \o \space]
      (next-char-result \tab     [\g \o \space])  => [\g \o \space]
      (next-char-result \tab     [\g \o \tab])    => [\g \o \space]
      (next-char-result \return  [\g \o])         => [\g \o]
      (next-char-result \newline [\g \o \return]) => [\g \o \space]
      (next-char-result \newline [\g \o])         => [\g \o \space])

(defn sentence-end-p [character charstack]
  (cond (and (= character \space)
             (some (partial = (peek charstack)) [\. \? \!])) true
        (and (= character \return)
             (some (partial = (peek charstack)) [\. \? \!])) true
        (and (= character \newline)
             (some (partial = (peek charstack)) [\. \? \!])) true
        (and (= character \newline)
             (or (= (peek charstack) \newline)
                 (and (= (peek charstack) \return)
                      (= (peek (pop charstack)) \newline)))) true
        :else false))

(defn next-char-result [character charstack]
  (cond (and (empty? charstack)
             (or (= character \space)
                 (= character \tab)
                 (= character \newline)
                 (= character \return))) charstack
        (and (= character \space)
             (= (peek charstack) \space)) charstack
        (and (= character \tab)
             (= (peek charstack) \space)) charstack
        (and (= character \tab)
             (= (peek charstack) \tab)) (conj (pop charstack) \space) ; should never happen
        (= character \tab) (conj charstack \space)
        (= character \return) charstack
        (and (= character \newline)
             (= (peek charstack) \space)) charstack
        (and (= character \newline)
             (= (peek charstack) \return)) (conj (pop charstack) \space) ; should never happen
        (= character \newline) (conj charstack \space)
        :else (conj charstack character)))

I’ll leave it at that, although it’s clear that we can and probably should extend it in many different ways. Here are the adapted functions to use these:

(defn read-next-sentence
  ([rdr]
   (read-next-sentence rdr (vector) (vector)))
  ([rdr seen result]
   (let [chr (.read rdr)]
     (if (>= chr 0)
       (let [character (char chr)]
         (if (sentence-end-p character seen)
           result
           (recur rdr (conj seen character)
                  (next-char-result character result))))
       result))))

(defn read-sentences [x]
  (letfn [(lfs-helper [rdr]
            (lazy-seq
             ;; read-next-sentence returns a vector, which is truthy even
             ;; when empty, so use seq to stop at the end of the input
             (if-let [sentence (seq (read-next-sentence rdr))]
               (cons (apply str sentence) (lfs-helper rdr))
               (do (.close rdr) nil))))]
    (lfs-helper (clojure.java.io/reader x))))

The result is that we now have a read-next-sentence function which just reads (non-lazily) and a (local) helper function which uses it to build up a lazy sequence of sentences. Let’s test it briefly:

kata14-trigrams.core> (pprint
                       (map #(ngram %1 3)
                            (tokenize-sentences
                             (take 2
                                   (read-sentences test-file)))))
((("The" "Project" "Gutenberg")
  ("Project" "Gutenberg" "EBook")
  ("Gutenberg" "EBook" "of")
  ("EBook" "of" "Tom")
  ("of" "Tom" "Swift")
  ("Tom" "Swift" "and")
  ("Swift" "and" "his")
  ("and" "his" "Airship")
  ...

Although one would probably now integrate more functionality from tokenize-sentences into read-next-sentence, I won’t elaborate on this now and consider task 2 solved. As a side note, this looks as if it were restricted to files now, but it really isn’t, as clojure.java.io/reader will happily accept StringReader arguments:

kata14-trigrams.core> (import java.io.StringReader)
java.io.StringReader
kata14-trigrams.core> (take 2 (read-sentences (StringReader. "This is a sentence. And another one")))
("This is a sentence." "And another one")

Content-type negotiation and method dispatch in Clojure

Christophe Rhodes’ post on http-content-negotiation and generalized specializers in CLOS (Common Lisp Object System) made an ugliness in a small Clojure web application of mine jump right into my face. I’m using liberator to set up so-called resources (side note: while this post assumes some familiarity with liberator, the main aspect is really multi-method handling in Clojure — I hope it’s useful even if you don’t know or care about liberator). Resources serve as ring handlers (typically used with compojure) and deal with most aspects of request handling in a fairly declarative manner, including content negotiation. Liberator provides decision points and handlers, moving a so-called context around between the various functions that you associate with resources — map-like data returned from a decision function will be merged with the existing context. So far, so good. The bad part, however, was that I used a single resource definition to provide multiple media types. More precisely, my resource definition has an anonymous handler function which uses a simple value check to serve the correct media type (we’re talking about the Accept header of the incoming request, cf. RFC 2616, Sec. 14.1), like this (as you can imagine, that’s a somewhat simplified version):

 (defresource users
   :available-media-types ["text/html" "application/json"]
   :method-allowed? (request-method-in :get)
   :exists? (fn [context]
                {:users (find-users)})
   :handle-ok (fn [context] 
                  (let [media-type
                       (get-in context [:representation :media-type])]
                    (condp = media-type
                        "application/json"
                          (generate-string (get context :users))
                        "text/html"
                          (usersview (:users context)))))
   :handle-not-found (fn [context]
                         (let [media-type
                              (get-in context [:representation :media-type])]
                            ;; TODO: Handle not found for HTML
                            (condp = media-type
                                "application/json"
                                   (generate-string {:error "No such user"})))))

From a functional point of view, there is not much wrong with this. It’s very close to the description in the relevant part of the liberator tutorial on content negotiation. From an aesthetic point of view, however, the condp expressions that finally determine how to present the resource data are plainly ugly. To get rid of this ugliness, the inspiration I took from Christophe’s article is to rely on Clojure’s method dispatch (which covers only the simple part of Christophe’s post). The idea is straight-forward: instead of using a simple anonymous function which convolutes two different media types, introduce a multimethod like users-handle-ok that dispatches on media type. We simply define a dispatch function (via defmulti) containing the code which determines the accepted media type (e.g. “application/json”); this value is then used by Clojure to determine the right method to use.

(defmulti users-handle-ok
  "Handle OK for users resource for different media-types"
  (fn [context]
    (get-in context [:representation :media-type])))

(defmethod users-handle-ok "application/json" [context]
  (generate-string (get context :users)))

(defmethod users-handle-ok "text/html" [context]
  (usersview (get context :users)))

;; some code elided here ...

(defresource users
  :available-media-types ["text/html" "application/json"]
  :method-allowed? (request-method-in :get)
  :exists? #(users-exists? %)
  :handle-ok #(users-handle-ok %)
  :handle-not-found #(users-handle-not-found %))

From a clean code perspective, this has two benefits: we now have mainly code left which does one thing at a time (SRP), which is what we should aim for and which makes unit testing also somewhat easier and more to the point. It also slims down the amount of code in the resource definition considerably. It’s now much more obvious that the resource definition is (from the application developer point of view) not much more than an integration point for different other functions.

Of course, we can use a similar approach for all of the other decision points and handlers as well. Let’s assume that I have a resource that can generate HTML and JSON, but expects all incoming POST requests to contain JSON only. This will look very similar to the approach above, only this time we dispatch on the request method. If we now POST to this resource with a different Content-Type, we’ll receive a “415 Unsupported media type” reply from liberator.

(defmulti known-content-type?
  "Determine known content types depending on request-method"
  (fn [context]
    (get-in context [:request :request-method])))

(defmethod known-content-type? :post [context]
  ;; allow only application/json for POST requests
  (when-let [content-type (get-in context [:request :content-type])]
    (condp = content-type
      "application/json" true
      false)))

(defmethod known-content-type? :default [_]
  true)

(defresource someresource
  :available-media-types ["text/html" "application/json"]
  :method-allowed? (request-method-in :get :post)
  :known-content-type? #(known-content-type? %)
  :exists? (fn [context]
             (when-let [data (find-data)]
               {:data data}))
  :handle-ok #(handle-ok %)
  :post! #(handle-post! %)
  :post-redirect? (fn [context] {:location (url-in-context "someurl")}))

As you might guess, this known-content-type? method is probably applicable to most resources. But how would you handle the exception to the exception? This turns out to be quite easy. In line with most examples of multi-methods I’ve seen so far, we’ve used a simple value to dispatch on. But of course a map is a value, too. Given the need to override (specialize) the method for some resource, the idea is to have the dispatch function return a map containing the request-method and the resource. We then define the methods for the appropriate map values. The nice thing about this is that it’s very easy to arrange for default behavior per request method by just leaving out the resource key: the dispatch function takes care not to add a superfluous :resource key when the resource did not add one to the context.

(defmulti known-content-type?
  "Determine known content types depending on request-method"
  (fn [context]
    (logging/info (str "Found resource: " (:resourceclass context)))
    (logging/info (str "Method: " (get-in context [:request :request-method])))
    (let [dispatchval {:request-method (get-in context [:request :request-method])}]
      ;; only resources which explicitly assoc :resource onto the
      ;; context (cf. special-resource below) extend the dispatch value
      (if-let [resource (:resource context)]
        (assoc dispatchval :resource resource)
        dispatchval))))
(defmethod known-content-type? {:request-method :post} [context]
  ;; allow only application/json for POST requests
  (logging/info "Determining known content-type for :post!")
  (when-let [content-type (get-in context [:request :content-type])]
    (condp = content-type
      "application/json" true
      false)))
(defmethod known-content-type? :default [_]
  (logging/info "Determining known content-type for :default!")
  true)

(defresource some-resource
  :available-media-types ["text/html" "application/json"]
  :method-allowed? (request-method-in :get :post)
  :known-content-type? #(known-content-type? %)
  :exists? (fn [context]
             (when-let [data (find-data)]
               {:data data}))
  :handle-ok #(handle-ok %)
  :post! #(handle-post! %)
  :post-redirect? (fn [context] {:location (url-in-context "someurl")}))


(defresource special-resource
  :available-media-types ["text/html" "application/json"]
  :method-allowed? (request-method-in :get :post)
  :known-content-type? #(known-content-type? (assoc % :resource special-resource))
  :exists? (fn [context]
             (when-let [data (find-special-data)]
               {:data data}))
  :handle-ok #(special-handle-ok %)
  :post! #(special-post! %)
  :post-redirect? (fn [context] {:location (url-in-context "specials")}))


(defmethod known-content-type? {:request-method :post :resource special-resource} [context]
  (logging/info "Determining known content-type for :post and special-resources!")
  (when-let [content-type (get-in context [:request :content-type])]
    (condp = content-type
      "application/json" true
      "application/x-www-form-urlencoded" true
      false)))

With these definitions in place, the default for POST requests using this known-content-type? method would be to accept only application/json. However, the special-resource “overrides” this behavior to also accept regular form data. Posting to the various resources will produce output like the following:

2014-04-14 15:04:31,088 [main] INFO  utils - Found resource: liberator.core$resource$fn__3268@688dbd21
2014-04-14 15:04:31,088 [main] INFO  utils - Method: :get
2014-04-14 15:04:31,089 [main] INFO  utils - Known content-type for :default!
2014-04-14 15:27:14,974 [main] INFO  utils - Found resource: liberator.core$resource$fn__3268@688dbd21
2014-04-14 15:27:14,974 [main] INFO  utils - Method: :post
2014-04-14 15:27:14,975 [main] INFO  utils - Known content-type for :post!
2014-04-14 15:04:31,127 [main] INFO  utils - Found resource: liberator.core$resource$fn__3268@688dbd21
2014-04-14 15:04:31,127 [main] INFO  utils - Method: :post
2014-04-14 15:04:31,127 [main] INFO  utils - Known content-type for :post and special-resources!

Please note that known-content-type? has to be a known symbol (defined or at least declared) before it can be used in a resource definition, whereas adding the more specialized method requires special-resource to be defined; declaring it won’t be enough, because the dispatch value needs the resource’s actual value.
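
A minimal sketch of this ordering constraint, using the names from above:

;; a forward declaration suffices for referencing the multimethod
;; from a resource definition ...
(declare known-content-type?)

(defresource some-resource
  ;; ... slots as above ...
  :known-content-type? #(known-content-type? %))

;; ... but the specialized defmethod can only follow the definition of
;; special-resource, since its dispatch value contains the resource itself.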

Using maps as dispatch values seems to be a nice and powerful tool to know about. There are, however, still some points where I see room for improvement:

  • We would probably like to use the same mechanism for a ton of functions, all highly similar. E.g. the methods for handle-not-found and known-content-type? look highly similar on the structural level. Also, when you have multiple resources, the dispatch function for one method type (i.e. something like handle-ok) is probably always the same, and so are the dispatch values (i.e. the media-types our web application handles). Maybe a macro would be useful here, but I haven’t thought it through yet; a rough sketch follows after this list.
  • Handling the Accept header is actually way more complicated. Fortunately, liberator already takes care of choosing the “right” media-type (cf. again the liberator tutorial on content negotiation). However, as also discussed in the same section of said tutorial, there are more negotiable parameters which might come into play, e.g. language or encoding. This quite obviously could lead to some combinatorial explosion. The map-based approach outlined above is a way to handle it, but it essentially mimics CLOS’s dispatch on multiple arguments via a single-argument dispatch.
  • I haven’t even started to think about how one would approach the more advanced problem that Christophe is solving by using his MOP trickery generalized-specializers.
  • The name of the method users-handle-ok isn’t really telling. Of course, names like users2json or serve-users-view seem better suited to describe what the respective methods are doing, but this would obviously defeat the idea of using multi-methods and the associated benefits. Still, the name should probably not be tied so closely to the resource definition. Using the optional function name that defmethod accepts is one way to remedy this particular issue.
  • Finally, the slots of a liberator resource apparently expect function objects. Liberator won’t just take the name of a function and do the right thing; it’ll throw an exception. Not a big deal, given that we might need to mangle the implicit argument (the context) anyway, cf. the :known-content-type? slot of the special-resource.
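
Regarding the first point, here is a very rough sketch of such a macro (the name defmedia-multi is made up, and this is not thought through, as said):

(defmacro defmedia-multi
  "Define a multimethod that dispatches on the media-type negotiated by liberator."
  [name docstring]
  `(defmulti ~name ~docstring
     (fn [context#]
       (get-in context# [:representation :media-type]))))

;; (defmedia-multi users-handle-ok "Handle OK for users resource")
;; expands to the defmulti written out manually above.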

Coding katas Clojure -- Anagrams

Kata 6 is concerned with anagrams. An anagram is a word formed by reordering the characters of another word. Now, when I started out with this kata, I was sitting on a train without internet connection, so I just went ahead with what I remembered from a quick glance I had taken at the kata description the week before. So I thought all that needed to be solved was determining whether two words are anagrams of each other (the complete code, btw., can be found in my github repository for the Clojure codekatas). My initial idea for solving this was to generate the sets of characters of both words and compare those:

;; assumes (:require [clojure.string :as str]) in the ns declaration
(defn remove-blanks [word]
  (str/replace word " " ""))

(defn anagram-set? [word1 word2]
  (let [w1 (remove-blanks word1)
        w2 (remove-blanks word2)]
    (= (set w1) (set w2))))

This time, I opted for using midje for running the tests, in particular due to the possibility to run tests continuously via lein-midje. Midje takes a slightly different approach / syntax to writing tests, adding the notion of facts that are then verified. I.e., tests with midje look like this:

(facts "Testing the set implementation for checking anagrams"
      (fact "Set anagram can find anagrams"
         (anagram-set? "the law" "wealth") => true)
     (fact "Set anagram is too simplistic"
         (anagram-set? "the lalalaw" "wealth") => true))

You can already see from the latter fact what is wrong with the initial solution: it’s too simplistic with regard to handling the number of occurrences of a character. (Some might say the introduction of remove-blanks is also too complicated, but I wanted to handle Anne Clark’s “The law is an anagram of wealth”.) A fix via character frequencies is sketched below.
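
A minimal sketch of that fix (the name anagram-frequencies? is mine): comparing character frequencies instead of character sets handles repeated letters correctly.

(defn anagram-frequencies? [word1 word2]
  (= (frequencies (remove-blanks word1))
     (frequencies (remove-blanks word2))))

;; (anagram-frequencies? "the law" "wealth")     => true
;; (anagram-frequencies? "the lalalaw" "wealth") => false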

Traditional solution

When I finally had some more time to read the kata description more carefully, I recognized that the task actually is to find all anagrams of a given word, checking back against a given wordlist. That means the kata consists of two tasks: generate all possible combinations of a given character sequence, and check whether some candidate sequence amounts to a known word in the wordlist. Now, if you take a step back, it’s easy to see that anagrams are nothing other than permutations of the elements of a given (character) sequence, with the additional restriction that all such permutations must again be (known) words. So, we end up with a skeleton which looks like this:

(defn generate-anagrams
  "Generate all anagrams of word"
  [word]
  (generate-permutations word))

(defn find-anagrams
  "Find all anagrams of word in (the sequence of) words"
  [word words]
  (let [anagrams (generate-anagrams word)
        wordset (set words)]
    (loop [candidates anagrams
           result []]
      (if (empty? candidates)
        result
        (recur (rest candidates)
               (if (and (not= (first candidates) word)
                        (contains? wordset (first candidates)))
                 (concat result (list (first candidates)))
                 result))))))

Which now, of course, leaves us with the task of implementing a permutation algorithm. I must admit I had a pretty hard time coming up with something on my own without resorting to other people’s code. Given that the point of code katas is not primarily to invent algorithms on the fly, but to practice coding, I finally read the wikipedia paragraph on computing permutations in lexicographic order, which contains the blueprint of an algorithm attributed to Narayana Pandita, who described it in the 14th century already. I took a very verbatim and top-down approach this time and ended up with this as the next piece of code:

(defn next-permutation [squence]
  (when-let [k (find-largest-index-with-bigger-successor squence)]
    (let [l (find-largest-index-bigger-value squence k)
          swapped (swap-positions squence k l)
          current-perm (reverse-tail swapped (inc k))]
      current-perm)))

(defn generate-permutations [squence]
  (let [start-perm (sort squence)]
    (loop [permutation (next-permutation start-perm)
           result (list start-perm)]
      (if (or (not permutation)
              (empty? permutation))
        result
        (recur (next-permutation permutation)
               (concat result (list permutation)))))))

I went back to writing tests:

   (fact "finding the largest index with bigger successor"
         (find-largest-index-with-bigger-successor [1 2]) => 0
         (find-largest-index-with-bigger-successor [1 2 3 4]) => 2
         (find-largest-index-with-bigger-successor [1 2 4 3]) => 1
         (find-largest-index-with-bigger-successor [1 3 4 2]) => 1
         (find-largest-index-with-bigger-successor [1 4 3 2]) => 0)

The test results are taken straight from the wikipedia article. Implementing this is pretty straight-forward: we just iterate through the list, keeping track of the current position, and check whether the following element is bigger than the current one. If so, we keep the current position; otherwise we keep what we had so far as the result. When we reach the end of the sequence (or there is no successor to compare to), we have found the largest position (index) that has a successor with a bigger value. One thing is worth pointing out: the usage of (comp pos? compare) is necessary because > only works on numbers, not on characters (or keywords). Why Clojure does not follow Python here (which provides general-purpose comparison operators that use something like compare under the hood and which you can override for your data types) is beyond me.

(defn find-largest-index-with-bigger-successor [squence]
  (loop [restsq (seq squence)
         curpos 0
         curresult nil]
    (cond (or (empty? restsq)
              (empty? (rest restsq)))
          curresult
          ((comp pos? compare) (second restsq) (first restsq))
          (recur (rest restsq) (inc curpos) curpos)
          :else
          (recur (rest restsq) (inc curpos) curresult))))

The next step is finding the position of some value that is bigger than the position that we just determined. Again this is straight-forward:

   (fact "finding the largest index that has a bigger value than some other position"
         (find-largest-index-bigger-value [1 2 3 4] 2) => 3
         (find-largest-index-bigger-value [1 2 4 3] 1) => 3
         (find-largest-index-bigger-value [1 3 4 2] 1) => 2
         (find-largest-index-bigger-value [1 4 3 2] 0) => 3)

(defn find-largest-index-bigger-value [squence index]
  (let [compval (nth (vec squence) index)]
    (loop [restsq (seq squence)
           curpos 0
           curresult nil]
      (cond (empty? restsq)
            curresult
            ((comp pos? compare) (first restsq) compval)
            (recur (rest restsq) (inc curpos) curpos)
            :else
            (recur (rest restsq) (inc curpos) curresult)))))

We now have to swap these two elements which is easy enough to do with vectors:

   (fact "swapping two positions in a sequence"
         (swap-positions [1 2 3 4] 2 3) => [1 2 4 3]
         (swap-positions [1 2 4 3] 1 3) => [1 3 4 2]
         (swap-positions [1 3 4 2] 1 2) => [1 4 3 2]
         (swap-positions [1 4 3 2] 0 3) => [2 4 3 1])

(defn swap-positions [squence k l]
  (let [seqvec (vec squence)]
    (assoc (assoc seqvec k (nth seqvec l))
           l (nth seqvec k))))

I first fiddled around with take and drop to avoid converting the input sequence to a vector, but this made the code much more complex. Why there is no general-purpose position-based replace is, again, beyond me. There are a number of discussions around (the lack of) a general-purpose subsequence function which point out issues with complexity (code- and performance-wise), but I doubt that most manually-crafted workarounds lead to any better solutions. Maybe I’m missing something obvious here.
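
For illustration, this is the kind of general-purpose, position-based replace I had in mind; a sketch (the name replace-at is made up):

(defn replace-at
  "Return squence with the element at index replaced by value."
  [squence index value]
  (let [v (vec squence)]
    (concat (subvec v 0 index) [value] (subvec v (inc index)))))

;; (replace-at [1 2 3 4] 2 9) => (1 2 9 4)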

Next, we need to reverse the rest of the sequence behind the position which we just swapped. The example in the wikipedia article is not entirely clear for longer remainders, but some tests revealed that the right position is really the one we just used, like this:

   (fact "reverse the tail of a sequence"
         (reverse-tail [1 2 4 3] 2) => [1 2 3 4]
         (reverse-tail [1 3 4 2] 1) => [1 2 4 3]
         (reverse-tail [1 4 3 2] 0) => [2 3 4 1])

(defn reverse-tail [squence tail-position]
  (let [prefix (take tail-position squence)
        tail (drop tail-position squence)
        revtail (reverse tail)]
    (concat prefix revtail)))

So, with this we now have all pieces in our hands and can test the entire algorithm:

   (fact "finding the next permutation"
         (next-permutation [1 2 3 4]) => [1 2 4 3]
         (next-permutation [1 2 4 3]) => [1 3 2 4])

Which will, surprise, surprise, give the expected results. So, with this we are able to generate all 24 permutations of [1 2 3 4], and we can go back to our anagram task. It turned out that the anagram tests would still fail: I hadn’t thought about the fact that, after destructuring the word into a character sequence, the permutation results need to be combined back into strings. That’s easy enough to correct by applying str to all permutation results.

(defn generate-anagrams [word]
    (map (partial apply str) (generate-permutations word)))

Now, when you run this code with the test data given in the original kata:

   (facts "Testing the anagram implementation"
       (fact "Generating all anagrams"
         (generate-anagrams "ftw") => '("ftw" "fwt" "tfw" "twf" "wft" "wtf"))
          (let [words (split-lines (slurp "wordlist.txt"))]
            (find-anagrams "kinship" words) => '("pinkish")
            (find-anagrams "enlist" words) => '("inlets" "listen" "silent")
            (find-anagrams "boaster" words) => '("boaters" "borates")
            (find-anagrams "sinks" words) => '("skins")
            (find-anagrams "knits" words) => '("stink")
            (find-anagrams "rots" words) => '("sort")
            (find-anagrams "thelaw" words) => '("wealth")))

I ran into a StackOverflowError for “boaster”, though. Looking at the code, it’s immediately obvious that the only possible cause is in generate-permutations, which generates the result eagerly. So, let’s change that to a lazy variant.

(defn- gen-perms [squenze]
    (lazy-seq
        (when-let [permutation (next-permutation squenze)]
            (cons permutation (gen-perms permutation)))))

(defn generate-permutations [squence]
    (let [start-perm (sort squence)]
        (cons start-perm (gen-perms start-perm))))

I use an external helper here because we need to add the start permutation to the final result up-front and that doesn’t lend itself to a self-recursive function. Anyway, this concludes the first solution using a rather traditional algorithm.
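
A quick REPL check of the lazy version (only the requested prefix of permutations is computed now):

(take 2 (generate-permutations [1 2 3]))
;; => ((1 2 3) (1 3 2))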

Declarative solution

For the next solution, I intended to use something else. Last year, I had the chance to hear David Nolen talk about core.logic which reminded me a lot of the old days in which I was using Prolog for computational linguistics and logic programming. In particular I was thinking of a permutation implementation in Prolog described in Richard O’Keefe’s Craft of Prolog, which I briefly discuss below:

permutation(Xs, Ys) :-
    permutation(Xs, Ys, Ys).

permutation([],[],[]).
permutation([X|Xs], Ys1, [_|Bound]) :-
    permutation(Xs, Ys, Bound),
    insert(Ys, X, Ys1).

insert(L, X, [X|L]).
insert([H|T], X, [H|L]) :-
    insert(T,X,L).

If you would want to generate all permutations for a list [1,2,3], you would call permutation([1,2,3],Q) and your Prolog interpreter of choice (e.g. SWI-Prolog) would generate the first possible result for Q and via backtracking generate all other possible permutations.

?- permutation([1,2,3],Q).
Q = [1, 2, 3] ;
Q = [2, 1, 3] ;
Q = [2, 3, 1] ;
Q = [1, 3, 2] ;
Q = [3, 1, 2] ;
Q = [3, 2, 1].

Let’s briefly discuss the Prolog solution; this will make it easier to discuss some issues when translating it to core.logic later on. Prolog programs consist of facts and rules which are used to prove a query. E.g., permutation([],[],[]). is a fact asserting that the permutation of an empty list is the empty list; anything involving :- is a rule. Prolog uses unification (hang on, you’ll see in a second what that is). Next, you see all those [X|Xs] constructions. These are basically list (de-)construction operations: they split off the first element, or add an element (head) and some rest (tail) to form a new list. The point here is that if you call permutation([1,2,3],Q,Q), Prolog will try to unify [1,2,3] with [X|Xs], which is possible when X=1 and Xs=[2,3]; i.e., Prolog automatically tries argument unification. The _ construct means “ignore”, “don’t care”. If we consider only the first insert clause (the fact), it can be used by Prolog via unification to answer queries about any argument of the predicate:

?- insert([2,3],1,Q).
Q = [1, 2, 3] 
?- insert([2,3],Q,[1,2,3]).
Q = 1 
?- insert(Q,1,[1,2,3]).
Q = [2, 3] 

The key to understand how permutation works is considering how insert works: the insert rule will deconstruct the first argument (assuming it’s a list) and insert the second argument to it. This way, X will be inserted in all possible positions of the list:

?- insert([2,3],1,Q).
Q = [1, 2, 3] ;
Q = [2, 1, 3] ;
Q = [2, 3, 1].

Now, if you take a closer look at the permutation/3 rule, you’ll recognize that it contains a recursive call to itself. This will basically decompose the first argument (if given) until it reaches the permutation fact governing the base case, i.e. the empty list. It will then insert the elements according to the behavior discussed above. You can think of each comma , as an and that also implies an order: the insert clause will only be used after the recursive call to permutation has been processed on each level, respectively. This basically amounts to a depth-first search: to generate the multiple values for Q, Prolog will retry parts of the computation to find different possible combinations, which in particular triggers the computation of the different results of insert/3.

Now let’s come back to Clojure’s core.logic, which provides an implementation of many useful things for logic programming, based on miniKanren. However, as an add-on to a functional programming language, we have to use some special operators to translate the Prolog code. The first thing needed is the declaration of the query variables (e.g. Q) within the call to run*; without it you would never see any results (besides run* causing the inference machinery to, well, run). The next operator is ==, which is used for unification, just as = would be used inside a Prolog rule. Sometimes you need temporary logic variables, which you can introduce with fresh. There is also an explicit operator conde (similar to cond), which can be thought of as providing disjunction (or); you need it to mimic Prolog’s multiple facts/rules for the same predicate, e.g. having both a fact and a rule for permutation/3. There are also further relations, e.g. conso, which can be used to split or construct lists. This is all nicely explained in the core.logic Primer.
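
To get a feel for these operators before tackling insert, here is a tiny example of my own: asking for every possible head of the list (1 2 3).

;; assuming (require '[clojure.core.logic :refer [run* fresh conde conso ==]])
(run* [q]
  (fresh [t]
    (conso q t [1 2 3])))
;; => (1)

I actually started out trying to convert the Prolog code with not much else, like this: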

(defn insert-broken [x l nl]
  (conde
    [(conso x l nl)]
    [(fresh [h t]
       (conso h t l)
       (insert-broken x t l)
       (conso h l nl))]))

You’ll note that I exchanged the positions of the single element and the list in order to match the usual argument order of conso (or conj). Otherwise this looks like a pretty straight translation of the Prolog rules above: either we can directly (de-)construct the list via conso, or we recurse. This version is broken in multiple ways, though. First of all, when you test it, the recursive call to insert-broken is not constrained enough wrt. the value of l, which triggers an infinite recursion. You need to put the recursive call behind the second call to conso (cf. the discussion of my inquiry on StackOverflow). However, there is another issue lurking, which you can see when comparing the results:

(defn insert-still-broken [x l nl]
  (conde
    [(conso x l nl)]
    [(fresh [h t]
       (conso h t l)
       (conso h l nl)
       (insert-still-broken x t l))]))

FAIL "Checking insert - Simple insert" at (logic_test.clj:11)
Expected: ((1 2) (2 1))
Actual: ((1 2))

FAIL "Checking insert - Simple insert" at (logic_test.clj:12)
Expected: ((1 2 3) (2 1 3) (2 3 1))
Actual: ((1 2 3))

As you can see, this version generates only a single result, inserting the element just in the first position, not in the other positions of l. The reason is that we are constraining the solution too much: by using l in the recursive call, we constrain the “result” (the value of the third argument) to the initial value of l. This is not what happens in the Prolog version, where the corresponding variable is just a temporary value generated in the recursive call. I.e., I fooled myself by basically running into a variable capture problem. So, the correct version of insert looks like this, introducing another fresh variable l1:

(defn insert [x l nl]
  (conde
    [(conso x l nl)]
    [(fresh [h t l1]
       (conso h t l)
       (conso h l1 nl)
       (insert x t l1))]))
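
With this version in place, we can mirror the Prolog queries from above (modulo the order in which core.logic delivers the results):

(run* [q] (insert 1 [2 3] q))
;; => ((1 2 3) (2 1 3) (2 3 1))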

However, the discussion on StackOverflow also pointed me to the matching predicates, which are shown, but not explained at all, in the examples section of the core.logic wiki. In particular, core.logic offers a defne macro which basically provides a pattern matching facility that is remarkably close to what Prolog provides wrt. argument matching. Consider the following version of the same predicate using defne:

(defne inserto [L X NL]
    ([L X (X . L)])
    ([(H . T) X (H . L1)]
        (inserto T X L1)))

defne will basically expand into a set of conde expressions, but will also generate fresh variables and matching/unification expressions as appropriate. If you compare this version with the Prolog version, it’s easy to see the parallels: in the second rule, the arguments bound to the parameter list L X NL will be unified with [(H . T) X (H . L1)] (note that inserto uses the same parameter order as the Prolog version), thereby decomposing any sequence given as L into head H and tail T; this is basically the same as (conso H T L).

Having covered all those nitty-gritty details of insert, understanding permuto/3 should be straight-forward:

(defne permuto3 [I O L*]
  ([nil nil nil])
  ([() () ()])
  ([(X . Xs) Ys1 (_ . Bound)]
     (fresh [Ys]
       (permuto3 Xs Ys Bound)
       (inserto Ys X Ys1))))

We have two (empty) base cases and a recursive clause again. We’re decomposing the input I into (X . Xs) and unifying O (typically the query variable) with Ys1. Using a fresh new variable, we recurse with the sublist Xs to produce the permutations of the sublist, eventually inserting X into them. For reference, this is what the non-matching version looks like, which makes the argument unification and the value decomposition much more obvious:

(defn permutation
  ([xs ys] (permutation xs ys ys))
  ([xl yl res]
     (conde
       [(== xl '()) (== yl '()) (== res '())]
       [(== xl nil) (== yl nil) (== res nil)]
       [(fresh [_ x xs ys bound]
          (conso x xs xl)
          (permutation xs ys bound)
          (conso _ bound res)
          (insert x ys yl))])))

We can finally wrap this permutation relation into the same surrounding code we used for the traditional solution to this anagram kata and compute nearly the same results (the order will differ).
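
A sketch of that wrapping (my code, assuming the relations defined above; find-anagrams from the traditional solution can be reused unchanged):

;; run* as referred from clojure.core.logic above
(defn generate-anagrams [word]
  (map (partial apply str)
       (run* [q] (permutation (seq word) q))))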

Wrapping up, this kata was actually quite hard to solve and took quite a while. I spent too much time trying to find the traditional solution myself before focusing on translating it to Clojure. And then it also took me way more time than I had imagined to get into core.logic, which could use quite a bit more documentation besides the primer on the basics. Anyway, core.logic looks like a very nice addition to the Clojure universe.


Fresh strawberries -- web programming in Clojure

I recently started working on a web application in Clojure. This being my first contact with quite a lot of tools / libraries for web programming in Clojure, I ran into quite some stuff that was not obvious. First some background: it’s a very small application which will have only a very small number of users, so we don’t need a highly interactive, reactive UI. There is no need for a single-page application; a classical web application where the server serves up a small set of pages and holds all the logic is just fine.

Starting out with ring, lein-ring and compojure was pretty straight-forward; going beyond the easy stuff, not so much. One problematic issue I ran into was feeding some configuration to Jetty (JNDI resources), as lein-ring does not seem to offer any way of reading an existing jetty-web.xml file. This implies that you’re basically down to using and configuring Jetty as described in the embedding Jetty examples. This snippet is what I’m using to declare my database connection (you’ll need dependencies on clj-dbcp, org.mortbay.jetty/jetty, org.mortbay.jetty/jetty-plus, org.mortbay.jetty/jetty-naming and javax.servlet in your project.clj):

(ns myapp.jetty-config
  (:require [clj-dbcp.core :refer [make-datasource]])
  (:import (org.mortbay.jetty.plus.naming Resource)
           (java.util Hashtable)
           (javax.naming InitialContext Context)))

(defn make-my-datasource []
  (make-datasource
   {:adapter :mysql :host 'localhost :database 'mydb
    :username "myuser" :password "mypw"}))

(defn setup-jetty-context []
  (let [ht (Hashtable.)]
    (.put ht Context/INITIAL_CONTEXT_FACTORY "org.mortbay.naming.InitialContextFactory")
    (.put ht Context/PROVIDER_URL "org.mortbay.naming")
    (InitialContext. ht)
    (Resource. "java:comp/env/jdbc/etrans" (make-my-datasource))))

This being solved, during the course of the project I aimed at running the application on Tomcat. Figuring out how to develop with Jetty and make it run on Tomcat also took quite a while. First of all, you don’t want the above Jetty configuration and dependencies dragged in when running lein ring war; instead you want to rely on the “normal” persistence.xml mechanism to define persistence units. The key to the first issue is making use of leiningen’s profiles in combination with the :init keyword of the configuration options for lein ring. I.e., I removed all Jetty configuration and dependencies from the top-level ring configuration in my project.clj and added a :dev profile which has the required additional dependencies mentioned above, plus a :ring section with an :init key pointing to a setup function which calls the required Jetty configuration functions:

(defproject myproject "0.1.0-SNAPSHOT"
  :description "A small web application"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [org.clojure/tools.logging "0.2.6"]
                 ;; .. other dependencies, but nothing Jetty related
                 ]
  :plugins [[lein-ring "0.8.8"]]
  :ring {:handler myapp.handler/app
         :web-xml "war-resources/web.xml"}
  :profiles
  {:dev {:dependencies [[javax.servlet/servlet-api "2.5"]
                        [org.mortbay.jetty/jetty "6.1.23"]
                        ;; etc.
                        ]
         :ring {:handler myapp.handler/app
                :nrepl {:start? true}
                :init myapp.jetty-config/setup-jetty}}})

Making use of the persistence.xml file turned out to be pretty easy: you can place “normal” Tomcat configuration files in the directory specified by the :war-resources key, e.g. you can put your persistence.xml in war-resources/META-INF/. You can also specify the location of your web.xml explicitly: I did an initial lein ring war run, had it generate an initial web.xml and modified that later as needed. Another issue was that I wanted to run the application under Tomcat in parallel with other applications, i.e. with its own named web application context, which is simple enough. But running under Tomcat as a non-exclusive webapp (i.e. not installed as ROOT.war) forces you to use relative links. But how do you find out the right prefix for servlet-global resources? Useful info gets added to your request under :context, :path-info, :servlet-context and :servlet-context-path, according to ring.util.servlet. I used this wrapper to figure out what I needed:

(defn wrap-show-request-context [handler]
  (fn [request]
    (when-let [context (:context request)]
      (logging/info (str "Request with context " context)))
    (when-let [pathinfo (:path-info request)]
      (logging/info (str "Request with path-info " pathinfo)))
    (when-let [servlet-context (:servlet-context request)]
      (logging/info (str "Request with servlet-context " servlet-context)))
    (when-let [servlet-context-path (:servlet-context-path request)]
      (logging/info (str "Request with servlet-context-path " servlet-context-path)))
    (-> request
        handler)))

Finally, I had a dependency on a local library which I resolve via the localrepo extension to leiningen. However, lein-ring knows nothing about the localrepo extension, hence lein ring uberwar will not include dependencies that only exist in your ~/.m2/repository/. The workaround is to make a WEB-INF/lib directory and place all required jars in it, although figuring out all the jars you need is a very tedious process.

Given the small scale of the web application, I decided very early on that hiccup should fit my small requirements, although I wanted to use bootstrap, too. The one problem I ran into is a typical functional programming issue: information becomes available at some point which is required exclusively in some other function way down the chain of callers. The concrete problem was using the right servlet-context in the main-page layout template to request the right resources. Handing the parameter (or the request itself) down to the view seems wrong, because then I would basically make use of internal knowledge about the inner workings of my pages. I finally extended the wrapper above to bind a dynamic variable:

(def ^:dynamic *app-context* nil)

(defn wrap-context [handler]
  (fn [request]
    ;; ... logging code elided ...
    (binding [*app-context* (str (:context request) "/")]
      (-> request
          handler))))

I then refer/use this variable during the page setup. I decided against using an atom or some such, because I specifically don’t want other requests/threads to see or modify the current value; this is intended to be a request-local (yet globally accessible) variable. Not sure whether this is the best way to go, though; I guess there are more elegant solutions.
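
For illustration, a sketch of how the variable is then used in a layout (the view code is made up):

;; assuming (require '[hiccup.page :refer [html5 include-css]])
(defn layout [title & body]
  (html5
   [:head
    [:title title]
    (include-css (str *app-context* "css/bootstrap.min.css"))]
   [:body body]))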

I had seen a talk on liberator at EuroClojure 2013, which promises a pretty declarative way of sorting out how to react to web requests; that looked nice enough to give it a try. Although things can get somewhat complex rather quickly, I still like it. One such issue is that the interaction between liberator’s context argument and compojure’s parameters isn’t always obvious. Basically, all handlers need to take a context argument (possibly ignored), as is detailed somewhat more in the documentation of the execution model. Resource definitions can have arguments as well, however, which are not related to the context; they need to be provided when your route invokes the resource. I.e., check the following resource definition:

(defresource someresource [someid]
  :available-media-types ["text/html" "application/json"]
  :method-allowed? (request-method-in :get)
  :exists? (fn [context]
             {:something (find-something someid)})
  :handle-ok (fn [context]
               (myview (get context :something))))

(defroutes myapp
  (GET "/something/:someid" [someid] (someresource someid)))

The route definition uses compojure’s parameter extraction to hand someid over to the resource. The context argument to the decision function :exists? and to the handler :handle-ok doesn’t contain it, but you can access it via the resource’s someid parameter. Another thing which had me scratching my head was how to do a redirect on a GET request, which I wanted as the result of logging out of the application. You have to combine liberator.representation/ring-response with ring.util.response/redirect. moved-permanently? and handle-moved-permanently or handle-see-other seem not to be intended for this; at least I was not able to use them for this purpose.

(defresource logout
  :available-media-types ["text/html"]
  :method-allowed? (request-method-in :get)
  :handle-ok (fn [context]
               (ring-response
                (response/redirect
                 (str (get-in context [:request :context]) "/")))))

I also have resources which handle GET and POST/PUT requests (the same resource). You can combine this, and in principle it’s as simple as it sounds, but figuring out which handler gets called when is not: for example, is handle-ok called after put! created something? Liberator does come with quite some documentation and its tracing facility is really helpful, but it really takes some time getting into it. For instance, I don’t think it’s documented that the list of available media-types determines the default media-type generated: if your request doesn’t specify that you prefer application/json and your available media-types list has text/html as the first element, you’ll get text/html. I ran into this with my unit tests: you may need to use ring.mock.request/header to set the “accept” header (and don’t get fooled by ring.mock.request/content-type). I think, in the middle to long term, my biggest concern is the question of how stable this API is. It’s currently at 0.10, which leaves quite a lot of numbers before implying any notion of stability.
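
For illustration, a sketch of such a test (the route and handler names are made up, midje as above):

;; assuming (require '[ring.mock.request :as mock])
(fact "users are served as JSON when the client asks for it"
  (let [response (app (-> (mock/request :get "/users")
                          (mock/header "accept" "application/json")))]
    (:status response) => 200))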

Then, I also used friend for the authentication, which is an interesting library whose current version of 0.2 again makes me wonder about the stability of its API. I’m currently staying very close to the bare minimum of features and so far haven’t really run into any bigger technical issues. There is one thing that is quite apparent, though: while it provides role-based authorization, friend is currently missing any idea of access rights, i.e. it’s lacking a connection from roles to rights. Hence you guard functionality with calls to authorize, not with a declaration of the required rights. Friend is also seriously lacking documentation, e.g. about *identity* or current-authentication, which you might want to use to determine data about the currently logged-in user. I also had issues with testing my code after introducing friend, because it’s not at all obvious how to provide the needed authorization from test code. I ended up using midje’s mocking machinery (i.e. provided) to mock authorized?, which underlies friend’s authorize macro. Note that provided assumes that the one checkable directly above it is actually calling the mocked code; you can’t have provided mock for two or more checkables or inside some let construct (background prerequisites can help with this).
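
A sketch of that mocking (the route is made up, and I’m assuming friend’s authorized? is called with the roles and the identity):

(fact "the guarded page is served when authorization succeeds"
  (:status (app (mock/request :get "/admin"))) => 200
  (provided
    (cemerick.friend/authorized? anything anything) => true))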

Finally, I used java-jdbc for accessing the database. Again, I’m wondering how stable this API is: there were large changes with 0.3, which broke compatibility with at least one of the SQL DSLs mentioned on the project page. This includes splitting out the old SQL DSL into its own module, java-jdbc/dsl. And alas, documentation and examples are also somewhat lacking; at least the unit tests were useful to figure things out.

So, what have I seen so far? I guess a lot of interesting technology, a notable lack of satisfying documentation, and APIs in widely varying degrees of stability.

ObTitle: Franz Ferdinand, from “Right thoughts, right words, right action”

