Jan 19

In the last article on the right size of a user story, I discussed some aspects that contribute to finding an answer that works in your context: clarity, functionality and effort. In this post, I want to discuss the question of how you can move from a big feature to smaller stories. The assumption here is that we already understand the requirements “good enough” (i.e. maybe captured in a use case document), so that we have already reached a fairly clear picture of what needs to be done.

[Image: Špalek na štípání (a wood-splitting block), by Chmee2, via Wikimedia Commons]

There is quite a bit of literature on these questions, the book “Agile Software Requirements” by Dean Leffingwell being one particular source of input (the SAFe page on stories lists an overview). When asked what I would recommend, the usual consultant answer would be “it depends”, but I think it is safe to come up with some rules of thumb.

0. Don’t split

The bottom line for story splitting is that you should avoid it if you can. Having a lot of very small stories comes at a price — loss of overview is an important one, and if taken too far, it can also become hard to reason about the business value of a story. Formulating the “… in order to” part of the classical user story template (“As a <role>, I want <action> in order to reach <my goal>”) is often a good sanity check: if you have a hard time finding the wording for “my goal”, that is usually a sign that you’ve split up the requirement too much. There is one classical mistake that easily results in this situation: if you break down a requirement by components or systems, it is highly likely that you will end up with problems formulating a reasonable business goal — most probably you have split off an implementation task (cf. below).

Assuming the main reason to split is to fit the work into one iteration (or one week of the iteration, cf. the previous article), if your story is already “small enough” in terms of effort, then there is not much point in going for even smaller stories. You will run into the occasional discussion that a story should ideally be finished (“moved to done”) within one day, but it should be up to the team to decide whether pushing for this really brings any benefits like additional helpful feedback. More often than not, the team might just end up gaming numbers, e.g. the number of stories they can finish. There is an obvious tension here between the benefits of small stories (delivering working code early, enabling quick feedback) and unnecessary overhead (splitting in a meaningful way; documenting, managing and tracking the bigger number of stories), which needs to be balanced according to the situation at hand, but try hard to avoid splitting into too tiny details.

The other question is: do you need to split the requirement now? If the requirement does not need to be elaborated yet, because it is not planned for the current or the next iteration, there is no need to split it now — the requirement could change over time anyway, so prematurely driving for details might actually cause more problems than it helps. For instance, when a bigger feature involves multiple workflow steps and you split the feature into stories that each implement one step at a time, you might have a hard time tying these pieces together in your backlog if you later want to revamp the steps in their entirety.

1. Do the minimal thing first

If you think you need to split a requirement, try to identify the “minimal viable functionality” as the first story. This is the minimum functionality that the system needs to provide to bring some (potentially rather small) benefit to the user. This functionality becomes the first story, whereas all other aspects of the requirement are addressed (as extensions) in separate stories.

[Image: ballpoint pen parts. Which parts provide the minimal usable functionality? Photo assumed to be by Pavel Krok, via Wikimedia Commons]

As an example, let’s again think about user preferences for some system: the minimum useful functionality for a user probably consists of setting the preferences, saving, loading and applying them. Simply counting these steps shows that even this comprises at least four different functionalities. Exporting my preferences to a file and importing them on a separate system, however, is certainly not “minimal”. Depending on the intended users, even providing an inline editor to set the preferences might not be part of the minimum functionality. These functionalities could become separate user stories (see the sketch below). The drawback of this approach is that the resulting stories are not truly independent any more: you cannot prioritize exporting and importing preferences to be done before setting, saving and loading them.
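To make this concrete, here is a minimal sketch (in TypeScript, with made-up names) of how the first story’s scope could be delineated; everything left as a comment would be split off into separate extension stories:

```typescript
// Hypothetical sketch of the minimal viable functionality for user
// preferences: only setting, saving, loading and applying belong to story 1.
interface Preferences {
  theme: "light" | "dark";
  language: string;
}

interface PreferencesService {
  set(prefs: Preferences): void;    // story 1: setting the preferences
  save(): Promise<void>;            // story 1: saving them
  load(): Promise<Preferences>;     // story 1: loading them
  apply(prefs: Preferences): void;  // story 1: applying them
  // Deliberately not part of the minimal story, split off as extensions:
  // exportToFile(path: string): Promise<void>;
  // importFromFile(path: string): Promise<Preferences>;
}
```

Note how the extension stories build on the first one — which is exactly the loss of independence mentioned above.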

Now, of course, this minimal viable functionality might still amount to a lot of work and be “too big” (in implementation effort, not fitting into one iteration). Here all the ideas from Dean Leffingwell are applicable, in particular trying to split by persona, by business rule variations, by data entry methods, by data variations or by workflow steps. In our example, setting and saving preferences as well as loading and applying them could be two different workflow steps (and hence different stories). However, this would probably result in stories that are too small according to rule of thumb 0 above. When splitting by workflow steps, try to have a minimal complete workflow in one story, or at least plan to have this minimal workflow implemented in one iteration.

2. Optimize later

A good guiding principle in programming is to avoid premature optimization. While the principle mostly concerns optimizing the performance of computations, we can also use it as a rule of thumb for splitting requirements. The idea is to take the simplest approach that could possibly work. This has a rather broad applicability: it covers technical aspects (architecture), but also usability and other so-called quality or non-functional requirements. After implementing this first rough version, use the feedback to check whether it is “good enough” — e.g., whether performance, usability or security requirements are already met. If not, optimize these “-ilities” later on.

Some people might find this suggestion offensive. Security, for instance, is one quality that many claim cannot be designed into a system as an afterthought. But that is not what I’m suggesting. Instead, I’m suggesting to formulate these “-ilities” requirements separately, to isolate the functional core of the big requirement and to implement the optimizations later. This is quite independent of the need to design for them — whether you design them upfront (this might be part of your constraints in a “Definition of ready”) or within a sprint is a different question, but of course you need to think about the qualities that the functionality needs to have. The idea here is simple: if your story is too big, put the proverbial lipstick on the pig in the next round. If you are afraid that your product owner will not care about security and will lower the priority of the security stories to the point that they are never implemented, your product has a bigger problem anyway (and probably security problems already), or the usage scenario that the product owner has in mind differs from your expectation. In both cases, spelling out the required quality (and the accompanying work and effort) in a separate story could even help spawn a discussion.

That being said, breaking down your stories by functionality should be preferred. This is especially true if your stakeholders have expectations regarding these qualities — if you miss out on them, you’ll likely receive feedback to that effect. At least you’ll then be able to point to another story that addresses it. Again, finding a good balance is important, but not always easy.

3. Avoid splitting by technical components

[Image: a slice of pepperoni pizza. Who would prefer the naked pizza crust over a slice of the real thing? Photo by BrokenSphere, via Wikimedia Commons]

Technical user stories that are intended to cover work specific to some component are an anti-pattern. Typically they are brought up by people who are still stuck in the old trap of splitting work by responsibility for components rather than features (cf. the discussion of features and components in SAFe). Don’t split by technical components unless the component can deliver business value / user functionality on its own.

Typically you cannot reasonably test some functionality for one component on its own (e.g. the UI alone, not connected to the database). Hence, you will also not be able to get any feedback on the implementation of the story. E.g., saving and loading your preferences from the database is certainly a reasonable technical task, but how would your stakeholder test this without the UI being implemented? How would your product owner find out that there is a minor problem in the saved data if the only way to access it requires sending a web service request or proficiency in your programming language of choice? So, even though you might have finished the work on the component completely, you will not get the quick feedback that small stories are meant to enable. And, of course, the component alone will not be delivered as a result of your iteration, so it’s not adding any business value at this point either.

This rule of thumb can also be debated. There are situations where one component can already provide value — e.g. a CMS where you extend some functionality in the authoring environment first, so the editors can start using it early. Even if the readers cannot yet see the result of the new features, some business value might already come from easing the editors’ work. This case should not be confused with a different line of reasoning: for the UX designer, for instance, it might make a lot of sense to get feedback on a UI preview that is not connected to the database, and your DB admin might already point out problems with the schema, etc. However, that is inter-team feedback, not customer feedback. In the end, the measure of progress in agile projects is delivering working software to the customer — and from a user perspective, both of these examples fall short, as essential functionality is missing. Instead, go back to rule 1 and try to identify the minimal complete slice of functionality that delivers value to the customer.

Task splitting and Spikes

There is, of course, also a time when breaking down a story into the technical tasks needed to finish it is exactly the right thing to do: when the story is ready and due to be worked on given its priority, typically as part of a Sprint Planning 2 meeting in a Scrum setup. Especially for engineers it is easy to fall into the trap of confusing story splitting (requirement work upfront) with task splitting (organization of concrete work within a sprint). A focus on technological considerations (and often also personal priorities) can quickly lead to splitting work in a way that is developer-friendly but does not deliver results to customers quickly. Let’s discuss some aspects of how task splitting influences work organization.

Stating the obvious: splitting technical work into tasks requires technical understanding, both of the system in which the requirement is implemented and of the technologies involved. But the team does not always understand the system or the technology well enough. In this case, the team can decide to spend some time on research to figure out what exactly needs to be done. Don’t define this as a “research implementation” task and plan to complete the entire story; you likely have no idea how much work completing everything takes anyway. Instead, treat the research work separately — this is usually called a spike. As the goal of a spike is to learn something, not everything, a strict timebox should be defined (usually not exceeding one or two days), after which the spike is called off, regardless of how much insight was generated. Spikes are hence usually not estimated in story points but reduce the capacity of the team, so you should also try not to have more than one or two spikes per iteration.

[Image: felling snags on a fire line around the Coquille CCC camp, Siskiyou National Forest. Work in parallel to get it done. Photo by OSU Special Collections & Archives, via Wikimedia Commons]

Splitting a story into tasks follows different rules: for instance, splitting by components is usually a good idea, especially if your team does not consist only of full-stack developers. And even with full-stack developers, you need to start somewhere, which good software architecture practice would suggest is clarifying the interfaces between the various components. Which brings us to one big issue of task splitting: identifying dependencies between tasks. These usually follow directly from the organization of the components. Of course, as with user stories, you should strive to avoid dependencies in the first place (and use dependency injection and other mechanisms to decouple them). This is not only good engineering practice, it also allows you to parallelize the resulting work (i.e. have multiple people work on the same story). Teams sometimes split their work such that only one developer is actively working on a story that consists of multiple tasks. This usually leads to a situation where many stories are started but all take a long while to be completed. Instead, try to distribute work between multiple people: the frontend ninja can start his JS foo while the backend expert whacks the database in parallel, modifying the previously defined contract as required during the work (see the sketch below). If the team can work on multiple tasks of a story in parallel, this not only encourages collaboration and team spirit, it also ensures that the highest-priority work items are finished first.
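As an illustration (a minimal sketch with hypothetical names, not a prescription), such a previously defined contract can be as small as a shared interface that both tasks code against, with the dependency injected into the UI:

```typescript
// Hypothetical contract agreed on during task splitting; the frontend and
// backend tasks of the same story can now proceed in parallel.
interface PreferencesStore {
  save(userId: string, prefs: Record<string, string>): Promise<void>;
  load(userId: string): Promise<Record<string, string>>;
}

// Stand-in used by the frontend task until the database-backed
// implementation from the backend task lands.
class InMemoryPreferencesStore implements PreferencesStore {
  private data = new Map<string, Record<string, string>>();
  async save(userId: string, prefs: Record<string, string>): Promise<void> {
    this.data.set(userId, prefs);
  }
  async load(userId: string): Promise<Record<string, string>> {
    return this.data.get(userId) ?? {};
  }
}

// The UI receives the store via constructor injection, so swapping in the
// real database-backed implementation later requires no UI changes.
class PreferencesPanel {
  constructor(private store: PreferencesStore) {}
  async submit(userId: string, prefs: Record<string, string>): Promise<void> {
    await this.store.save(userId, prefs);
  }
}
```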

But the tasks to complete a story usually also involve work besides coding, e.g. UX work or manual exploratory testing. More often than not, this work does not happen as “on the spot” as the rest; instead, it is a quite common anti-pattern to treat an iteration like a mini waterfall: UX (of everything) happens at the start, then development, and testing (of everything) happens at the end. This can lead to a lot of unnecessary work done upfront (UX for stories which the team doesn’t actually get around to) or too late (testing starts so late it cannot be completed within the sprint). Again, try to do the work only when it is necessary, and try to do more of it in parallel.

Posted by Holger Schauer

Jan 3

One of the recurrent puzzles with regard to requirements management that I keep running into is the question of what the “right” size of a requirement would be. It often starts with some people coming up with very tiny and very specific requirements, and then some very huge, completely fuzzy ones to go along with them. Then they have heard or read somewhere that in agile setups requirements should be small, and wonder what this means and how to get there.

[Cartoon: “Advanced Scrum” by Oliver Widder, licensed under CC BY 3.0, from Geek&Poke]
These huge requirements are often called “epics”: an epic is a requirement that is either felt to be too “large” or is too “fuzzy”, and one way to avoid never-ending dramas is to split it into smaller user stories. Actually, here we already have two very distinct aspects: on the one hand, a requirements engineering question of how to move from unclear feature ideas to concrete, detailed requirements; on the other, a requirements management question of how to organize requirements such that they are appropriately prepared for implementation. I discussed the requirements engineering part in an article on agile conception already, so I’ll focus on the latter part here. The size question is actually two-fold: What is a reasonably small story? And how can we break down a big feature into several smaller stories? I’ll address the first one below and the second in an upcoming article on splitting requirements.

Why small stories?

Let’s first explore the reasons to go for smaller stories. As mentioned above, there is often a felt need to come up with “small” user stories, whatever “small” might mean. This is often even part of making a requirement “ready” for the team to start working on it (cf. also how a felt need for “small” stories can hint at an immature team).

The main reason for moving from “big” feature requirements to “small” user stories typically boils down to the team wanting to ensure that the work they take on can be finished within one iteration. From the agile point of view, that’s a good idea: we want to deliver value in terms of working software as quickly as possible. But even if your delivery is not exactly continuous and also doesn’t always happen at the end of the iteration, there is a lot of value in small stories: developers, UX, product owner and stakeholders should give and take feedback on the completed work to ensure the development is on the right track, both in terms of verification (“Did we make it right?”) and validation (“Did we make the right thing?”). The earlier we learn about problems, the easier it is to make adjustments. Arguably, small stories can be implemented more quickly and thereby help to get the necessary feedback quickly. Also, in case adjustments need to be made, changing a small story is hopefully easy (compared to changing a big requirement). Small user stories also help planning: for one thing, it is usually far easier to come up with an accurate and precise estimate if you’ve broken down a big problem (“epic”) into tiny pieces.

What is a reasonably small story?

This one has an easy answer: any requirement that the team feels can surely be finished within one sprint is reasonably small.

Does size equal effort?

Unfortunately, this answer isn’t particularly useful. It simply takes the main reason for small stories outlined above as a minimal acceptance criterion. If we take a step back, the size of a requirement here seems to be directly bound to the effort it takes to implement it. It looks like a continuum: from “no implementation effort” to “takes the entire team the entire sprint to implement”.

But a story that takes the team the entire sprint to implement is almost certainly still too big. Think about it from a risk perspective: let’s assume your estimate is just a bit wrong (as in “too low”), e.g. you discover some previously unknown technical problem during the implementation. In this case you will not be able to finish the story within the sprint — from a work management perspective, you clearly want to avoid this situation, as you’ll run into all sorts of unwanted discussions (from relevant ones like when you get feedback to irrelevant management ones like what can be done to help the team deliver on time or to come up with better estimates).

[Figure 2: effect of story size on the burn-up chart]
Some teams take a very harsh stance on this and demand that a “ready” story should not take longer than one day (of implementation and testing). I don’t believe in such strict rules but think a good upper bound is “no longer than one week”. With story point estimation this recommendation might look strange, but the simple idea is: if you know your team delivers ca. 20 points per two-week iteration, a 13-point story is probably too big, whereas an 8-point story could be small enough.

Of course, you can use the same reasoning of risk and project management to demand even smaller stories: e.g., with a team velocity of 20 points and three stories with 8 story points each, how would you plan your next sprint? Commit to two stories (summing up to 16 points) and leave one as a stretch goal? If you start working on the stretch goal but are not able to finish it, your efforts and results will not be visible (for the stretch goal). If, instead, you could break down the stretch goal into a 3-point and a 5-point story, you might be able to commit to more points (see the sketch below). Also, your burn-up chart might look a lot smoother with smaller stories than with big ones, cf. figure 2. But beware: what we really want is quick feedback and delivery of valuable increments. If smaller stories help you with this, great; if not, you’re likely just gaming numbers or making management happy (cf. also Johanna Rothman on the value of burndown and burnup).
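To make the arithmetic explicit, here is a small sketch of that commitment logic (the function name and numbers are made up for illustration):

```typescript
// Greedily commit stories (highest priority first) while they still fit
// into the remaining velocity of the sprint.
function planSprint(velocity: number, storyPoints: number[]): number[] {
  const committed: number[] = [];
  let remaining = velocity;
  for (const points of storyPoints) {
    if (points <= remaining) {
      committed.push(points);
      remaining -= points;
    }
  }
  return committed;
}

console.log(planSprint(20, [8, 8, 8]));    // [8, 8]: 16 points, 4 unused
console.log(planSprint(20, [8, 8, 3, 5])); // [8, 8, 3]: 19 points committed
```

With the stretch goal split into a 3-point and a 5-point story, the team can commit 19 of its 20 points instead of 16, with the remaining 5-point story as a much smaller stretch goal.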

Consider functionality, too

However, the size of a requirement could also be addressed differently. Boris Gloger (e.g. in his German book on agile estimation) argues that estimates should not be based on effort or risk but on functionality. From this perspective, the smallest reasonable story is one whose implementation will enable one functionality.

This looks somewhat circular, as we have now moved from one fuzzy term (size) to another (functionality), in particular as in today’s complex systems “one” functionality could be composed of a lot of different pieces playing together nicely. But requirements work is all about defining functionality in terms of specific, detailed user activities or system behaviors, and as such functionality is arguably easier to grasp than some number of story points.

It is usually best to look at functionality from the user’s point of view, because this will most closely resemble the business requirement anyway. But note that, without going further into architecture considerations, you might want to define functionality per system / component if you can define and observe the expected behavior per component. The “observe” part is important, as it is implemented, observable behavior that you can give feedback on. For the very same reason of getting feedback from humans, defining functionality from the user perspective is preferable.

However, even the smallest story in terms of functionality might not be the most reasonable choice in your context. As a made-up example, consider a feature where you want to save some document in some system. An additional feature could be to save a document under a different name (“save as”). From the implementation point of view, the difference between the functionalities might be so small that it’s very easy to implement both in one go, so the team might decide that splitting off the “save as” story is not worth it.

Another example could be where one user-visible functionality (e.g. sending an order) requires the interaction of two systems (e.g. sending the order from some web application to some ERP system). Considering just one system part (e.g. the web application’s sending part) as a user story might not be reasonable — it cannot be tested alone, hence doesn’t allow for feedback, and also doesn’t provide any business value without the other system part. I believe this is where the entire team needs to give its input and consider not only the system functionalities but also (all) the tasks and what the minimum is that makes a product increment valuable.

So now we have two different boundaries: whereas size in terms of functionality provides us with a lower boundary for a requirement (“implement at least one user-visible functionality”), size in terms of development effort provides an upper boundary (“can be implemented by the team in at most one week of an iteration”).

Clarity is required

Now, I believe there is a third “size” of a requirement, which is related to clarity or level of understanding. Recall that at the start of many projects (or product phases) you’ll have a lot of very tiny and detailed requirements and some “bigger” requirements which have not been explored (or engineered) yet (the latter being what I would call “feature ideas”). An example of the former would be a user story along the lines of “As an editor, I need a save-as button right next to the save button so that I can give a new name under which to save my document”, whereas the latter would be something along the lines of “As a user, I want to save my preferences”. Quite obviously, the former is quite “small” and “clear”: it already says that a button is needed and what the expected behavior is. “User preferences”, instead, is just a label which might cover a huge list of things that need to be made customizable by the user, and unless we break this list down into details, this requirement needs to be considered “big”. Another way of looking at this “big” story is that it is certainly not “ready” for implementation, so it is surely not a reasonably small story. But even for the much clearer “save-as” story, there might be some lack of clarity about how to provide the filename or about other aspects (e.g. technical issues). Again, we obviously have a continuum, between very clear and completely unclear.

Unfortunately, clarity alone is not enough to decide whether a requirement is “small” enough: you can get very clear requirements that consist of lots of functionalities and are way too big to be implemented within one iteration (never mind huge use cases or detailed specifications). However, from the perspective of “can we start implementing this requirement”, clarity about what needs to be implemented is the key ingredient that distinguishes a “ready” requirement from the others. Clarity is a prerequisite to breaking a requirement down into different functionalities and to coming up with a reasonable estimate (although it’s often also true that trying to break a big, unclear requirement into smaller bits and pieces in order to come up with good estimates will help create more clarity). Turning this around: even if you have a story that is reasonably small in terms of effort, if the team raises concerns about the certainty of the estimates, it is probably not sure whether it understands the functionalities that need to be implemented, and hence the story is probably still not “ready for implementation”.

Summing up, this leaves us with three important aspects influencing the “right” size of a story: clarity, functionality and effort. In the next article, I’ll explore some rules of thumb to split a big requirement into smaller parts.

Posted by Holger Schauer

Jul 9

I’m happy that there are apparently still people writing articles about Scrum that are not ranting and venting about how agile sucks: Colin Higgins explains that Scrum is not our saviour. It basically boils down to the old “no silver bullet” saying:

No, SCRUM, or any of the other agile methodologies will not just magically fix your throughput problems after you adopt every facet of them religiously. These methodologies will just show you how badly your development practices suck.

I agree completely. But there are some agile development practices and tools which can help you. Go look up extreme programming (XP) and what it urged developers to do. CI/CD, TDD and pair programming are by now widely accepted development practices which can help, because they aim at surfacing problems way before they end up in production.

However, there is a boundary to what these tools alone can offer. A lack of knowledge of the technologies or a general lack of problem solving skills are fundamental issues which you can overcome only by learning and gaining experience, if at all. Another quote from the linked article:

Agile process alone won’t fix your development problems, you need technical excellence for successful projects. The two are necessary for a continually successful organization.

I would agree in principle, but there are two important issues here:

1) “A fool with a tool is still a fool” and “technical excellence” are the two extreme ends of a large scale. It’s the same as with car driving: most developers like to see themselves among the top 5% of developers, but as we all know there is not enough room for everyone within the top 5%. Still, too many developers shrug off the call to technical excellence, because they believe they are already great or because they have no idea what technical excellence actually means. For these developers, there is always someone else to blame.

2) Requiring the proverbial “rock star developer” is — for reasons discussed above — in my opinion not a viable option. In other words, while Scrum might not be a saviour, we’re not doing anybody a favor if we instead declare “technical excellence” as the silver bullet that can lead to success. Yes, we should strive for technical excellence and do our best to improve our capabilities (technical and otherwise). This is what the article rightfully demands of developers: do the best you can and try to continuously improve. But there still needs to be a middle ground, where it’s possible to run projects / products successfully with the people you currently have. Agile helps with that because it makes clear what is realistically possible and it aims for continuous improvement, e.g. with retrospectives. To repeat the quote from above: “These methodologies will just show you how badly your development practices suck.” That’s a great starting point. It’s a learning opportunity. Go use it.

Note that this is important not only on a personal level, but also on an organizational level: Scrum Masters should try to change the company culture to value learning. This requires room for experiments and for failures, where room translates to time and safety (i.e. support). On the sprint level, this might translate to using spikes, but on an organizational level it might involve things like prototypes, split testing and other ideas from the lean startup camp.

Posted by Holger Schauer

Mar 2

I recently did some coaching on how agile teams work, what roles there are and what their respective responsibilities are. It’s probably no surprise that one of the main topics was self-organization: what it implies for how agile teams work, how to get there and how to solve the various puzzles (like distributed teams) that occur when working with bigger team setups. You’ll find the slides below. I was asked to talk about these issues because the team was having trouble understanding what was expected of them and how to organize themselves. In my experience, that’s a quite common situation to run into when the team members (or at least some of them) have little or no experience working in an agile setup. But even when you already have some experience, it might happen that the team, or at least important members of it, lose the understanding of how to work together. And of course, that’s much more likely to occur when you interrupt the team working together, e.g. by putting some people onto different jobs, even if only for a short time. I guess this is what people outside of a team (i.e. managers) often forget: teams are usually only able to perform well over a longer time if they can focus on “it”, i.e. working on their tasks and on their understanding of how to work together as a team.

Anyway, I started by thinking about why people are always so eager to talk about roles and responsibilities. I came to the conclusion that it’s mainly a matter of safety, or the lack of it. That’s why you have these discussions mainly at the beginning of a collaboration. The important point that became apparent to me is that roles and responsibilities don’t matter in a performing team (performing in the sense of Tuckman’s model). And the corollary to that is: when you don’t know what is expected of you (as a team member), your team is really just a group of people that doesn’t know how to work together as a team (yet). Let me elaborate: I believe that for a group of people to work together successfully, it’s most vital to have a common understanding of what needs to be done, and to just do it (as the old sports company slogan goes). Of course, you will need somebody who will just do it, but does this imply that you have to have a formal org chart upfront that tells you who has which role? What if you happen upon things that need to be done that you did not think about when compiling your org chart? You will need somebody to step up anyway, so why not simply leave it at “do what needs to be done”? Even better if the whole team has a common understanding of how to do things (cf. for example a commonly agreed-upon definition of done).

My main (and very obvious) advice for solving the issue of “I don’t know what is expected of me” is to compile a list of things that need to be worked on and to get started. Whether this “list of things” is a backlog or just a todo list, whether it’s on a physical board, in an Excel file or in some other system, is not as important as actually working with the list: revisit it every so often to update it, prioritize items, move things around, augment and transform it as needed. And of course, remove items that are actually done. But do all this in a way that is clearly visible all the time to all team members. If you modify this list openly so that everyone can participate, everybody will learn about what is going on, and you can have discussions that help team members understand why something needs to be done first and which things are problematic. And, last but not least, people will communicate about helping each other out, give input on difficult questions and work together.

The other important ingredient for me is giving and receiving feedback. If you’re working in a Scrum-like fashion, review and retrospective meetings at the end of an iteration are the ideal places for this. I believe that the awkwardness that retrospectives in particular hold for people new to agile is an indicator of a company culture, team setup or personal attitude that does not honour open discussion and feedback. Retrospectives make people feel uncomfortable especially if they are feeling insecure. However, without feedback it’s difficult to learn, so organizations should in my opinion strive to provide safe environments for their people. A simple example is to have the retrospective facilitator use the “setting the stage” phase to ensure that people feel safe (cf. this article on creating safe environments for retrospectives), maybe by having the team come up with rules for safe communication on their own or by simply reminding them of already existing rules. However, the important point for me is that team members should learn to give and take feedback in any situation. No, I don’t mean that it’s appropriate to interrupt your colleague every five minutes to complain about some weird code, or that you should accept being interrupted every so often. But if, for example, Maria believes there are many issues with the code that Peter checks in every day, it should be okay for both of them when she arranges a discussion with Peter on how to solve the problem. If Maria can provide her feedback in a way that is both personal and helpful, this will help build trust, which leads to a feeling of safety with respect to working with each other. Typically, such discussions in between (as well as during retrospectives) will address much of the fear, uncertainty and doubt troubling people about what is expected of them.

I believe that discussions about how a team should organize itself and surrounding tools for this (e.g. the RACI model) are mainly waste. Remember, during the time you’re discussing your organization, you don’t add anything of value for your clients. I would rather spend the time on making progress on the tasks that are waiting for attention and figure out how we work together while actually accomplishing something.

Finally, if you have some feedback on my slide deck, I would be eager to hear it.

Posted by Holger Schauer

Feb 11

Tom Boulet has recently triggered a discussion in the Agile+ community on Google+ about user stories, cf. the following quote from his inquiry:

A couple weeks ago, I heard someone say “The stories weren’t detailed enough for the development team..” I thought, Wha? Stories are meant to be high level and if you use acceptance criteria this comes after the team starts looking at them, doesn’t it? It sounded like the person felt that Stories had to be “fleshed out” somehow before development could get started. […] Also, I’ve recently seen talk of how stories should be small enough to code in a day or two (I’d call that a task) and another reference to “codable stories.” None of this sounds right to me. It sounds like a large number of Agilists are seeing stories as part of the design pre-development. Has the definition of Story shifted on me?

The resulting discussion is highly insightful. I particularly like one reply by Alan Dayley, which I quote in some parts:

It is hard to describe in brief sentences all the forces that pull organizations to keep them thinking in lots of details. A few might be:
- A culture and history of Big Design Up Front that makes it hard for feature definition to happen just in time.
- A continued need for long range planning requiring estimates for work that will possibly take months before it actually starts.
- Skilled technical people who are accustomed to talking about technical details instead of the needs of the user.
And more…

The most powerful and subtle force to cause “user stories” to grow is a continued lack of focus for the people defining features and those creating the features.

For example, if I am Product Owner of 5-12 products working with 3 teams who are all working on multiple products, I am not able to think of a feature, collaborate on its definition as it is built, see it created and then think about the next feature. I have to document my thinking of every feature because one hour from now I have to think about other features that also might not be built right away.

The key to being truly Agile is to finish stuff. The more inventory of ideas, features, undeployed code, etc. that we have, the less Agile we can be.

This highly resonates with my thinking. But it also got me thinking and reminded me of another discussion I participated in, in one of Xing’s agile groups: for many companies, being able to keep stories high-level, have the Product Owner engage with the development team just in time and clarify all those nitty-gritty details quickly enough that the team can implement them sounds like the wet dream of agile heaven. I, too, have mostly seen teams in which substantial time was invested by the Product Owner (at least), often accompanied by the Scrum Master and some development team members, to “groom the backlog” and to prepare the stories for the next sprint. This preparation, which more often than not involves adding lots of details according to some definition of ready, typically also includes story splitting. I’ve seen development teams outright refuse to work on stories “too big” according to some arbitrarily set size limit. I guess the reasons behind this basically boil down to fighting FUD (fear, uncertainty and doubt), cf. this article listing potential benefits of small stories:

There are lots of benefits from having smaller stories.

From a user and usage perspective, reducing the story size forces me to understand users and usage better. It forces me to separate what really is valuable or useful to the users from what’s merely interesting. From a release management perspective having lots of smaller stories allows us more flexibility with planning, prioritizing, and cutting scope. From a development and detailed design perspective, a smaller story often implies we know more about what to build - there’s less unknown, less risk – less risk of the development work mushrooming into something much larger.

There are two obvious problems with this:

  • The development and release management perspectives and their respective goals are not a good match for the business value that is driving a story. As a result, when you decompose stories into slices too thin, the business value proposition, i.e. the “in order to” part of Mike Cohn’s classical user story template, often becomes very awkward and only understandable in the direct context of some user interaction. Quick, tell me: what is the real business value of “As an order maker, I need to enter my credentials to access my orders” (cf. Gojko Adzic’s rant that writing “As a user” doesn’t make it a user story)?

  • The other problem is that often “understand users and usage better” is actually design work, as also asked by Tom Boulet in the discussion: “Can requirements really be separated from design? As requirements become more detailed don’t they really become design? Does this constrain the design part of development? […] And ultimately, can good design really be done without coding trial and error? Does a customer and PO really know what they want up front?” Here, the article by Mary Poppendieck on the Product Owner problem is helpful: according to her, the most important role of the development team is product design:

    The entire team needs to be part of the design decision process. Team members should have the level of knowledge of the problems and opportunities being addressed necessary for them to contribute their unique perspective to the product design. Only when decisions cannot be made by the development team would they be resolved by a product leader. The main team-facing responsibility of the product leader is to ensure the people doing the detailed design have a clear understanding of the overall product direction.

It would seem a development team with a “definition of ready” that requires lots of details and splitting stories into thin slices is lacking this “clear understanding” and arguably refusing to take responsibility for designing the product. Again, my gut feeling is that this is a sign of the maturity of an organization with regard to agility: e.g., building this clear understanding typically requires a longevity of development teams which often clashes with the “project-based” approach taken by many organizations. And if development teams don’t know the market and the customers they are building the product for, where should this understanding come from?

Heiko Stapf also referred to the agile principle of “individuals and interactions over processes and tools” in the discussion, noting that “Having PO/Tester/Techguys sitting together finding a common understanding and language” is more akin to favoring individuals and interactions, whereas “having the PO writing down requirements and passing them (in written form)” is more about process and tools. I find this to be indeed the case, where sometimes the underlying cause is that it’s difficult to arrange for enough room (time) to have the necessary interaction. But it might also simply be a lack of collaboration, i.e. when developers just feel it’s not their job to work on (the formulation and details of) user stories.

I’ve also seen teams run into problems with getting the right information in time. IMHO these are due either to a lack of knowledge on the Product Owner’s side or to a lack of empowerment. In the former case, when some detail needs to be clarified, the Product Owner might not have the domain expertise required to answer the question right away, so she needs to go back to some domain expert (who might not be immediately available, delays occur, etc.). The latter case is where the Product Owner has an opinion on some matter (e.g., whether to use this or that design) but feels she needs to go back to some major stakeholder (potentially multiple ones). Also, when the team (PO plus dev team) hasn’t yet found the path to close collaboration, the resolution of some issue might require too much time to allow for implementation. In such scenarios, the idea of using a definition of ready with small stories is actually aimed at ensuring that some work can be finished within some time box at all (this is linked to ensuring a story is “feasible”, cf. Roman Pichler’s take on the “definition of ready”).

Of course, there are also legitimate reasons to split stories which are linked to other organizational issues: e.g., when the story is so big it can’t possibly be implemented by a single team in a given small time box and hence needs to be split over multiple teams or multiple time boxes. Or when the requirement is still more an epic than a “ready” story and, when split, the business value can still be obvious for the smaller stories. And then there is also the process where you start out with a vague “feature idea” and need to understand it more before it is ready for implementation, as I’ve described in my post on agile conception.

In summary, I would say that a large backlog with lots of tiny stories is a clear indicator of a lack of maturity of the team and / or organization. In an ideal world, all our teams would be mature enough that nobody would ever want too many detailed written specifications upfront, but alas, in reality we have to work hard to get there. Unfortunately, there is no easier answer than to try and persist: gain the knowledge and the confidence, build the tools and the trust you need.

ObTitle: Modest Mouse, from “The Moon & Antarctica”

Posted by Holger Schauer

Oct 28

Two things are clear for every project and every product: there is always a start and an end. I’ll ignore the end for now and take a closer look at the start. One has to wonder: where do all these projects and new products come from? Obviously, somebody finds out at some point that there is a need for something, and this something is the nucleus of the prospective development. If this idea requires significant effort, it’s typically necessary to convince the people responsible for the money that it is worthwhile to pursue the idea. Let’s take a closer look at what this something actually is and how it changes and needs to change before the developers can finally go make it real.

To use a more descriptive tag, I will use the term feature idea for now, although we could just as well talk about ideas for a new product or product line. Feature ideas in a very early stage are rarely more than a single, simple statement or even just a few descriptive keywords like “I would like a button here where I can save my settings” or “remember me functionality”. Of course, whoever came up with the idea typically has further thoughts, more or less fuzzy, but these are neither communicated nor discussed, let alone finalized.

[Image: a feature idea]
Quite often these further thoughts are related to some existing systems or solutions, even if it’s just to describe what the respective solutions do not provide (note that these solutions might not be your own). Typically, feature ideas are about some specific expected behavior of some system and basically boil down to what needs to be built. This is all good: feature ideas don’t need further elaboration or lengthy descriptions at an early stage.

How do you proceed from here to the nitty-gritty details that finally need to be done? In traditional settings, to get from the idea for a larger feature to development you need to jump over the budget hurdle, which usually requires detailed analysis results, including business cases, return on investment calculations, and implementation plans. The latter consist of a solution outline, milestones, resource plans and cost requirements. It is important to note that coming up with all this already requires a substantial investment of effort. The bigger the need for precise information, the bigger the required investment. And then at some point you hopefully get the okay to move ahead. But in case the idea is turned down, all this investment in detailed plans was just wasted. In agile setups, we would like to avoid that waste. Still, of course, the need to decide where the business wants to invest its money does not go away. But in order to do so, does senior management really need all those details? They don’t. There is a need for accuracy, not for precision (cf. this description of the difference). Asking for all the details is a sure sign of a lack of trust, yet I have never seen any senior management trying to verify that all those precise details were actually accurate, which probably explains why the demand for documents and spreadsheets is all there is. This is then typically combined with taking all plans as promises that need to be kept and not as means to enable successful investments (command-and-control vs. plan-to-replan).

What is surely needed boils down to just one thing: understanding the value of an investment. It’s often much easier to determine what you want to or can invest than to precisely estimate upfront what you need to invest (cf. this discussion of the no-estimates idea, which shows the wonderfully relevant Dilbert strip on the issue). At the top level, investment is essentially tied to strategy: e.g. in the Scaled Agile Framework (SAFe, cf. Leffingwell et al.), investment themes determine the relation between budget (resources) and the program portfolio of business epics. This essentially requires an understanding of how each possible business epic or feature idea contributes to the overall company strategy (basically applying the Lean principle of “optimize the whole” to feature ideas).

We have now essentially danced around the elephant in the room: feature ideas often describe expected behavior, but we need to know about the value of an idea. Yet if all we have is a simple statement of some idea (and not the full-fledged analysis we would like to avoid), how can we know its value? The essential insight is that it might not be necessary to understand the precise value yet. The earlier in the process we are, the less necessary it is to perform lengthy and detailed analysis: determining the value of some idea is only a means to further a decision on where to invest next. However, it should at least be possible to judge the value of one feature idea in relation to, or better in comparison with, other ideas, as Johanna Rothman discusses in her article on why cost is the wrong question for evaluating projects: at least in an early stage, it might be enough to use a very rough kind of value sizing, e.g. putting an “M” on one feature idea in comparison to an “L” for another. It is important to recognize that the value of ideas will change over time, in particular when looked at in relation to other things your business needs to think about, as new ideas come up, some ideas get implemented and others are discarded. Accounting for these changes over time is why planning to replan is important in agile methodologies. An agile concept that helps make this clear is the cost of delay: the idea is that you try to understand or predict how the value contribution of some feature will change over time, in particular what impact not pursuing the idea will have. This is most useful for prioritizing what to invest in and deciding where to drill down further with an analysis. And as cost of delay is all about loss of value over time, you will want to reconsider your priorities as time moves on (see the sketch below). (An alternative coming from the lean camp is to use an impact-effort matrix, but such a matrix needs an estimate of the effort required and also treats value as an absolute over time, which it usually isn’t. Cf. this article about cost of delay vs. WSJF in SAFe for a discussion of why cost of delay and value considerations are not the same.)
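For illustration, here is a minimal sketch of such a prioritization in the style of SAFe’s WSJF (weighted shortest job first), where the weight is cost of delay divided by job size; all names and numbers below are made up:

```typescript
// WSJF-style prioritization: rank ideas by cost of delay per unit of size.
interface FeatureIdea {
  name: string;
  costOfDelay: number; // relative units, e.g. from rough value sizing
  jobSize: number;     // relative units, e.g. from rough effort sizing
}

function prioritize(ideas: FeatureIdea[]): FeatureIdea[] {
  return [...ideas].sort(
    (a, b) => b.costOfDelay / b.jobSize - a.costOfDelay / a.jobSize
  );
}

const ranked = prioritize([
  { name: "remember me", costOfDelay: 8, jobSize: 2 },       // weight 4.0
  { name: "user preferences", costOfDelay: 13, jobSize: 8 }, // weight ~1.6
]);
console.log(ranked.map((idea) => idea.name));
// ["remember me", "user preferences"]
```

As the cost of delay estimates change over time, re-running such a prioritization is exactly the plan-to-replan loop described above.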

Inevitably, when you try to think about the value of some feature idea, you will think about what simple statements like “remember me functionality” should actually mean. The way we name or describe a potential feature is very often a confusion (at best a conflation) of problem and solution space, and that can make it quite difficult to judge its value. As such, naive feature descriptions can quickly become a big impediment. For one thing, they are — due to being deliberately simple — usually ambiguous and / or hard to understand. Does “remember me functionality” mean that I stay logged in on the website, or that the website remembers the visitor and shows specific ads related to his interests, or both? This is often the reason why people attribute completely different value to an idea. Worse, such descriptions typically don’t give any clue about the need behind the wanted behavior, nor about the value this behavior brings. Just as bad, stating the feature idea in terms of expected behaviour (“I want a button that …”) limits creativity by already pointing in the direction of a possible solution: maybe there are other options which are much better suited, both from a usability point of view and with regard to the required functionality. So the conflation of problem and solution is problematic both for understanding the problem and for designing the solution.

So what is the process to move from a feature idea to the solution design? I think it’s useful to see feature idea statements as promises for analysis. The analysis of a feature idea should essentially focus on understanding the user and / or business needs the idea is related to. The stereotypical user story template “As a <type of persona>, I want to <perform X> in order <to achieve Y>” is a useful tool in this respect, as it is explicitly constructed to include and name the need (cf. Gojko Adzic’s rant “Writing ‘As a user’ does not make it a user story” on how this can go wrong when taken too lightly). Now you might not have such a story yet, and maybe it’s not clear what this simple feature idea is about. That’s the time to engage with whoever seems able to contribute, first and foremost the primary beneficiaries of the feature idea. Traditional requirements engineering practice would suggest involving all relevant stakeholders (prospective users, management, marketing, legal department, …) and employing whatever RE tool seems reasonable (e.g. brainstorming, workshops, open interviews, etc.). In agile setups, determining who actually is a stakeholder that needs to be heard is the responsibility of the product owner. I find that it is usually also a good idea to involve the team, because they will typically have a good understanding of the solution space and should also know about the envisioned user needs. Don’t take it too far, though: if you are still trying to analyze in order to understand the value of some idea, i.e. you are still at an early stage (the inception phase, as the disciplined agile delivery folks would have it) of your planned feature, you should probably try to keep it light-weight.

You will likely get a lot of input on your feature idea, much of it in the form of statements comparably simple to what you originally started out with. Understanding the needs requires going beyond the level of simple statements or simple suggestions of simple solutions (Laura Klein has a nice description of how difficult that can be: sometimes users will tell you that they want a toaster in their car, when what they really mean is that they don’t have time to make breakfast in the morning). And although the famous 5 Whys might come across as somewhat strange or invasive in the course of an interview, it might take that many attempts to actually understand the problem that needs to be solved. If you’re lucky, a potential user will offer some simple insight into the problem with statements such as “I would like a button here so I can save my settings, so I don’t have to adjust them again and again”. Further analysis might reveal that the real problem is a usability problem and the user forgets to adjust the settings too often, or that the user is a regular user who would like to spend less time on repetitive tasks. The main point here is that this process is about understanding the needs, which typically requires challenging any superficial and obvious ideas: this is both about maximizing the amount of work not done (simplicity) and about building solutions that satisfy the user.

[Figure: change in understanding over time]

This is now where the team basically takes over: the team is responsible for working out the solution, not only constructing / implementing it, but essentially designing it. This is not to say that the product owner or other stakeholders cannot or should not contribute ideas on how to solve it, quite the contrary. It is quite essential to have stakeholders around to enable the transfer of knowledge about needs from the users or stakeholders to the team. So while the separate paragraphs of this text might suggest a handover from product owner to development team, from needs to solutions, ideally this should be a smooth transition with lots of participation of the entire team and face-to-face communication. Tools like Roman Pichler’s product canvas are useful, but there is a reason why the agile manifesto values interactions between people over tool usage. Again, for working out the solution many ideas and tools can and should be exploited, from brainstorming through UI scribbles to models and prototypes. Typically, when designing the solution, many alternatives come to mind, and different people from different backgrounds will have different opinions and favorites. This is actually good, because you can embrace this diversity to think about the best possible solution. However, the best solution might not be the one you want to implement; often the simplest thing that could possibly work is a reasonable way to move forward: it minimizes the effort required, thereby enabling faster time to market and earlier feedback on whether this solution actually works (this is the tension between trying to build the best thing vs. learning fast from feedback about what actually proves to work). People with lots of experience in designing solutions will typically also come up with all kinds of challenging questions which might require drilling further down into the needs (e.g. questioning whether the feature is targeted at the right people, or what the legal department has to say about some feature). This will trigger more learning about the feature under consideration. But this should not be taken as an excuse to run endless analysis or design meetings: it is often better to end a discussion with a decision, move ahead, and find out later whether this decision was actually flawed and needs correction. It is also a good idea to consider the risks associated with an idea: the more risk, the closer you want to consider your options. In general, when in doubt about which option to choose, think about what data would help you decide: performing light-weight experiments might be useful for minimizing these risks (e.g. technical spikes or usability tests with paper-and-pen sketches).

A brief terminology recap: an epic is just some requirement (feature idea) that is so "large" and / or uncertain that it needs to be broken down, i.e. nobody could take an epic and just implement it; the developers would have to come back and ask lots of questions. You could say that after breaking down an epic, all you end up with are features, but in agile circles these are usually split up into even smaller stories (cf. the image below). Of course, this is a completely arbitrary and often also highly context-dependent distinction, and it's fairly common to use the user story template for describing epics, too. For example, "support social media integration on the order page" would be an epic, while "As a shop visitor, I want to share my order on Facebook in order to tell my friends about it" could be one story derived from it. I used the term feature idea instead of epic to bring in some notion of time: epics start out as (feature or other) ideas. If you think about the timeline again, it's clear that the closer you are to implementing some feature, the more details you need to know, i.e. the higher your level of understanding of the feature needs to be.

And this is where the idea comes into play that you need to INVEST in your user stories, so that you can derive SMART tasks. This is obviously needed for the very next things you want to implement and, as any risk-aware tech guy or project lead will know, probably also a good idea for the next few things after that. Business cases are mainly needed when you are going for big investments and it's critical to understand the return on investment. But this brings us back to the discussion of value vs. cost and of precision vs. accuracy. Use cases, finally, are a useful tool when it comes to understanding some set of requirements in the context of a larger system. They can also be useful for driving discussions with stakeholders and are often a good starting point if you like to model your problem / design before implementation. I see use cases as a tool on the feature level, covering a broader range of more detailed stories. As such, it does not matter much when to describe a use case: if it helps you drive your understanding of some problem, use it before drilling down to more fine-grained descriptions. If you already have a lot of details and fear losing the overview, it can also make sense to reconstruct a use case from the more detailed stories.

Uncertainty vs. time

Finally, let's step back briefly and reconsider who is part of this solution team. It's quite common that for feature ideas of different sizes, different people will make up the solution team. When your feature idea is concerned with some detail of a larger system, the team might just be the developer team (in the Scrum sense). For bigger features, projects or programs, it's quite common that lead developers, design and usability specialists, solution and / or enterprise architects and maybe operational staff will play an important part in working out the solution. Whether these people work in specialist teams (as e.g. SAFe assumes) or are part of the development teams is not of much importance, but having members of those teams work on the solution design is essential: if this is not the case, then in the best case the developers implementing it will have to rediscover and understand all the good reasons why you came up with the solution in the first place in order to get it right. At worst, you might end up with frustrated developers who don't take responsibility and don't care about the built solution, because they feel they had no say in its design.

Apr 15

Last week, I visited this year's Goto Zürich 2013, a two-day conference with tutorials wrapped around it. Topic areas range from lean start-up through technology to a leaders track. I spent most of my first day on the so-called leaders track, as most talks there revolved around adopting agile and lean methods, but I also went to some more technical talks in other tracks. I'll summarize most of the talks and my impressions below. Btw., the slides for most of the talks can be found on the conference website, more precisely on the schedule overview for the two days.

The conference started out with a keynote by Scott Ambler on his work on Disciplined Agile Delivery, which provides a framework for where decisions are needed when implementing agile methods in a larger setup, drawing from many different methods like Scrum, XP, Kanban, the Scaled Agile Framework and others. The first part of his talk was mainly concerned with the question of whether agile holds up to its promise of delivering 'better', which he tries to answer with regular surveys. These seem to confirm that team size, location, complexity and the methods used do have an impact. Not surprisingly, small teams are more successful than larger ones, co-located teams more than distributed ones, and simple projects are much more likely to succeed than complex ones. And of course, there are still projects using waterfall-like approaches that succeed, while the difference between an iterative and an agile approach is rather minimal. Still, the number of project failures remains high; even for simple projects there are more failures than one might expect.

The first talk here was about Spinning, by Ralph Westphal, who is well-known e.g. for his association with the Clean Code Developer initiative. His premise was that today's business reality is a constant change of priorities, which conflicts with assuming bigger time boxes during which developers can focus on some goal. His answer is that we should 'seize the day' (as Dan North might have put it) and deliver 'value' daily. It's important that this value is something worthwhile to the customer, and the customer should be able to give feedback daily as well. I'm wondering whether it is really always possible to accomplish this; e.g. fixing a bug might require analysis well beyond a day. But even if you can't always live up to the idea, it might still be a worthwhile guideline for organizing work. Another question I don't have an obvious answer for is who should be in charge of deciding what should be worked on on any given day. Ralph called for a thorough triage to be carried out to avoid wasting time, but it's unclear whether this job belongs e.g. in the hands of the product owner.

Dominik Maximi spoke next about 'Hostile waters', i.e. how company culture might influence your chances of and approaches to introducing agile in an organization. He made the important point that every company culture exists because it is (or was in the past) successful to work like that; this needs to be respected when you want to change something fundamental. He then gave a nice overview of the Schneider model for classifying company culture. Non-representative survey results indicate that 'agile' has characteristics similar to a 'collaboration' or 'cultivation' culture, but doesn't fit in so nicely with a 'competence' or 'control' culture (no surprises here). Changing the mindset of a company might take up to 7-10 years. Dominik finally discussed John P. Kotter's work on the change steps needed to implement agile.

Oct 29

The need for estimates is a fact of (project management) life, whether you're using a traditional approach or agile ones. The latter try to do away with exact numbers and instead use anything from t-shirt sizes and gummi bears to story points, treating estimates as a tool mainly intended for the team. However, as can be seen in this article posted on the ScrumAlliance website on story points versus task hours, the matter of concrete time estimates comes up every now and then: there are customers and managers that want them, and you have to figure out how your estimates / velocity relate to the team's capacity for any given sprint. TL;DR:

For me, story point is high-level estimation of complexity made before sprint planning. […] The task-hour estimation, on the other hand, is a low-level estimation […] should be done during sprint planning for highest possible accuracy.

But, just like every other project manager, I have learned that there is no such thing as "highest possible accuracy", regardless of the number of 'points' you're estimating (three-point estimates, anybody? How does that fit in with agile methods?). What does somewhat "work" (for varying degrees of "somewhat") is measuring how much work is 'roughly' possible in an iteration (aka velocity in Scrum) and estimating the 'entire' effort according to the law of large numbers, but this is exactly not task-hour estimation. I've worked with task-hour estimates during sprint planning as well, but I wouldn't rely on the results. Exchanging estimates between developers (aka planning poker) has the main effect of quickly leading to a common understanding of the tasks that need to be done, which in my experience has much more value than any estimate. The insight that numeric estimates are not all that useful (to say the least) is also discussed in this highly opinionated article on why you should stop using story points: this is not about going back to estimating in hours or days, but about teams reaching a level of competence, expertise and confidence in their respective context where estimates are just waste.

However, this transcendental level might be hard to reach for many teams, due to lack of experience or other, more context-dependent reasons. Joachim Seibert discusses in his nice article "Agile estimating in teams" for the "ObjectSpektrum" magazine (5/12; behind a paywall and only in German, unfortunately) that if you need to come up with the big number for budgeting, you should strive to look at the entire picture. That is, estimating the overall effort is more important than trying to completely estimate everything down to the tiniest detail, which isn't possible anyway. Seibert lists two key ingredients for that: estimating a reference story which you stick to throughout the project, and comparing the relative complexity of requirements, essentially sorting them by 'size'. This is easy to forget with planning poker, but it is at the very heart of the two other techniques that Seibert discusses: the estimation game (by Steve Bockman, apparently) and magic estimation (by Boris Gloger, AFAICT). Both focus on building a common understanding of the relative complexity of the features to be built and are much simpler to apply to lots of stories. If you add velocity to the picture, i.e. you actively measure how much you can build, and you can reach a stable project setup (compare this nice sweet spot for agile methods by Philippe Kruchten), you might be able to end up with something useful from all the effort you put into estimation.
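
To make the velocity arithmetic concrete, here is a minimal sketch of how relative estimates and a measured velocity combine into a big-picture forecast. All story names, point values and the velocity are made up for illustration; a real team would use its own backlog and a velocity averaged over several sprints.

```python
# Toy forecast from relative estimates plus measured velocity.
# All stories, point values and the velocity are hypothetical.

backlog = {
    "user registration": 5,
    "password reset": 2,        # the agreed-upon reference story
    "order history": 8,
    "social media integration": 13,
}

measured_velocity = 9  # story points actually finished per sprint, on average

total_points = sum(backlog.values())                    # 28
# Round up: a fraction of a sprint still occupies a whole sprint.
sprints_needed = -(-total_points // measured_velocity)  # ceil(28 / 9) = 4

print(f"{total_points} points at ~{measured_velocity} points/sprint "
      f"=> roughly {sprints_needed} sprints")
```

The point is only that the big number comes from comparing stories against a reference and from measured throughput; no task-hour estimate enters the calculation anywhere.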

Apr 20

One aspect I think is important for a Scrum Master or project manager is to make sure that your team doesn't go on a trip to Vienna (if that term doesn't ring a bell, search for "Tom DeMarco peopleware"). Contrary to popular management belief, I think it's generally not okay if "occasionally" somebody on the team "puts in some extra work". There is a reason why many agile methodologies insist on keeping a sustainable pace. Besides all of the very good reasons for making sure your team members stay healthy (see this burnout story as a negative example), there is also a management point to it: your understanding of what the team is capable of (in terms of results / effort, aka velocity) decreases substantially if you have to take "heroic behaviour" into account. It's particularly bad when you don't see the connection between reached goals and involved effort, i.e. when team members just move their card from "working" to "done" late in the evening without making clear that it took five hours more than initially estimated.

Heroic behaviour just can't be counted on, because nobody can keep it up over a substantial amount of time (that's the very definition of not being sustainable). It's highly understandable that project members, having committed to some goal, can be tempted to go out of their way to reach it. What team members might miss is that "heroic behaviour" can only influence the "time" aspect of the magic triangle of "time, budget and quality". Extra effort is just that: effort. Hence it comes with a cost, namely whatever it actually takes to reach the goal. I've also seen the term "commitment" misunderstood. It's not an unconditional promise of "I can do that task with the effort I think it takes"; there is also the implicit condition of "I understand correctly what the task involves and there is no other external negative influence" (e.g. the urgent bug that needs to be looked at, or the lack of sleep during three days of the week due to the kids being sick at home).

Commitment to a particular goal might at times conflict with taking responsibility for the project as a whole. As a general rule of thumb, it's nearly always much more important to think about the entire project / the big picture than about a small aspect of it. There is the exceptional situation that needs an exceptional reaction and maybe exceptional effort. But it's important to treat it like an exceptional situation. For these exceptional situations it's vital that they get treated like a mini project: they should have a clear purpose and fixed start and end dates. Plus, they should come with compensation. Scrum Masters and project managers alike should communicate clearly that exceptions are exceptions, not the rule. And team members should clearly communicate that it takes what it takes. When it comes to professional work, follow the 501 manifesto (in case, like me, you don't immediately understand the "501" part: it's not about jeans, but about leaving at 5:01 pm).

ObTitle: Morrissey, from “Ringleader of the tormentors”

Oct 24

In a recent meeting, a colleague of mine mentioned that we wanted to use agile development this time, with that new project, and that I would provide some insight as an 'agile development expert' (not a term I would use myself). This in turn brought me some curious looks and a pretty general question during the discussion: what is the benefit of an agile development model from a business perspective anyway?

If I remember correctly, my answer went in the direction that with an agile approach you can change your mind about what you really need / want to implement throughout the project, if you learn something 'new' about what is really required. The second important aspect is that you get quicker feedback, which allows for more influence in case things don't turn out right. After the meeting I started pondering the question a little, as I had the impression that my answer wasn't that convincing, so what you find below is a more detailed written elaboration of it.

Basically, what "you can change your mind throughout the project" boils down to is that agile development gives you more options to make use of opportunities that come up along the way. The most important idea here is that you don't make decisions too early, because that limits your options. For example, let's assume that one feature we specify is a 'Facebook like' button on the order page. If we're following the waterfall-style 'implementation follows after specification has been completed' development model, you would specify that, design the page with the button, and at some point in the future hopefully get your Facebook button. But what if, during the hypothetical nine months in which development happens, Germany's privacy laws enforce severe new restrictions on the use of such buttons and, btw., Google+ is now the new hip site you have to support? You have to go through a change process (typically quite costly for larger features), redo the specification, etc. This makes the cost of writing the specification, doing the design etc. a double problem: obviously you didn't get any ROI out of it, but additionally, the time spent writing specs and doing design was time in which you didn't really work on any results for the customer (a lost opportunity to gather new clients with at least some new features).

So, you would have been better off noting the idea that at some point in the project you want 'social media integration for marketing reasons' and delaying the decision about what to do exactly until you're really at the point (from a priority point of view) where this is a feature you want implemented, i.e. you write the specification when you're planning that feature to be implemented in the next development phase (which is typically two to four weeks with an agile development team). The idea is to take small, well-defined steps that can be taken fast, starting out with the most important / valuable ones first, rather than to spend a lot of time on some 'complete specification' which withers fast throughout the project's implementation time.

The problematic point here, obviously, is that it's pretty difficult to say when exactly the right point in time to make a decision is. In general, there is no single 'right point', often called the last responsible moment, after which some opportunity is lost (I think the phrase was coined by Mary and Tom Poppendieck, see the excerpt from 'Lean software development'), but multiple points in time, depending on many aspects. Influencing aspects are things like business opportunities (e.g. a new feature nobody else has at that time) or reducing (project) risks (cf. Alistair Cockburn reconsidering the last responsible moment). The best answer I have seen so far on when to make a decision is that it should be taken when you have 'enough knowledge', i.e. when you have carefully considered your options and evaluated them (cf. real options). Even thinking about options alone is typically a highly valuable undertaking in itself, because all too often people just assume that the 'obvious way' is the best one.
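
To illustrate why deferring a decision can pay off, here is a back-of-the-envelope sketch. The simple cost model and all numbers are my own hypothetical illustration, not something taken from the real-options literature.

```python
# Toy expected-cost comparison: specify a feature now vs. at the last
# responsible moment. All numbers are hypothetical.

spec_cost = 10       # effort to write the detailed specification
rework_cost = 8      # extra effort if the requirement changes afterwards
p_change = 0.4       # assumed chance the requirement changes before implementation
defer_overhead = 1   # cost of keeping only the coarse-grained idea around

# Deciding now: you pay for rework whenever the requirement changes meanwhile.
expected_now = spec_cost + p_change * rework_cost    # 13.2

# Deferring: you specify once, just before the feature is implemented.
expected_later = spec_cost + defer_overhead          # 11.0

print(f"specify now: {expected_now}, defer: {expected_later}")
# Deferring pays off exactly when p_change * rework_cost > defer_overhead.
```

Of course the real difficulty is that the probability of change and the rework cost are unknown in practice, which is exactly why gathering data through cheap experiments is so valuable.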

So, the general answer is probably along these lines: for the business side, agile development has the benefit that it provides more flexibility and more influence throughout the project while at the same time (possibly) generating real value / revenue earlier. There is, of course, more to it that also has appeal from a business perspective, but that's probably the main point. Feel free to add different opinions.

Oct 22

If you just stumbled over this blog entry searching for what the heck all that buzzing about agile methods amounts to, I may have unfortunate news for you: an agile approach to development might work for a lot of people all over the world, but maybe not for you. For starters, take a look at the agile manifesto and its principles. These have quite a lot of implications, directly translating into presuppositions about you, if you want to participate in an agile development project.

Now, perhaps you're the well-communicating, team-oriented developer type who happens to work in an environment you respect and where you generally feel well motivated. But real-life experience with your typical geek or ex-geek-turned-professional-developer suggests that a more likely scenario is having a hard time with direct, face-to-face communication that happens very often, with those pesky guys and girls you don't really like ("Eeek, those business and marketing people", anybody?). Now, you might say: baah, I know some communication is a necessary evil of professional software development, but maybe I can get away with my old habit of trading documents (specs, bug reports, etc.) for direct communication, since that is the trusted old way. And what do you mean by face-to-face communication being more effective? Everybody hates those horrible meetings eating up all your time, right?

The point is that if you want to go agile, you had better adapt your view soon, or else you won't see any of the promised benefits of all those agile processes. Agile development is not only about using test-driven design, timeboxing, iterations and releasing often. More than anything else, agility is about how people effectively interact with each other, as quickly and directly as possible, in order to come to solutions. To me, this boils down to three issues: adaptability to (changing) situations, an open mind toward communication, and taking responsibility for the project at hand.

Being lazy, sticking bone-headedly to "trusted paths", not seeking confrontation when it's needed, avoiding communication because you're too busy "doing": that is the anti-pattern to agility. You could keep a product and sprint backlog, present new features every other week to your customer and even use continuous integration, and still be muddling along in pseudo-agile, doomed-to-fail waterfall. What I would suggest is that you revisit the agile manifesto and think about what those principles might imply for you and your current behaviour. Let's try an example: if you find yourself pointing fingers at your customer (or product owner, to use the terminology from Scrum) because he won't behave as demanded in the agile manifesto, you should reconsider whether that fits in with what the agile manifesto demands of you: you wouldn't do the project any good. You would build up frontiers where there should be a single team that includes the target of your pointed finger. This is not to say that you shouldn't point out the problem; no communication about problems is another all-too-familiar reason for frustration and lack of commitment. What is necessary is direct communication and finding a way that works for all people involved.

Let me sum up this posting: if you’re interested in agile approaches to software development, that’s fine. If you don’t feel comfortable about what this might imply for you, that’s fine too. But it might also tell you that agile development approaches might not work for you.

Update: Ironically, I stumbled via InfoQ over the one essential agile ingredient, which says it all much more nicely than I ever could have, except that Mark Schumann is confident that agile will work for most, whereas I hold the more pessimistic view that it's not going to work for some, but for exactly the same reasons.

ObTitle: Hermann Hesse, “Steppenwolf”
