Conway's law and the role of managers

Allen Holub had an interesting toot (also available as a post and a tweet) about managers deciding team structure and software architecture:

Comic of a developer thinking about managers deciding the architecture: "Conway's Law" by Comic Agile (Luxshan Ratnaravi & Mikkel Noe-Nygaard), CC BY-ND 4.0

If management is setting up team structure, they are designing your architecture. They know nothing about architecture, though. Self-organizing teams are essential. (See Conway’s Law). — Allen Holub

Now, who am I to disagree with Allen Holub? I agree completely with his conclusion, just not with this argument. It might seem a little unfair of me to react to his tiny toot with a lengthy blog post, and I'm sure Allen could talk at length about the topic with tons of good insights, so please don't take this as a criticism of his post but as a way to express how I look at Conway's law and the role of managers.

Out of the ivory tower

First of all, I guess it depends on who you mean by "management". If Allen is talking about top-level management, then I agree that they usually don't know about your architecture. But is this also true for the middle managers close to the teams doing the work? That level of management should, in my humble opinion, know about architecture, both to avoid making foolish mistakes and to understand what the developers are talking about.

I think managers today should know about both architecture and Conway's law. Ideally, they should also know about the Inverse Conway Maneuver (as discussed e.g. in Martin Fowler's article on Conway's Law), which is simply the idea that you can structure your teams according to the desired design of your system. Assuming the managers understand the architecture of the system (both how it is currently structured and a future target state), they can make use of this strategy when organizing their teams.

Communication is key

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure. — Melvin E. Conway

Second, I think it is worth pointing out that Conway's Law is not about how system architecture and team structure interact, but about the influence of an organization's communication structure. Now, organizations are often restructured with the implicit assumption that the org structure matches the communication structure. But this might be a bit naive, because organizations have not only a formal structure but also an informal one (see this overview of formal vs. informal organization). In my experience, the latter is quite important for how communication will actually happen. So a change in the formal organization does not necessarily mean that the relevant communication channels which drove your architecture before will suddenly disappear. In other words, people will remember that Eva knows a lot about the data access layer and will include her in relevant discussions, regardless of which team she's in.

The headquarters of the Romanian Architects Association, built on the ruins of the Direcţia V Securitate, by Dimitris Kamaras, CC BY 2.0

Also, as discussed in Fowler's article on Conway's Law, it is questionable whether Conway's Law applies to organizations of all sizes. If you have two very small teams, in which communication among all people is easy and natural, there are plenty of options other than ending up with two components. Sure, if the communication between your people is complicated, you can expect to see this as friction in the architecture. So, if Fred doesn't like working with Eva, he might be inclined to put in a completely separate data access layer. However, this might also happen if Fred and Eva are within the same group (I wouldn't call them and their peers a team if that happens).

The Wikipedia page on Conway's Law discusses supporting evidence that the modularity of systems is directly related to how tightly coupled your organization is. So, managers shouldn't just work on reporting lines but also on enabling good communication between the teams — while you might not be able to influence the informal structure, you can nevertheless try to set up good channels within the formal org structure. I certainly hope that you're working in an organization which actively promotes cross-team collaboration and understands the benefit of networking, and not one in which you have to move up the org chart to the nearest shared manager to convey information and to drive decisions.

End to end work

Third, managers should be aware of the idea that working in feature teams is usually better than working in component teams (cf. Roman Pichler's more nuanced take on feature vs. component teams): the idea here is that a feature team has all the capabilities to deliver a feature end to end, i.e. it does not depend on other teams. If your system already has a good design to support your feature development, then your cross-capability feature teams should be able to use and extend it. Naturally, if your system is big enough, there might be more to know about the architecture than you can fit into the heads of your team — that's when you'll again need good communication channels across your organization. This does not invalidate Conway's law, but it can ensure that a change in the architecture isn't driven by one team's decision alone.

Constant change is inevitable

Fourth, team and communication structure is a much richer and deeper topic that people have been thinking and writing about for a very long time, and alluding only to Conway's law ignores many other insights. Team topologies, for example, is an approach that looks at an organization's structure from the point of view of communication mechanisms and of minimizing the cognitive load of the people working within it. If you take a look at team topologies in a nutshell, you'll see that it also tries to tackle "pushing against Conway's law". If you follow the route outlined in how to get started with team topologies, you end up looking at the same fundamental pieces: the architecture and the communication routes. That's a tool that managers can use (and there are more, like Jurgen Appelo's unFIX framework).

And then you have all the good insights from Heidi Helfand that these days we're constantly changing our team structures anyway, cf. Dynamic Reteaming. As she discusses, our teams change for so many reasons: people leaving, people joining, specific organizational needs, etc. It's only natural that a team structure will evolve over time, just like system architecture does (team topologies also expects you to evolve your organization over time). It sounds a little over the top to me to always pull the teams together and trigger a self-selection event in these cases (not that Allen is suggesting this). If we reduced our thinking about team structure and reteaming to the technical or system architecture view only, we would swap one mistake (ignoring the technical implications) for another (ignoring people interactions and other needs), which would arguably be even worse.

Conclusion

So, what does that leave us with? That managers should move people around like chess pieces, like they've always done, and that's that? No, of course not. As I said, I'm very much in agreement that self-organizing teams are essential. The people who are close to the work should have an important say in discussing the organizational structure. They can bring in the knowledge about architecture and technical dependencies; they will also bring in their preferences and personal needs, and all of this is very important. But I believe managers need to be involved, too, to clarify other needs that have to be considered (e.g. business needs, future organization plans like hiring, etc.). Plus, you don't want to burden engineers with having to handle the constant re-teaming all the time, because engineers usually don't like it — otherwise they would have moved into management already.

Oh, and finally there's a nice slide deck from Eberhard Wolff on why he's tired of Conway's law, which you should really click through if you've read this far.


It's okay to be weak

Martin Fowler writes: “I should also mention that I suspect I’m not as energetic as I used to be as I age. I’ve long known that when you’re doing very creative work, such as writing or programming, the useful hours you can do in a day is rather less than the accepted industrial eight. I’ve always been nagged by my conviction that I’m not working as diligently or effectively as I ought to be. Sadly I’m not getting any better at not letting that bug me.”

I love how he openly admits his weakness.

Let me clarify that I don't mind whether his weakness is real or just perceived. I think it sets a good example when a public figure like Fowler, who has written quite a number of highly influential books and given too many great talks to list, openly shares his inner feelings.

In today's work environments, there is this illusion and pressure that you have to constantly perform at the very highest level for at least eight hours a day. This is hurting so many people, up to the point where good folks drop out of their jobs due to burn-out and depression. So, it's important to recognize that we are all human after all and that the performance we are capable of is not constant. What we can deliver at work depends on so many factors, and many of them are actually outside of our control. We just have to accept that.

If, in our daily work life, we can all be a bit more mindful that we are collaborating with other humans and not robots, if we are open to actively listening to what is going on in the lives of the people around us, and if we are able to connect this to our own struggles with delivering every day what we think we should be capable of — maybe then the overall work experience will improve for all of us.


Angela Merkel, HiPPOs and decision making

So Angela Merkel yesterday withdrew the decision to add another non-workday before Easter (cf. deutschland.de video of Angela Merkel’s press conference), apologizing and taking full responsibility for the half-baked idea. Many people paid her respect for this and so do I. Some thoughts on this, though:

First of all, this half-baked idea was the result of a meeting going on for far too long. As one MP said, they first heard of the idea at 23:45 in the evening, after the discussions had been running for many hours. As most knowledge workers know, a typical effect of working too hard for too long is that the quality of your outcomes diminishes. Too often, reflecting back on the result of yesterday's overnight session will reveal that your fabulous idea might actually be totally off.

Second, apparently no experts were around who would have had an idea of what it would entail to make this additional day off a reality. This looks a lot like a decision based on the "HiPPO", the Highest Paid Person's Opinion. Generally speaking, results are better if experts prepare options with effort/benefit arguments. And usually, it's better to postpone a decision to "do it this way" until you know that it actually can be done that way.

Last but not least, I find it a pity that Angela Merkel's behavior, openly admitting a mistake, is apparently exceptional and surprising to many people. Admitting mistakes and taking responsibility should be the norm for any leader, not the exception. As a leader, you want to understand the problems your team is facing and what mistakes people are making. How else could you improve your results? But if you are not leading by example and are not open about your mistakes, it is likely that your people will have the impression that it is only okay to talk about great accomplishments. This in turn means that everybody is afraid to make any mistake and tries to reduce risks at all costs, which will kill any positive attitude and atmosphere. Good luck trying to achieve great results in a context like that.

tagesschau.de article on Angela Merkel’s press conference to withdraw the day off before Easter


Stand and deliver -- how can a team become quicker?

Recently I met with an old colleague of mine and we talked about teams, among other things. He asked me what I would do to push a team to become quicker and to deliver more. My gut reaction was that this is a fine line to walk. The following delves a bit deeper into the issue.

What not to do

One particular pitfall is to ask the team to deliver more story points. I guess the motivation behind the idea is two-fold:

  1. Delivered Story Points are easy to measure.
  2. The team could work harder and then easily deliver more.

One of those assumptions is, generally speaking, false, and it's not the one about measuring. What I usually see is not people slacking off but people struggling.

But it's not only that this core assumption is wrong; it gets worse when you consider the likely effects of asking for more story points. First of all, note that it's very easy for the team to deliver more story points: the team can simply raise their estimates for each story. They can raise the estimates with basically every estimation, thereby gaming the system and destroying any trust you can have in the estimates. This essentially means you're losing the velocity as an indicator of what the team can reliably deliver in the future.

The second effect is even worse: if people feel pressure to move more stories to "done", they will concentrate on delivering the core value — that being the functionality that the client / product owner was asking for. On the flip side, they will stop caring about quality, because that is apparently not the main driver. As a result, people might stop doing the following things, because they actually cost time, time they can't spend on finishing the next story:

  • testing thoroughly
  • covering the changes with automated tests
  • cleaning up the legacy code
  • refactoring your first “working” implementation into a maintainable shape

The biggest problem with this is that the effects will usually not show up in the current sprint, but will probably cause a severe slowdown later on, either because of bugs that need to be fixed later or because all the technical debt will make any further changes so much more complicated. Let me repeat this: if you don't care deeply about the quality of what you build, the effects will slow you down, maybe not today, but certainly tomorrow.

Don’t ask for more points, instead drill deeper to figure out what is holding back the team and focus on fixing that.

Why the team can’t move faster and what to do about it

My friend then came up with the idea that it might be worthwhile to look beyond the usual Scrum process, maybe consider Kanban or something else. My perspective (and reaction to him) was that he was on the wrong track. Yes, Scrum does add some rituals that some people consider unnecessary baggage that doesn't add value. In my experience that's either because the team in question is actually already a highly experienced team that delivers very successfully, or because the process parts are executed so poorly that they don't help the team. But usually the problems are elsewhere, and simply changing the framework you're using will not fix those.

Process problems

Let's be clear: yes, in some cases you actually have a problem in the way work flows through your system. One classic scenario is that all testing work is only started at the end of a sprint, when all the implementation is done; another is that you have too many people working on items in parallel, so that nothing can be finished early. You are basically setting yourself up to find bugs too late for them to be fixed, and very often also to have left-over testing-only work that needs to be completed in the next sprint.

If you're using any method to inspect and adapt (e.g. regular retrospectives), this is one of the things you would expect the team to be able to identify themselves and also to take action on. Some guidance from an experienced Scrum Master can be really valuable here; she could, for instance, use an activity to identify the value stream through the team. Also, ideas from various frameworks can help, e.g. setting a work-in-progress limit as done in Kanban.
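
To make the work-in-progress idea a bit more tangible, here is a minimal sketch of how a WIP limit acts as a pull constraint on a board column. This is just my own toy illustration, not part of any Kanban tool; the column name, the limit and the story names are made up.

```python
# Toy sketch: a board column that enforces a work-in-progress limit.
class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, item):
        # Pulling new work is only allowed while the column is below its limit.
        if len(self.items) >= self.wip_limit:
            raise RuntimeError(
                f"WIP limit of {self.wip_limit} reached in '{self.name}': "
                "finish something before starting new work"
            )
        self.items.append(item)


in_progress = Column("In progress", wip_limit=3)
for story in ["login form", "password reset", "audit log", "CSV export"]:
    try:
        in_progress.pull(story)
        print("started:", story)
    except RuntimeError as err:
        print(err)
```

The point is not the code but the rule it encodes: once the limit is hit, the only way to take on new work is to finish something first.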

The other classic case of process problems is where the organization puts up several hurdles in front of the development teams without even recognizing it. This can range from the forced use of ineffective tools or underpowered machines, through tons of bureaucracy, to outdated micro-management practices. These process problems are far from easy to tackle and usually require a lot of help from somebody outside of the development team (e.g. the Scrum Master or some manager).

So, yes, sometimes your process is a problem, but it’s unlikely that it’s a question of Scrum or Kanban (or something else).

Worthless work doesn’t motivate

If your team doesn't seem to have the high spirits needed to tackle the next job, it's probably not related to your organization. If your product is really worthless, no one will be motivated to put in any effort. I have a good friend who was literally asked to add piles of sh*t to a game his employer produced. That turned out to be a rather short-lived work relationship. I guess such an experience is the exception, but nevertheless, in some cases people see no value in what they produce.

Let's assume your team is working on a genuinely useful product. This does not mean that it's obvious to everybody that the next version or feature serves a useful purpose, too. Sometimes the people who know best why delivering a product to the customer is important are the same ones who don't understand the need to explain this to the development team. But this is vital for motivating the good people that are building the product in the first place. Ignoring this often goes along with a mindset of seeing the development team as a feature factory: a black box into which you feed specifications and which will spit out the next version of the product on the other side.

The problem is that on the inside of the factory, the value of the work remains in the dark, of course. Every feature that needs to be built is just another feature, all different but also all the same. As a result, team members will get the impression that it's actually not all that important what exactly they are building, outside of meeting the deadline, of course. Why there is a deadline in the first place, what happens if it's missed, or whether the actual customer problem could be solved better with an entirely different solution is a problem for other people, "above my pay grade". Of course, if the team doesn't understand the value of what they're building, how could they care about delivering something with better quality or with higher efficiency?

I'm convinced that the root cause of this issue is a lack of collaboration between the product managers and the development team. Try to live up to the idea that "business people and developers must work together daily throughout the project". It's hard to just throw requirements over the fence without any further explanation if the developers can ask all sorts of questions every day in the stand-up. Day-to-day collaboration and just spending a lot of time together (aka "socializing") will also help with the insight that the development team consists of fellow humans with brains and a genuine interest in their work mattering.

The other answer to this widespread problem is to make sure that at the start of a project (release, product increment, sprint, etc.) the person responsible for the success and the money explains the vision and the goal that needs to be met. If you can back up this story with customer input or other market research, all the better. When explaining and discussing things (the what), keep referring back to the goal and the vision (the why). This will lead to better discussions, better solutions and almost certainly to higher motivation in the team.

Technical debt

“If your code is crap, stickies on the wall won’t help” — Rich Rogers

The truth of this quote can't be overstated in my experience. The more technical debt you accumulate, the slower your team will become as it struggles to understand all that crap. Code that is hard to understand also makes it easy to introduce new bugs, and every new bug means time your team will not spend on new features. Bad code will also demotivate your people, as they feel like Don Quixote fighting an endless battle against windmills that constantly throw more bad code into their faces. There is also the broken-window syndrome: if people get the feeling that everything looks awful and is broken already, they start to feel that it's alright to put in even more spaghetti code.

“Continuous attention to technical excellence and good design enhances agility” — Agile manifesto

I think it's a shame that the focus of agile has shifted from methods like XP to frameworks like Scrum or SAFe, which don't have the same emphasis on technical excellence. Don't get me wrong, I know that a lot of the techniques promoted in XP, like TDD, are now considered state of the art for software development, regardless of the process you are using to organize the work. But the thing is that it's too easy (for people like the Scrum Master) to concentrate on getting stuff "done" and other core pieces of such frameworks, and to let technical excellence slip completely out of focus.

The right action to take is to make people constantly aware that good, maintainable technical solutions will make life (or at least work) so much easier and nicer. Use a Definition of Done that focuses on quality and maintainable code. Establish pair programming and code reviews. Make sure you get technical debt items into the backlog and that they are also prioritized appropriately.

Note that this is not about “gold plating” and spending endless hours on polishing code. But honor the Boy Scout rule that you should leave any place you visit a little bit cleaner than before.

Lack of expertise

One related problem is that you might have people in the team who don't have all the necessary expertise that would allow them to work "faster" or to find high-quality solutions. This usually comes in two flavors:

  • lack of general development know-how (e.g. SQL, programming languages, design patterns, etc.)
  • lack of understanding of the existing code base

Both are usually problems which will become smaller over time, because your beginners will gain more experience both in the fundamental technology and in the code base on which they are working. Be aware, however, that it's easy to reach a level where people have learned just enough to be dangerous: they can produce something that "works" but is not of the required quality. It's easy to stay on this low plateau of understanding when there is no mentoring or coaching beyond this level.

The key idea to solve this problem is to actively invest in the expertise of your team members. One idea is to have dedicated training sessions addressing the required know-how areas. However, getting a theoretical introduction to a topic is usually not enough and should only be the starting point for gaining hands-on experience. So, I think it is crucial that the overall team understands that the more experienced people can fundamentally help the less experienced people while doing the real work — by patiently answering questions, explaining concepts and reasons, by pair programming and general technical coaching. Note that this isn't a one-way road (from senior to junior) either: it becomes a lot easier if the inexperienced people are confident that it's okay to ask for help and to admit failures without having to fear retaliation — which brings us to the next point.

Team conflicts

The agile manifesto prefers "individuals and interactions over processes and tools", but it's so much easier to focus only on the latter. Technical people especially are often all too happy to ignore people problems (maybe that's why they chose to go into tech in the first place). Team conflicts come in various flavors: some people might not want to work together, maybe there are blame games being played, or bad rumors are being spread behind people's backs. You don't have to be able to recite the proverbial Five Dysfunctions of a Team to understand why deep conflicts within a team will be a fundamental blocker on the way to more effective and efficient results. Emotions will get in the way of any discussion, technical or otherwise.

While the damaging effects may be obvious, actually figuring out that there is such a conflict in a team might not be so easy if there is no open fighting. Fighting openly is very often avoided by all involved parties, in the name of so-called professionalism. Even if it becomes clear that there is a conflict (or multiple), the team might be very reluctant to actually discuss or resolve it. This is directly related to the conflict, of course, because people need at least some feeling of psychological safety (and ideally also trust) to be able to address problems openly.

I think it's rare that deep conflicts within a team are resolved without involvement from the outside, regardless of whether the external person is a manager or just another colleague. This takes a lot of patience and even more empathy. An important ingredient here is to make sure that the emotional aspects of the conflict actually come to light.

Ironically, the prime directive for retrospectives, which is intended to remind people to be respectful, can sometimes be a hindrance to openly discussing the conflict. People can misunderstand "believe everybody did the best job they could" to mean that no criticism should be voiced at all, when it is actually supposed to build the trust needed to speak up openly about problems. Activities like Spot the Elephant can help here. In my opinion, it is okay when this results in hard discussions and hurt feelings and when emotions become apparent. However, it is the responsibility of the moderator to ensure that these emotions don't get in the way of a discussion of solutions. Ideally, the involved people should be the ones who come up with the solution. Unfortunately, in the case of deeper personal conflicts, the involved people often have no idea how to separate factual discussions from emotional aspects, or have no experience in giving or reacting to feedback without deepening the conflict. Moderation by some experienced "outsider" is key here. Of course, in some cases the best resolution might be to change the setup of the team, but this amounts to losing the game.

Keep it simple, stupid

In my experience, the main problem in software development is rarely the technology or the overall process, but almost always people and their interactions. Note that even technical debt comes either from inexperience or from people not collaborating enough to point out the growing quality problems. So, if you think the process is the problem, I would recommend taking a deeper look.

ObTitle: Adam & the Ants, “Stand and Deliver” from “Prince Charming”


More than the sum of its parts -- splitting requirements

In the last article on the right size of a user story, I discussed some aspects that contribute to finding an answer that works in your context: clarity, functionality and effort. In this post, I want to discuss the question of how you can move from a big feature to smaller stories. The assumption here is that we already understand the requirements "well enough" (maybe captured in a use case document), so we have already reached a quite clear picture of what needs to be done.

Špalek na štípání (a chopping block), by Chmee2 (own work), GFDL / CC BY-SA 3.0 / CC BY-SA 2.5-2.0-1.0, via Wikimedia Commons

There is quite a bit of literature on these questions, with the book "Agile Software Requirements" by Dean Leffingwell being one particular source of input (the SAFe page on stories gives an overview). When asked what I would recommend, the usual consultant answer would be "it depends", but I think it is safe to come up with some rules of thumb.

0. Don’t split

The bottom line for story splitting is that you should avoid it if you can. Having a lot of very small stories comes at a price — loss of overview is an important one, and if taken too far, it can also become hard to reason about the business value of a story. Formulating the "… in order to" part of the classical user story template ("As a <role>, I want <action> in order to reach <my goal>") is often a good sanity check: if you have a hard time finding the wording for "my goal", that is usually a sign that you've split up the requirement too much. There is one classic mistake that can easily lead to this situation: if you break down a requirement by components or systems, it's highly likely that you will have problems formulating a reasonable business goal — most probably you have split off an implementation task, cf. below.

Assuming the main reason for splitting is to fit the work into one iteration (or one week of the iteration, cf. above), if your story is already "small enough" in terms of effort, then there is not much point in going for even smaller stories. You will run into the occasional discussion that a story should ideally be finished ("moved to done") within one day, but it should be up to the team to decide whether pushing for this really brings any benefits, like additional helpful feedback. More often than not, the team might just end up gaming numbers, e.g. the number of stories they can finish. There is an obvious tension here between the benefits of small stories (delivering working code early, enabling quick feedback) and unnecessary overhead (splitting in a meaningful way, documenting, managing and tracking the larger number of stories), which needs to be balanced according to the situation at hand, but try hard to avoid splitting into overly tiny details.

The other question is: do you need to split the requirement now? If the requirement does not need to be elaborated now, because it is not planned for the current or the next iteration, there is no need to split it now — potentially this requirement will change over time anyway, so prematurely driving for details might actually cause more problems than it helps. For instance, when a bigger feature involves multiple workflow steps and you split the feature into stories that each implement one step at a time, you might have a hard time tying these pieces together in your backlog if you later want to revamp the workflow in its entirety.

1. Do the minimal thing first

If you think you need to split up a requirement, try to identify the "minimal viable functionality" as the first story. This is the minimum functionality that the system needs to provide to bring some (potentially rather minimal) benefit to the user. This functionality becomes the first story, whereas all other aspects of the requirement are addressed (as extensions) in separate stories.

Ballpoint pen parts: which parts provide the minimal usable functionality? Photo assumed to be by Pavel Krok~commonswiki, CC BY-SA 2.5, via Wikimedia Commons

As an example, let's again think about user preferences for some system: the minimum useful functionality for a user probably consists of setting the preferences, saving, loading and applying them. Simply counting these steps leads you to the observation that this alone consists of at least four different functionalities. However, exporting my preferences to a file and importing them on a separate system is certainly not "minimal". Depending on the intended users, even providing an inline editor to set the preferences might not be part of the minimum functionality. These functionalities could become separate user stories. The drawback of this approach is that the resulting stories are not truly independent any more: you cannot prioritize exporting and importing preferences to be done before setting, saving and loading them.

Now, of course, this minimal viable functionality might still consist of a lot of work and might still be "too big" (in implementation effort, not fitting into one iteration). Here all the ideas from Dean Leffingwell are applicable, in particular trying to split by persona or business rule variations, data entry methods, data variations or workflow steps. In our example, setting and saving preferences as well as loading and applying preferences could be two different workflow steps (and hence different stories). However, this approach would probably result in stories that are too small according to rule of thumb 0 above. When splitting by workflow steps, try to have a minimal complete workflow in one story, or at least plan to have this minimal workflow implemented in one iteration.

2. Optimize later

A good guiding principle in programming is to avoid premature optimization. While the principle talks mostly about optimizing the performance of computations, we can also use it as a rule of thumb for splitting requirements. The idea is to take the simplest approach that could possibly work. This has rather broad applicability, in terms of technical aspects (architecture), but also usability and other so-called quality or non-functional requirements. After implementing this first rough version, use the feedback to check whether it is "good enough" — e.g., whether performance, usability or security requirements are already met. If not, optimize these "-ilities" later on.

Some people might find this suggestion offensive. Security, for instance, is one quality that many claim cannot be designed into a system as an afterthought. But that is also not what I'm suggesting. Instead, I'm suggesting formulating these "-ilities" requirements separately, to isolate the functional core of the big requirement and to implement the optimizations later. This is quite independent of the need to design for them — whether you design them upfront (as this might be part of your constraints in a "Definition of Ready") or within a sprint is a different question, but of course you need to think about the required qualities that the functionality needs to have. The idea here is simple: if your story is too big, put the proverbial lipstick on the pig in the next round. If you are afraid that your product owner will not care about security and will lower the priority of these security stories to the point that they will never be implemented, then either your product has a bigger problem anyway (and probably security problems already), or the usage scenario that the product owner has in mind is different from what you expect. In both cases, spelling out the required quality (and the accompanying work and effort) in a separate story could even be helpful to spark a discussion.

That being said, breaking down your stories by functionality should be preferred. This is especially true if your stakeholders have some expectation of these qualities — if you miss out on them, you'll likely receive feedback to that effect. At least you'll then be able to point to another story that addresses it. Again, finding a good balance is important, but also not always easy.

3. Avoid splitting by technical components

Fat Slice pepperoni pizza slice: who would prefer the naked pizza crust over a slice of the real thing? By BrokenSphere (own work), GFDL / CC BY 3.0, via Wikimedia Commons

Technical user stories that are intended to cover work specific to some component are an anti-pattern. Typically they are brought up by people who are still stuck in the old trap of splitting work along component responsibilities instead of features, cf. the discussion of features and components in SAFe. Don't split by technical components unless the component can deliver business value / user functionality on its own.

Typically you cannot reasonably test some functionality for one component on its own (e.g. the UI alone, not connected to the database). Hence, you will also not be able to get any feedback on the implementation of the story. For example, saving and loading your preferences from the database is certainly a reasonable technical task, but how would your stakeholder test this without the UI being implemented? How would your product owner find out that there is a minor problem in the saved data if the only way to access it requires sending a web service request or proficiency in your programming language of choice? So, even though you might have finished the work on the component completely, you will not get the quick feedback that we would like small stories to enable. And, of course, the component alone will not be delivered as a result of your iteration either, so it's not adding any business value at this point.

This rule of thumb can also be debated. There are situations where one component can already provide value — e.g. a CMS where you extend some functionality in the authoring environment first, so the editors can start using it right away. Even if the readers cannot yet see the result of the new features, some business value might already come from easing the work of the editors. This case should not be confused with a different line of reasoning: for the UX designer, for instance, it might make a lot of sense to give feedback on a UI preview that is not connected to the database, and your DB admin might already point out problems with the schema. However, this is internal feedback, not customer feedback. In the end, the measure of progress in agile projects is delivering working software to the customer — and from a user perspective, both of these examples fall short, as essential functionality is missing. Instead, go back to rule 1 and try to identify the minimal complete slice of functionality that delivers value to the customer.

Task splitting and Spikes

There is, however, also a time when it is exactly the right thing to break down a story into technical tasks that need to be finished: this happens when the story is ready and due to be worked on given its priority, typically as part of a Sprint Planning 2 meeting in a Scrum setup. Engineers especially find it easy to fall into the trap of confusing story splitting (requirement work done upfront) with task splitting (the organization of concrete work within a sprint). A focus on technological considerations (and often also personal priorities) can quickly lead to splitting work in a way that is developer-friendly but does not deliver results to customers quickly. Let's discuss some aspects of how task splitting influences work organization.

Stating the obvious, splitting technical work into tasks requires technical understanding, both of the system in which the requirement is implemented and of the technologies involved. But it is not always the case that the team understands the system or the technology well enough. In this case, the team can decide to spend some time on research work to figure out what exactly needs to be done. Don't define this as a "research implementation" task and plan to complete the entire story; you likely have no idea how much work it takes to complete everything anyway. Instead, treat the research work separately; this is usually called a spike. As the goal of a spike is to learn something, not everything, a strict timebox should be defined (usually not exceeding one or two days), after which the spike is called off, regardless of how much insight was generated. Spikes are hence usually not estimated in story points but reduce the capacity of the team, so you should also try not to have more than one or two spikes per iteration.

Felling snags on a fire line around the Coquille CCC camp, Siskiyou National Forest: work in parallel to get it done. By OSU Special Collections & Archives, via Wikimedia Commons

Splitting a story into tasks follows different rules: for instance, splitting by components is usually a good idea, especially if your team does not consist only of full-stack developers. Also, even with full-stack developers, you need to start somewhere, and good software architecture practice suggests starting by clarifying the interfaces between the various components. Which brings us to one big issue of task splitting: identifying dependencies between tasks. These usually follow directly from the organization of the components. Of course, as with user stories, you should strive to avoid dependencies in the first place (and use dependency injection and other mechanisms to decouple components). This is not only good engineering practice, but also allows you to parallelize the resulting work (i.e. have multiple people work on the same story). Teams sometimes split their work such that only one developer is actively working on a story that consists of multiple tasks. This usually leads to a situation where many stories are started but all take a long while to be completed. Instead, try to distribute the work between multiple people: the frontend ninja can start his JS foo while the backend expert whacks the database in parallel, modifying the previously defined contract as required during the work. If the team can work on multiple tasks of a story in parallel, this not only encourages collaboration and team spirit, it also ensures that the highest-priority work items are finished first.
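
To illustrate what agreeing on a contract and decoupling via dependency injection can look like in practice, here is a minimal sketch. It is my own illustration, not taken from any of the referenced sources, and all names (PreferencesStore, PreferencesScreen, etc.) are hypothetical: the two developers agree on an interface, one builds the UI flow against a throwaway fake, while the other implements the real database-backed version in parallel.

```python
from abc import ABC, abstractmethod


class PreferencesStore(ABC):
    """The contract both tasks agree on before the work is split."""

    @abstractmethod
    def save(self, user_id: str, preferences: dict) -> None: ...

    @abstractmethod
    def load(self, user_id: str) -> dict: ...


class InMemoryPreferencesStore(PreferencesStore):
    """Throwaway fake, so the UI task is not blocked on the database task."""

    def __init__(self):
        self._data = {}

    def save(self, user_id, preferences):
        self._data[user_id] = dict(preferences)

    def load(self, user_id):
        return self._data.get(user_id, {})


class PreferencesScreen:
    """The 'frontend' task: it only knows the contract; the store is injected."""

    def __init__(self, store: PreferencesStore):
        self._store = store

    def change_preference(self, user_id, key, value):
        prefs = self._store.load(user_id)
        prefs[key] = value
        self._store.save(user_id, prefs)
        return prefs


# The frontend work can be demonstrated (and reviewed) before the real
# database-backed store exists; later, that implementation is injected instead.
screen = PreferencesScreen(InMemoryPreferencesStore())
print(screen.change_preference("eva", "theme", "dark"))  # {'theme': 'dark'}
```

Once the real store exists, only the injected object changes; both halves of the story can come together without one developer waiting for the other.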

But the tasks to complete a story usually also involve other work besides coding, e.g. UX work or manual exploratory testing. More often than not, this work does not happen as "on the spot" as the coding; instead, it's a quite common anti-pattern to treat an iteration like a mini waterfall: UX (for everything) happens at the start, then development, and testing (of everything) happens at the end. This can lead to a lot of unnecessary work done upfront (UX for stories which the team actually doesn't get around to) or too late (testing starts so late that it cannot be completed in the sprint). Again, try to do the work only when it is necessary and try to do more of it in parallel.


Requirement anorexia -- does size matter?

One of the recurrent puzzles with regard to requirements management that I keep running into is the question of what the "right" size of a requirement would be. It often starts with some people coming up with very tiny and very specific requirements and then some very huge, completely fuzzy ones to go along with them. Then they have heard or read somewhere that in agile setups requirements should be small, and they wonder what this means and how to get there.

"Advanced Scrum", cartoon on epics by Oliver Widder, licensed under CC BY 3.0, from [Geek&Poke](http://geek-and-poke.com/geekandpoke/2016/11/6/advanced-scrum)

These huge requirements are often called "epics": an epic is a requirement that is felt to be either too "large" or too "fuzzy", and one way to avoid never-ending dramas is to split it into smaller user stories. Actually, here we already have two very distinct aspects: on the one hand, a requirements engineering question of how to move from unclear feature ideas to concrete, detailed requirements; on the other, a requirements management question of how to organize requirements so that they are appropriately prepared for implementation. I discussed the requirements engineering part in an article on agile conception already, so I'll focus on the latter part here. This size question is actually two-fold: what is a reasonably small story? And how can we break down a big feature into several smaller stories? I'll address the first one below and the second in an upcoming article on splitting requirements.

Why small stories?

Let’s first explore the reasons to go for smaller stories. As mentioned above, there is often a felt need to come up with “small” user stories, whatever “small” might mean. This is often even part of making a requirement “ready” for the team to start working on it (cf. also how a felt need for “small” stories is a hint for immature teams).

The main reason behind moving from "big" feature requirements to "small" user stories typically boils down to the team wanting to ensure that the work they take on can be finished within one iteration. From the agile point of view, that's a good idea: we want to deliver value in terms of working software as quickly as possible. But even if your delivery is not exactly continuous and also doesn't always happen at the end of the iteration, there is a lot of value in using small stories: developers, UX, product owner and stakeholders should give and take feedback on the completed work to ensure the development is on the right track (in terms of verification, "Did we make it right?", and validation, "Did we make the right thing?"). The earlier we learn about problems, the better, so we can make easy adjustments. Arguably, small stories can be implemented more quickly and thereby help to get the necessary feedback quickly. Also, in case adjustments need to be made, changing a small story is hopefully easy (compared to changing a big requirement). Small user stories also help with planning: for one thing, it is usually far easier to come up with an accurate and precise estimate if you've broken down a big problem ("epic") into tiny pieces.

What is a reasonable small story?

This one has an easy answer: any requirement that the team feels can surely be finished within one sprint is reasonably small.

Does size equal effort?

Unfortunately, this answer isn't particularly useful. It simply takes the main reason outlined above for small stories as a minimal acceptance criterion. If we take a step back, the size of a requirement seems here to be directly bound to the effort it takes to implement it. It looks like a continuum: from "no implementation effort" to "takes the entire team the entire sprint to implement".

But a story that takes the team the entire sprint to implement is almost certainly still too big. Think about it from a risk perspective: let's assume your estimate is just a bit wrong (as in "too low"), e.g. you discover some previously unknown technical problem during the implementation. In this case you will not be able to finish the story within the sprint — from a work management perspective, you clearly want to avoid this situation, as you'll run into all sorts of unwanted discussions (from relevant ones, like when you will get feedback, to irrelevant management ones, like what can be done to help the team deliver on time or come up with better estimates).

Burn-up charts: the effect of story size on a burn-up chart

Some teams take a very harsh stance on this and demand that a "ready" story should not take longer than one day (of implementation and testing). I don't believe in such strict rules, but I think a good upper bound is "no longer than one week". With story point estimation this recommendation might look strange, but the simple idea is that if you know your team delivers ca. 20 points per two-week iteration, a 13-point story is probably too big, whereas an 8-point story could be small enough.

Of course, you can use the same risk and project management reasoning to demand even smaller stories: e.g., with a team velocity of 20 points and three stories of 8 story points each, how would you plan your next sprint? Commit to two stories (summing up to 16 points) and leave one as a stretch goal? If you start working on the stretch goal but are not able to finish it, your efforts and results will not be visible (for the stretch goal). Instead, if you could break down the stretch goal into a 3-point and a 5-point story, you might be able to commit to more points. Also, your burn-up chart might look a lot smoother with smaller stories than with big ones, cf. figure 2. But beware: what we really want is quick feedback and delivery of valuable increments. If smaller stories help you with this, great; if not, you're likely just gaming numbers or making management happy (cf. also Johanna Rothman on the value of burndown and burnup).
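
To spell out the arithmetic of this example, here is a toy sketch. The greedy "commit in priority order until the velocity is used up" rule is my own simplification for illustration, not a planning algorithm to follow blindly; the numbers are the ones from the text above.

```python
def plan_sprint(story_points, velocity):
    """Commit stories in priority order until the remaining velocity is used up."""
    committed, remaining = [], velocity
    for points in story_points:
        if points <= remaining:
            committed.append(points)
            remaining -= points
    return committed, remaining


velocity = 20
# Three 8-point stories: only two fit, 4 points of velocity are left unused.
print(plan_sprint([8, 8, 8], velocity))     # ([8, 8], 4)
# Splitting the third story into a 3 and a 5 lets the team commit 19 points.
print(plan_sprint([8, 8, 3, 5], velocity))  # ([8, 8, 3], 1)
```

Whether squeezing out those extra points is worth the splitting effort is exactly the judgment call discussed above.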

Consider functionality, too

However, the size of a requirement could also be addressed differently. Boris Gloger (e.g. in his German book on agile estimation) argues that estimates should not be based on effort or risk but on functionality. From this perspective, the smallest reasonable story is one whose implementation will enable one functionality.

This looks somewhat circular, as we have now moved from one fuzzy term (size) to another (functionality), in particular since in today's complex systems "one" functionality could be composed of a lot of different pieces playing together nicely. But requirements work is all about defining functionality in terms of specific, detailed user activities or system behaviors, and as such it is arguably easier to grasp and figure out than some number of story points.

It is usually best to look at the functionality from the user's point of view, because this will most closely resemble the business requirement anyway. But note that, without going further into architecture considerations, you might want to define functionality per system / component if you can define and observe the expected behavior per component. The "observe" part here is important, as it is implemented, observable behavior that you can give feedback on. For the very same reason of getting feedback from humans, defining functionality from the user perspective is preferable.

Now that we have discussed what the smallest story in terms of functionality could be, note that this might not be the most reasonable choice in your context. As a made-up example, consider a feature for saving a document in some system. An additional feature could be to save a document under a different name ("save as"). From the implementation point of view, the difference in functionality might be so small that it's very easy to implement both in one go, so the team might decide that splitting off the "save as" is not worth it.

Another example could be where one user-visible functionality (e.g. sending an order) requires the interaction of two systems (e.g. sending the order from some web application to some ERP system). Considering just one system part (e.g. the web application's sending part) as a user story might not be reasonable — it cannot be tested alone, hence doesn't allow for feedback, and it also doesn't provide any business value without the other system part. I believe this is where the entire team needs to give its input and consider not only the system functionalities but also (all) the tasks and what the minimum is that makes a product increment valuable.

So now we have two different boundaries: whereas size in terms of functionality provides us with a lower boundary for a requirement (“implement at least one user-visible functionality”), size in terms of development effort provides an upper boundary (“can be implemented by the team in at most one week of an iteration”).

Clarity is required

Now, I believe there is a third "size" of a requirement which is related to clarity or level of understanding. Recall that at the start of many projects (or product phases) you'll have a lot of very tiny and detailed requirements and some "bigger" requirements which have not been explored (or engineered) yet (which is what I would call "feature ideas"). An example of the former would be a user story along the lines of "As an editor, I need a save-as button right next to the save button so that I can give a new name under which to save my document", whereas the latter would be something along the lines of "As a user, I want to save my preferences". Quite obviously, the former is quite "small" and "clear": it already says that a button is needed and what the expected behavior is. "User preferences", instead, is just a label which might hide a huge list of things that need to be made customizable by the user, and unless we break down this list into details, this requirement needs to be considered "big". Another way of looking at this "big" story is that it is certainly not "ready" for implementation, so it is surely not a reasonably small story. But even for the much clearer "save-as" story, there might be some lack of clarity on how to provide the filename or on other aspects (e.g. technical issues). Again, we obviously have a continuum between very clear and completely unclear.

Unfortunately, clarity alone will not be enough to decide whether a requirement is "small" enough: you can get very clear requirements that consist of lots of functionalities and are way too big to be implemented within one iteration (never mind huge use cases or detailed specifications). However, from the perspective of "can we start implementing this requirement", clarity about what needs to be implemented is the key ingredient that distinguishes a "ready" requirement from others. Clarity is a prerequisite for breaking a requirement down into different functionalities and coming up with a reasonable estimate (although it's often also true that trying to break a big, unclear requirement into smaller bits and pieces in order to estimate it will help to gain more clarity). Turning this around: even if you have a story that is reasonably small in terms of effort, if the team is raising concerns about the certainty of the estimates, the team is probably not sure whether it understands the functionalities that need to be implemented, and hence the story is probably still not "ready" for implementation.

Summing up, this leaves us with three important aspects influencing the “right” size of a story: clarity, functionality and effort. In the next article, I’ll explore some rules of thumb to split a big requirement into smaller parts.


Keep learning

I'm happy that there are apparently still people writing articles about Scrum that are not ranting and venting about how agile sucks: Colin Higgins explains that Scrum is not our saviour. It basically boils down to the old "no silver bullet" saying:

No, SCRUM, or any of the other agile methodologies will not just magically fix your throughput problems after you adopt every facet of them religiously. These methodologies will just show you how badly your development practices suck.

I agree completely. But there are some agile development practices and tools which can help you. Go look up extreme programming (XP) and what it urged developers to do. CI/CD, TDD and pair programming are by now widely accepted development practices which can help, because they aim at surfacing problems way before they end up in production.

However, there is a boundary to what these tools alone can offer. A lack of knowledge of the technologies or a general lack of problem solving skills are fundamental issues which you can overcome only by learning and gaining experience, if at all. Another quote from the linked article:

Agile process alone won’t fix your development problems, you need technical excellence for successful projects. The two are necessary for a continually successful organization.

I would agree in principle, but there are two important issues here:

1) "A fool with a tool is still a fool" and "technical excellence" are the two extreme ends of a large scale. It's the same as with car driving: most developers, like most drivers, like to see themselves among the top 5%, but as we all know there is not enough room for everyone within that 5%. Still, too many developers shrug off the call to technical excellence, either because they believe they are already great or because they have no idea what technical excellence actually means. For these developers, there is always someone else to blame.

2) Requiring the proverbial “rock star developer” is — for reasons discussed above — in my opinion not a viable option. In other words, while Scrum might not be a saviour, we’re not doing anybody a favor if we instead declare “technical excellence” as the silver bullet that can lead to success. Yes, we should strive for technical excellence and do our best to improve our capabilities (technical and otherwise). This is what the article rightfully demands of developers: do the best you can and try to continuously improve. But there still needs to be a middle ground, where it’s possible to run projects / products successfully with the people you currently have. Agile helps with that because it makes clear what is realistically possible and it aims for continuous improvement, e.g. with retrospectives. To repeat the quote from above: “These methodologies will just show you how badly your development practices suck.” That’s a great starting point. It’s a learning opportunity. Go use it.

Note that this is important not only on a personal level, but also on an organizational level: Scrum Masters should try to change the company culture to value learning. This requires room for experiments and for failures, where room translates to time and safety (i.e. support). On the sprint level, this might translate to using spikes, but on an organizational level it might involve things like prototypes, split testing and other ideas from the lean startup camp.


Agile teams: organization should follow work

I recently did some coaching on how agile teams work, what roles there are and what their respective responsibilities are. It’s probably no surprise that one of the main topics was self-organization and what this implies for how agile teams work, but also how to get there and how to solve the various puzzles (like distributed teams) that occur when working with bigger team setups. You’ll find the slides below. I was asked to talk about these issues because the team was having trouble understanding what was expected of them and how to organize themselves. In my experience, that’s quite a common situation to run into when the team members (or at least some of them) have little or no experience working in an agile setup. But even when you already have some experience, it might happen that the team, or at least important members of it, loses the understanding of how to work together. And of course, that’s much more likely to occur when you interrupt the team working together, e.g. by putting some people onto different jobs, even if only for a short time. I guess this is what people outside of a team (i.e. managers) often forget: teams are usually only able to perform well over a longer time if they can focus on “it”, i.e. working on their tasks and on their understanding of how to work together as a team.

Anyway, I started by thinking about why people are always so eager to talk about roles and responsibilities. I came to the conclusion that it’s mainly a matter of safety, or the lack of it. That’s why you have these discussions mainly at the beginning of a collaboration. The important point that became apparent to me is that roles and responsibilities don’t matter in a performing team (performing in the sense of Tuckman’s model). And the corollary to that is: when you don’t know what is expected of you (as a team member), your team is really just a group of people and you don’t know how to work together as a team (yet). Let me elaborate on this: I believe that for a group of people to work together successfully, it’s most vital to have a common understanding of what needs to be done and to just do it (as the old sports company slogan goes). Of course, you will need somebody who will just do it, but does this imply that you have to have a formal org chart upfront that tells you who has which role? What if you happen upon things that need to be done that you did not think about when compiling your org chart? You will need somebody to step up anyway, so why not just leave it at “do what needs to be done”? Even better if all of your team have a common understanding of how to do things (cf. for example a commonly agreed upon definition of done).

My main (and very obvious) advice on solving the issue of “I don’t know what is expected of me” is to compile a list of things that need to be worked on and to get started. Whether this “list of things” is a backlog or just a todo list, whether it’s on a physical board, in an Excel file or in some other system, is not as important as actually working with this list: revisit it every so often to update it, prioritize items, move things around, augment and transform it as needed. And of course, remove items from the list that are actually done. But do this in a way that is clearly visible all the time to all team members. If you modify this list openly so that everyone can participate, everybody will learn about what is going on, and you can have discussions that will help team members understand why something needs to be done first and which things are problematic. And, last but not least, people will communicate, help each other out, give input on difficult questions and work together.

The other important ingredient for me is giving and receiving feedback. If you’re working in a Scrum-like fashion, the review and retrospective meetings at the end of an iteration are the ideal places for this. I believe that the awkwardness that retrospectives in particular have for people new to agile is an indicator of a company culture, team setup or personal attitude that does not honour open discussion and feedback. Retrospectives make people feel uncomfortable especially if they are feeling insecure. However, without feedback it’s difficult to learn, so organizations should in my opinion strive to provide safe environments for their people. A simple example of this is to have the retrospective facilitator use the “setting the stage” phase to ensure that people feel safe (cf. this article on creating safe environments for retrospectives), maybe by having the team come up with rules for safe communication on their own or by simply reminding them of already existing rules. However, the important point for me is that team members should learn to give and take feedback in any situation. No, I don’t mean that it’s appropriate to interrupt your colleague every five minutes to complain about some weird code, or that you should accept getting interrupted every so often. But if, for example, Maria believes there are many issues with the code that Peter checks in every day, it should be okay for both of them when she arranges a discussion with Peter on how to solve the problem. If Maria can provide her feedback in a way that is both personal and helpful, this will help build trust, which leads to a feeling of safety with respect to working with each other. Typically, such discussions in between (as well as during retrospectives) will address many of the fears, uncertainties and doubts that trouble people about what is expected of them.

I believe that discussions about how a team should organize itself, and the tools surrounding this (e.g. the RACI model), are mostly waste. Remember, during the time you’re discussing your organization, you don’t add anything of value for your clients. I would rather spend the time making progress on the tasks that are waiting for attention and figure out how we work together while actually accomplishing something.


Tiny cities made of ashes -- stories of agile maturity

Tom Boulet has recently triggered a discussion in the Agile+ community on Google+ about user stories, cf. the following quote from his inquiry:

A couple weeks ago, I heard someone say “The stories weren’t detailed enough for the development team..” I thought, Wha? Stories are meant to be high level and if you use acceptance criteria this comes after the team starts looking at them, doesn’t it? It sounded like the person felt that Stories had to be “fleshed out” somehow before development could get started. […] Also, I’ve recently seen talk of how stories should be small enough to code in a day or two (I’d call that a task) and another reference to “codable stories.” None of this sounds right to me. It sounds like a large number of Agilists are seeing stories as part of the design pre-development. Has the definition of Story shifted on me?

The resulting discussion is highly insightful. I particularly like one reply by Alan Dayley, which I quote in part:

It is hard to describe in brief sentences all the forces that pull organizations to keep them thinking in lots of details. A few might be:

  • A culture and history of Big Design Up Front that makes it hard for feature definition to happen just in time.
  • A continued need for long range planning requiring estimates for work that will possibly take months before it actually starts.
  • Skilled technical people who are accustomed to talking about technical details instead of the needs of the user.

And more…

The most powerful and subtle force to cause “user stories” to grow is a continued lack of focus for the people defining features and those creating the features.

For example, if I am Product Owner of 5-12 products working with 3 teams who are all working on multiple products, I am not able to think of a feature, collaborate on it’s definition as it is built, see it created and then think about the next feature. I have to document my thinking of every feature because one hour from now I have to think about other features that also might not be built right away.

The key to being truly Agile is to finish stuff. The more inventory of ideas, features, undeployed code, etc. that we have, the less Agile we can be.

This highly resonates with my thinking. But it also got me thinking and reminded me of another discussion I participated in, in one of Xing’s agile groups: for many companies, being able to keep stories high-level, have the Product Owner engage with the development team just in time and clarify all those nitty-gritty details quickly enough that the team can implement them, sounds like the wet dream of agile heaven. I, too, have mostly seen teams in which substantial time was invested by the Product Owner (at least), often accompanied by the Scrum Master and some development team members, to “groom the backlog” and to prepare the stories for the next sprint. This preparation, which more often than not involves adding lots of details according to some definition of ready, typically also includes story splitting. I’ve seen development teams outright refuse to work on stories “too big” according to some arbitrarily set size limit. I guess the reasons behind this basically boil down to fighting FUD (fear, uncertainty and doubt), cf. this article listing potential benefits of small stories:

There are lots of benefits from having smaller stories.

From a user and usage perspective, reducing the story size forces me to understand users and usage better. It forces me to separate what really is valuable or useful to the users from what’s merely interesting. From a release management perspective having lots of smaller stories allows us more flexibility with planning, prioritizing, and cutting scope. From a development and detailed design perspective, a smaller story often implies we know more about what to build - there’s less unknown, less risk – less risk of the development work mushrooming into something much larger.

There are two obvious problems with this:

  • The development and release management perspectives and their respective goals are not a good match for the business value that is driving a story. As a result, when you decompose stories into slices that are too thin, the business value proposition, i.e. the “in order to” part of Mike Cohn’s classical user story template, often becomes very awkward and only understandable in the direct context of some user interaction. Quick, tell me, what is the real business value of “As an order maker, I need to enter my credentials to access my orders” (cf. Gojko Adzic’s rant that writing “As a user” doesn’t make it a user story)?

  • The other problem is that often “understand users and usage better” is actually design work, as also asked by Tom Boulet in the discussion: “Can requirements really be separated from design? As requirements become more detailed don’t they really become design? Does this constrain the design part of development? […] And ultimately, can good design really be done without coding trial and error? Does a customer and PO really know what they want up front?” Here, the article by Mary Poppendieck on the Product Owner problem is helpful: according to her, the most important role of the development team is product design:

    The entire team needs to be part of the design decision process. Team members should have the level of knowledge of the problems and opportunities being addressed necessary for them to contribute their unique perspective to the product design. Only when decisions cannot be made by the development team would they be resolved by a product leader. The main team-facing responsibility of the product leader is to ensure the people doing the detailed design have a clear understanding of the overall product direction.

It would seem that a development team with a “definition of ready” that requires lots of details and splitting stories into thin slices is lacking this “clear understanding” and arguably refusing to take responsibility for designing the product. Again, my gut feeling is that this is a sign of the maturity of an organization with regard to agility: e.g., building this clear understanding typically requires long-lived development teams, which often clashes with the “project-based” approach taken by many organizations. And if development teams don’t know the market and the customers they are building the product for, where should this understanding come from?

In the discussion, Heiko Stapf also referred to the agile principle of “individuals and interactions over processes and tools”, noting that “Having PO/Tester/Techguys sitting together finding a common understanding and language” is more in line with favoring individuals and interactions, whereas “having the PO writing down requirements and passing them (in written form)” is more about process and tools. I find this to be indeed the case, where sometimes the underlying cause is that it’s difficult to arrange for enough room (time) to have the necessary interaction. But it might also simply be a lack of collaboration, i.e. when developers just feel it’s not their job to work on (the formulation and details of) user stories.

I’ve also seen teams run into problems with getting the right information in time. IMHO these problems are due either to a lack of knowledge on the Product Owner’s side or to a lack of empowerment. In the former case, when some detail needs to be clarified, the Product Owner might not have the domain expertise required to answer the question right away, and she needs to go back to some domain expert (who might not be immediately available, so delays occur etc.). The latter case is where the Product Owner has an opinion on some matter (e.g., whether to use this or that design) but feels she needs to go back to some major stakeholder (potentially multiple ones). Also, when the team (PO plus dev team) hasn’t yet found the path to close collaboration, the resolution of some issue might require too much time to allow for implementation. In such scenarios, the idea of using a definition of ready with small stories is actually aimed at ensuring that some work can be finished within the time box at all (this is linked to ensuring a story is “feasible”, cf. Roman Pichler’s take on the “definition of ready”).

Of course, there are also legitimate reasons to split stories which are linked to other organizational issues: e.g., when the story is so big that it can’t possibly be implemented by a single team in a given small time box and hence needs to be split across multiple teams or multiple time boxes. Or when the requirement is still more an epic than a “ready” story and, when split, the business value is still obvious for the smaller stories. And then there is also the process where you start out with a vague “feature idea” and need to understand it more before it is ready for implementation, as I’ve described in my post on agile conception.

In summary, I would say that a large backlog with lots of tiny stories is a clear indicator of a lack of maturity of the team and/or organization. In an ideal world, all our teams would be mature enough that nobody would ever want too many detailed written specifications upfront, but alas, in reality we have to work hard to get there. Unfortunately, there is no easy answer to this other than to try and persist: gain the knowledge and the confidence, build the tools and the trust you need.

ObTitle: Modest Mouse, from “The Moon & Antarctica”


Moving targets -- on agile conception

Two things are clear for every project and every product: there is always a start and an end. I’ll ignore the end for now and would like to take a closer look at the start. One has to wonder: where do all these projects or new products come from? Obviously, somebody finds out at one point that there is a need for something, and this something is the nucleus of the prospective development. If this idea requires significant effort, it’s typically necessary to convince the people responsible for the money that it is worthwhile to pursue the idea. Let’s take a closer look at what this something actually is and how it changes and needs to change before the developers can finally go and make it real.

To use a more descriptive tag, I will use the term feature idea for now, although we could just as well talk about ideas for a new product or product line. Feature ideas in a very early stage are rarely more than a single, simple statement or even just a few descriptive keywords like “I would like a button here where I can save my settings” or “remember me functionality”. Of course, whoever came up with the idea typically has further thoughts in mind, more or less fuzzy, but these are not communicated, let alone discussed or finalized.

A feature idea

Quite often these further thoughts are related to some existing systems or solutions, even if it’s just to describe what the respective solutions do not provide (note that these solutions might not be your own). Typically feature ideas are about some specific expected behavior of some system and basically boil down to what needs to be built. This is all good: feature ideas don’t need further elaboration or lengthy descriptions at an early stage.

How do you proceed from here to the nitty-gritty details that finally need to be done? In traditional settings, to get from the idea for a larger feature to development you need to jump over the budget hurdle, which usually requires detailed analysis results, including business cases, return on investment calculations and implementation plans. The latter consist of a solution outline, milestones, resource plans and cost requirements. It is important to note that substantial effort needs to be invested just to come up with all of this. The bigger the need for precise information, the bigger the required investment. And then at some point you hopefully get the okay to move ahead. But in case the idea is turned down, all this investment in detailed plans was just waste. In agile setups, we would like to avoid that waste. Still, of course, the need for deciding where the business wants to invest its money does not go away. But in order to do so, does senior management really need all those details? They don’t. There is a need for accuracy, not for precision (cf. the difference between accuracy and precision). Asking for all the details is a sure sign of a lack of trust, but I have never seen any senior management trying to verify that all those precise details were actually accurate, which probably explains why the demand for documents and spreadsheets is all there is. This is then typically combined with taking all plans as promises that need to be kept and not as means to enable successful investments (command-and-control vs. plan-to-replan).

What is surely needed boils down to just one thing: understanding the value of an investment. It’s often much easier to determine what you want or can invest than to precisely estimate upfront what you need to invest (cf. this discussion of the no estimates idea, which shows the wonderfully relevant Dilbert strip on the same issue). At the top level, investment is essentially tied to strategy: e.g. in the Scaled Agile Framework (SAFe, cf. Leffingwell et al.), investment themes determine the relation between budget (resources) and the program portfolio of business epics. This essentially requires an understanding of how each possible business epic or feature idea contributes to the overall company strategy (this is basically applying the Lean principle of “optimize the whole” to feature ideas).

We have essentially danced around the big elephant in the room by now: feature ideas often describe expected behavior, but we need to know about the value of an idea. But if all we have is a simple statement of some idea (and not the full-fledged analysis we would like to avoid), how can we know its value? The essential insight is that it might not be necessary to understand the precise value yet. The earlier in the process we are, the less necessary it is to perform lengthy and detailed analysis: determining the value of some idea is only a means to inform a decision on where to invest further. However, at least it should be possible to judge the value of one feature idea in relation, or better said in comparison, to other ideas, as Johanna Rothman discusses in her article on why cost is the wrong question for evaluating projects: at least in an early stage, it might be enough to use a very simple kind of rough value sizing, e.g. putting an “M” on one feature idea in comparison to an “L” for another. It is important to recognize that the value of ideas will change over time: in particular when looked at in relation to other things your business needs to think about, as new ideas come up, some ideas get implemented and others are discarded. Accounting for these changes over time is why planning to replan is important in agile methodologies. An important concept in agile methodologies that helps make this clear is the cost of delay: the idea is that you try to understand or predict how the value contribution of some feature will change over time, in particular what impact not pursuing the idea will have. This is most useful for prioritizing what to invest in: since cost of delay is all about loss of value over time, you will want to reconsider your priorities (and where to drill down further with an analysis etc.) as time moves on. (An alternative coming from the lean camp is to use an impact-effort matrix, but such a matrix needs an estimate of the effort required and also treats value as an absolute over time, which it usually isn’t. Cf. this article about cost of delay vs. WSJF in SAFe for a discussion of why cost of delay and value considerations are not the same.)
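To illustrate how such a prioritization by cost of delay can look in practice, here is a minimal sketch in Python; the feature names and numbers are invented purely for illustration. It simply divides a rough cost of delay per week by a rough duration in weeks (often referred to as CD3, and closely related to SAFe’s WSJF) to get a ranking.

    # Illustrative only: rough prioritization by cost of delay divided by duration.
    from dataclasses import dataclass

    @dataclass
    class FeatureIdea:
        name: str
        cost_of_delay: float  # rough value lost per week of not having the feature
        duration: float       # rough implementation time in weeks

    ideas = [
        FeatureIdea("remember me functionality", cost_of_delay=3.0, duration=1.0),
        FeatureIdea("new reporting module", cost_of_delay=8.0, duration=6.0),
        FeatureIdea("one-click checkout", cost_of_delay=5.0, duration=2.0),
    ]

    # Small, urgent ideas float to the top of the list.
    for idea in sorted(ideas, key=lambda i: i.cost_of_delay / i.duration, reverse=True):
        print(f"{idea.name:30s} CD3 = {idea.cost_of_delay / idea.duration:.2f}")

As with the “M” vs. “L” sizing above, the absolute numbers matter far less than the relative comparison, and since the value of ideas changes over time, the exercise is only useful if you repeat it when you replan.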

Inevitably, when you try to think about the value of some feature idea, you will think about what simple statements like “remember me functionality” should actually mean. Very often the way in which we name or describe a potential feature is a confusion (or at best a conflation) of problem and solution space, and that can make it quite difficult to judge its value. As such, naive feature descriptions can quickly become a big impediment. For one thing, being deliberately simple, they are usually ambiguous and often hard to understand. Does “remember me functionality” describe that I stay logged in on the website, or does it mean that the website remembers the visitor and shows specific ads related to their interests, or both? This is often the reason why some people attribute completely different value to an idea. Worse, such descriptions typically don’t give any clue about the need behind the wanted behavior, nor about the value this behavior brings. Just as bad, stating the feature idea in terms of expected behaviour (“I want a button that …”) limits creativity by already pointing in the direction of a possible solution: maybe there are other options which are much better suited, both from a usability point of view and with regard to the required functionality. So the conflation of problem and solution is problematic both for understanding the problem and for designing the solution.

So what is the process to move from a feature idea to the solution design? I think it’s useful to see feature idea statements as promises for analysis. The analysis of a feature idea should essentially focus on understanding the user and/or business needs the idea is related to. The stereotypical user story template “As a <type of persona>, I want to <perform X> in order <to achieve Y>” is a useful tool in this respect, as it is explicitly constructed to include and name the need (cf. Gojko Adzic’s rant that writing “As a user” does not make it a user story, on how this can go wrong when taken too lightly). Now you might not have such a story, or not yet, and maybe it’s not clear what this simple feature idea is about. That’s the time to engage with whoever seems able to contribute, first and foremost the primary beneficiaries of the feature idea. Traditional requirements engineering practice would suggest involving all relevant stakeholders (prospective users, management, marketing, the legal department, …) and employing whatever RE tool seems reasonable (e.g. brainstorming, workshops, open interviews, etc.). In agile setups, determining who actually is a stakeholder that needs to be heard is the responsibility of the product owner. I find that it is usually also a good idea to involve the team, because they will typically have a good understanding of the solution space and should also know about the envisioned user needs. Don’t take it too far, though: if you are still trying to analyze in order to understand the value of some idea, i.e. you are still at an early stage (the inception phase, as the Disciplined Agile Delivery folks would have it) of your planned feature, you should probably try to keep it light-weight.

You will likely get a lot of input on your feature idea, much of it in the form of statements just as simple as the one you originally started out with. Understanding the needs requires going beyond the level of simple statements or simple suggestions of simple solutions (Laura Klein has a nice description of how difficult that can be: “Sometimes users will tell you that they want a toaster in their car, when what they really mean is that they don’t have time to make breakfast in the morning.”). And although the famous 5 Whys might come across as somewhat strange or invasive in the course of an interview, it might take that many attempts to actually understand the problem that needs to be solved. If you’re lucky, a potential user will offer some simple insight into the problem with statements such as “I would like a button here so I can save my settings, so I don’t have to adjust them again and again”. Further analysis might reveal that the real problem is a usability issue and the user forgets to adjust the settings too often, or that the user is a regular user who would like to spend less time on repetitive tasks. The main point here is that this process is about understanding the needs, which typically requires challenging any superficial and obvious ideas: this is both about maximizing the amount of work not done (simplicity) and about building solutions that satisfy the user.

Change in understanding over time

This is now where the team basically takes over: the team is responsible for working out the solution, not only constructing and implementing it, but essentially designing it. This is not to say that the product owner or other stakeholders cannot or should not contribute ideas on how to solve it, quite the contrary. It is quite essential to have stakeholders around to enable the transfer of knowledge about the needs from the users or stakeholders to the team. So while the separate paragraphs of this text might suggest a handover from product owner to development team, from needs to solutions, ideally this should be a smooth transition with lots of participation of the entire team and face-to-face communication. Tools like Roman Pichler’s product canvas are useful, but there is a reason why the agile manifesto values individuals and interactions over processes and tools. Again, for working out the solution many ideas and tools can and should be exploited, from brainstorming and UI scribbles to models and prototypes. Typically, when designing the solution, many alternatives come to mind, and different people from different backgrounds will have different opinions and favorites. This is actually good, because you can embrace this diversity to think about the best possible solution. However, the best solution might not actually be the solution you want to implement; often the simplest thing that could possibly work is a reasonable way to move forward: it minimizes the effort required, which enables a faster time to market and thereby earlier feedback on whether the solution actually works (this is the tension between trying to build the best thing and learning fast from feedback about what actually proves to work). People with lots of experience in designing solutions will typically also come up with all kinds of challenging questions which might require drilling further down into the needs (e.g. questioning whether the feature is targeted at the right people or what the legal department has to say about it). This will trigger more learning about the feature under consideration. But this should not be taken as an excuse to run endless analysis or design meetings: it is often a better approach to end a discussion with a decision, move ahead and find out later whether this decision was actually flawed and needs correction. It is also a good idea to consider the risks associated with an idea: the more risk, the closer you want to consider your options. In general, when in doubt about which option to choose, think about what data would help to decide: performing light-weight experiments (e.g. technical spikes or usability tests with paper-and-pen sketches) might be useful for minimizing these risks.

A brief terminology recap: an epic is just some requirement (feature idea) that is so “large” and/or uncertain that it needs to be broken down, i.e. nobody could take an epic and just implement it; the developers would have to come back and ask lots of questions. If you wanted to, you could say that after breaking down an epic, all you end up with are features, but in agile circles these are usually split up into even smaller stories (cf. the image below). Of course, this is a completely arbitrary and often also highly context-dependent distinction. It’s also fairly common to use the user story template for describing epics. I used the term feature idea instead of epic to bring in some notion of time: epics start out as (feature or other) ideas. If you think about the timeline again, it’s clear that the closer you are to implementing some feature, the more details you need to know, i.e. the higher your level of understanding of the feature needs to be. And this is where the idea comes into play that you need to INVEST in your user stories, so that you can derive SMART tasks. This is obviously needed for the very next things you want to implement, and, as any risk-aware tech guy or project lead will know, probably also a good idea for the next few things to do afterwards. Business cases are mainly needed when you are going for big investments and when it’s critical to understand the return on investment. But this brings us back to the discussion of value vs. cost and of precision vs. accuracy. Use cases, finally, are a useful tool when it comes to understanding some set of requirements in the context of a larger system. They can also be useful for driving discussions with stakeholders and are often a useful starting point if you like to model your problem or design before implementation. I see use cases as a useful tool on the feature level, covering a broader range of more detailed stories. As such it does not matter much when to describe a use case: if it helps you drive your understanding of some problem, use it before drilling down to more fine-grained descriptions. If you already have a lot of details and you fear losing the overview, it can also make sense to reconstruct a use case from more detailed stories.

Uncertainty vs. time

Finally, let’s step back briefly and reconsider who is part of this solution team. It’s quite common that for feature ideas of different sizes, different people will make up the solution team. That is, when your feature idea is concerned with some detail of a larger system, the team might just be the development team (in the Scrum sense). For bigger features, projects or programs, it’s quite common that lead developers, design and usability specialists, solution and/or enterprise architects and maybe operational staff will play an important part in working out the solution. Whether these people work in specialist teams (as e.g. SAFe assumes) or are part of the development teams is not of much importance, but having members of those teams work on the solution design is quite essential: if this is not the case, then in the best case the final developers will have to rediscover and understand all the good reasons why you came up with the solution in the first place in order to get it right. In the worst case, you might end up with frustrated developers who don’t take responsibility and don’t care about the solution being built, because they feel they have no say in its design.

