Six Dimensions for Decomposing Stories
Many teams struggle to break stories down to a size that can be completed within a sprint. Perhaps the product needs seem too large, or the technical debt is too deep. There could be a high degree of uncertainty around what the product should be. Or, the team might be new to the domain. Regardless of the reason, here are a few ways we tackle this in the dojo - and given that we generally run two-and-a-half-day sprints in the dojo, it’s critical that teams develop this skill.
Here are six dimensions to think about when decomposing stories.
Product Context

Many teams lack a big-picture understanding of the products they build. Instead of having deep knowledge of the problem they are trying to solve, they are equipped only with a backlog of things to do. In these situations, it’s difficult to find meaningful decompositions of the work. Take a simple example - a need to filter a search result. Without context for how customers actually use the product, teams resort to building filtering capabilities for every element in the search result. Even if they have ideas about building filtering incrementally, it’s hard for them to decide what to build first without product context.
In the dojo, we focus on building product context for teams - not just around what they are building, but also why they are building it and who they are building it for. We use product chartering, personas, and story maps to ensure everyone has a shared understanding of the problem space. Teams are able to break down stories more easily when they have product context. Back to the search example – when teams have product context they’re able to say “this filter is important in the context of the search we’re implementing for this persona right now.”
Paths of Increasing Complexity

Products can be complicated to build. Some products require integrations with large datasets or with other systems that have less-than-ideal APIs or interfaces. Or, an innovative product idea that could make a user's life better may be very complex to implement.
Development efforts like this naturally take time, and estimating them is error-prone because there are so many unknowns. Knowledge is created quickly in these spaces as a result of all the learning that occurs during development, and it’s difficult to estimate the effort and time that learning requires.
When working with teams in the dojo on these types of problems we look for paths of increasing complexity. We start with simple, obvious cases. Sometimes teams will push back and insist we’re starting too simply. (We often hear a similar objection when teaching test-driven development and we’re giving guidance on creating the first test.) Teams often over-estimate their ability to handle complexity.
For that great search feature, we will start with a direct match before we work towards more complex criteria. Later, after we have learned more about the tech and the product, the team might graduate to working on fuzzy logic.
Teams learn as they go and move gradually into more and more complex scenarios.
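To make this concrete, here is a minimal sketch of what a path of increasing complexity might look like for the search example. The data and function names are hypothetical, and Python's difflib stands in for real fuzzy-search logic: the first increment is a direct, case-insensitive match; the fuzzy increment only gets built once the team has learned more.

```python
from difflib import get_close_matches

PRODUCTS = ["running shoes", "trail shoes", "rain jacket"]  # hypothetical catalog

def search_exact(query, items=PRODUCTS):
    """First increment: the simple, obvious case - a direct match."""
    return [item for item in items if item.lower() == query.lower()]

def search_fuzzy(query, items=PRODUCTS, cutoff=0.6):
    """A later increment, after the team has learned more about the
    product and the tech: tolerate near-miss queries."""
    return get_close_matches(query.lower(), [i.lower() for i in items],
                             cutoff=cutoff)
```

The first increment ships quickly and generates real feedback; the second is built only if that feedback says it is worth the investment.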
Examples

Using examples is our favorite technique and quite possibly the simplest. It’s sort of “Team Test-Driven Product Development”. Teams could even call it building against test cases.
When a team encounters a large story in the dojo we have them talk about the story before trying to break it down. We ask them to come up with examples of what the product would look like if the story was completed - both positive (working) and negative (failure or error) examples. From there, the team chooses an example that is interesting to them. We encourage teams to choose an example that will help them learn something by asking open-ended questions such as - Will this example help us learn about integrations or some aspect of the technology stack? Will it help us validate a product idea? Will it help us improve the way we work (e.g., will it force us to implement new automation techniques in the testing space)?
Using examples is a nice, easy way to frame discussions that helps teams make progress on decomposing stories.
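As an illustration, the positive and negative examples a team comes up with can be written down as simple executable checks. The filter function and data here are hypothetical sketches, not a prescribed implementation:

```python
def filter_by_category(results, category):
    """Hypothetical filter under discussion - keep only matching results."""
    return [r for r in results if r.get("category") == category]

RESULTS = [
    {"name": "trail shoes", "category": "footwear"},
    {"name": "rain jacket", "category": "outerwear"},
]

# Positive (working) example: a known category returns only matching items.
assert filter_by_category(RESULTS, "footwear") == [RESULTS[0]]

# Negative (failure) example: an unknown category returns an empty
# list rather than raising an error.
assert filter_by_category(RESULTS, "hats") == []
```

Each example the team picks becomes a small, demonstrable slice of the larger story.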
Dependencies and Isolation

Many teams in the dojo are building products that depend on outside influences - other teams, external APIs, or other products. These external dependencies often lead teams to believe that one or more of their stories cannot be decomposed and, even worse, that delivering their product is outside of their control.
When this happens, we talk about expected behaviors and contracts for tests. What do we expect the dependencies to return in each of the examples? How should the product behave? We help the team learn how to isolate their product in a way that enables testing and allows the team to make progress delivering. For legacy code, or systems where it’s difficult to drop in stubs or mock data, we have teams use tools like WireMock to provide some of this isolation and control.
But it all starts with that conversation around expectations and behaviors.
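WireMock provides this isolation at the HTTP level. As a lightweight sketch of the same idea - with a hypothetical inventory service as the dependency - a stub built with Python's unittest.mock lets the team state the expected contract and keep delivering before the real integration exists:

```python
from unittest.mock import Mock

def in_stock(inventory_client, sku):
    """Product behavior under test. The contract we expect from the
    dependency: fetch(sku) returns a dict with a "quantity" field."""
    return inventory_client.fetch(sku)["quantity"] > 0

# Stand-in for the real (perhaps unfinished) inventory service.
stub = Mock()
stub.fetch.return_value = {"quantity": 3}
assert in_stock(stub, "SKU-123")

empty_stub = Mock()
empty_stub.fetch.return_value = {"quantity": 0}
assert not in_stock(empty_stub, "SKU-123")
```

The expectations captured in the stub later become the contract the real integration is verified against.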
Be careful with this approach. Teams can get “isolation happy” and end up deferring product integrations indefinitely - which can easily lead to an over-accumulation of work in progress.
Spikes Framed as Questions

Teams sometimes find themselves in uncharted waters. This can take the form of new technology, unfamiliar legacy code bases, or learning something new like cloud technologies or microservices.
In these situations, teams sometimes fall into the trap of over-researching. When trying to deploy to the cloud for the first time, teams may read documentation and blogs, watch videos, and go through training materials for weeks to understand all of the options with the hope of “doing it right the first time.”
“Spiking” on stories is a common technique teams have been using for years. Again - there are traps and pitfalls. The worst is a spike with no clear end-game, even when it’s time-boxed. For example, when would a spike around “Learn how to deploy to the cloud” be finished?
In the dojo, we frame spikes contextually around answering a question. Instead of creating a spike to “learn cloud deployment” we might ask “can we move an already built jar file from Artifactory to an AWS EC2 instance?” By phrasing it as a specific question, the spike has focus and context and becomes a nice starting place for sharing learnings and demoing results.
Investment Thinking

There are times when teams try all of the above and still struggle to decompose a story to a size they can complete within a sprint. They have context, they have broken the work down across a test boundary with good isolation, and so on. But the work is still “too big”.
Another approach we’ll use is to have teams talk about the investment required to complete that part of the product. We start with simple questions - How much would be too much to invest in this? How do we know? Similarly, after we have finished a story, we often ask the teams if they’ve learned anything that causes them to question the value in continuing the work. Should we keep investing in this line of product development? Should we pivot? If the overall product direction is still valid - Have we learned anything that might help us think of a different route for delivering the product?
These approaches have worked for us - what has worked for you?