# Measuring Impact In The Dojo

Last month at Agile Day Chicago, I (Joel) had the pleasure of listening to Mark Graban speak about separating signal from noise in our measurements. Mark referenced Process Behavior Charts, a technique described in the book Understanding Variation: The Key to Managing Chaos by Donald J. Wheeler. This simple tool helps us look at metrics over time and distinguish naturally occurring variation (noise) from signals: variation in the metrics that represents real change. Wheeler calls both of these “The Voice of the Process,” the key being the ability to distinguish between the two. Signals can be indicators that a desired change is manifesting, or they can be indicators that something is wrong and requires further investigation.

We immediately saw the value in being able to separate signal from noise when evaluating the types of metrics we’re capturing in the Dojo that we talked about in our last post. We both grabbed copies of the book, devoured it quickly, and started brainstorming on applications for Process Behavior Charts.

Let's look at an example of how to use Process Behavior Charts in the Dojo.

# BEFORE YOU START

This may sound obvious, but before you start any measurement think about the questions you want to answer or the decisions you want to make with the data you’ll collect.

In the Dojo, we help teams shift from a project to product mindset. We focus on delivering specific outcomes, not simply more features. When delivering a new feature, the obvious question is: did the feature have the desired outcome?

# THE SCENARIO

Imagine yourself in this type of situation…

We’re working with a team and we’re helping them move from a project model to a product model. In the past, the team cranked out features based on stakeholders’ wishes and success was simply judged on whether the features were delivered or not. We’re helping the team shift to judging success on whether outcomes are achieved or not.

We’re also working with the stakeholders and there’s resistance to moving to a product model because there’s fear around empowering the teams to make product decisions. New features are already queued up for delivery. Before we give the team more ownership of the product, the stakeholders want delivery of some of the features in the queue.

We can use this as a coaching opportunity.

The stakeholders believe the next feature in the queue will lead to more sales - more conversions of customers. The team delivers the feature. Now we need to see if we achieved the desired outcome.

Our first step is to establish a baseline using historical data. Luckily, we’re already capturing conversion rates and for the 10 days prior to the introduction of the new feature the numbers look like this:

Then we look at the data for the next 10 days. On Day 11, we have 14 conversions. Success, right? But on Day 12, we have 4 conversions. Certain failure?

Here’s the full set of data for the next 10 days:

Overall, it looks better, right? The average number of conversions has increased from 6.1 to 7.9. The stakeholders who pushed for the new feature shout “success!”

## PROCESS BEHAVIOR CHARTS

Given a system that is reasonably stable, a Process Behavior Chart shows you what values the system will produce **without interference**. In our case, that means what values we can expect without introducing the new feature. Let's create a process behavior chart for our example and see if our new feature made a difference.

### First Step - Chart Your Data In A Time Series and Mark the Average

What does this show us? Well, not much. Roughly half of our points are below average and half are above average (some might call that the definition of average).

### Second Step - Calculate the Moving Range Average

Our next step is to calculate the average change from day to day. Our day-to-day changes are 2, 4, 4, 2, 6, 3, 2, 5, 3, for an average change of about 3.4. All this means is that, on average, the number of conversions changes by about 3 from one day to the next. If we were to plot these day-to-day changes, roughly half would fall above the average and half below - again, the definition of average.
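This step is simple enough to sketch in a few lines of Python. The nine day-to-day changes below are the ones quoted above; the `moving_ranges` helper shows how you would derive them from any series of daily counts:

```python
def moving_ranges(series):
    """Absolute day-to-day changes between consecutive values."""
    return [abs(b - a) for a, b in zip(series, series[1:])]

# The nine day-to-day changes from the 10 baseline days in the example.
mr = [2, 4, 4, 2, 6, 3, 2, 5, 3]

# Average moving range: how much conversions typically swing day to day.
mr_avg = sum(mr) / len(mr)
print(round(mr_avg, 1))  # 3.4
```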

### Third Step - Calculate The Upper And Lower Bounds

To calculate the upper and lower bounds, you take the moving range average and multiply it by 2.66. Why 2.66? Great question - and it is well covered in Don Wheeler's book. In brief, you could calculate the standard deviation and use 3 sigma, but 2.66 is faster, easier to remember, and ultimately tells the same story.

We take our moving range average of 3.4 and multiply it by 2.66, giving us 9.044. What does this number mean? It means that with normal variance (the Voice of the Process), we can expect conversions to fluctuate up to 9.044 above or below our average number of conversions, which was 6.1.

To put it more clearly, without any intervention or new features added, we should expect between 0 and 15 conversions per day - and that would be **completely normal**.
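Putting that arithmetic together (a minimal sketch, using the baseline average of 6.1 and the moving range average of 3.4 from the example above):

```python
baseline_avg = 6.1   # average daily conversions over the first 10 days
mr_avg = 3.4         # average moving range (day-to-day change)

# Natural process limits: average +/- 2.66 x average moving range.
spread = 2.66 * mr_avg                   # 9.044
upper = baseline_avg + spread            # about 15.1
lower = max(0.0, baseline_avg - spread)  # conversions can't go negative, so 0

print(f"Expect roughly {lower:.0f} to {upper:.0f} conversions per day")
```

Anything that lands inside these limits is just the Voice of the Process; only points outside them count as signals.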

Let's visualize this data. We add our upper and lower bounds to our chart for our first 10 days. It now looks like this:

### Fourth Step - Introduce Change & Continue To Measure

We have established the upper and lower bounds of what we can expect to happen. We know that after the feature was introduced, our conversion numbers looked better. Remember, the average went up almost 30% (from 6.1 to 7.9) - so that is success, right?

We extend our chart and look to see if the change actually made a difference.

Our average for the next 10 days was higher, but looking at what we could normally expect the system to produce, all of the conversions were within the expected range. In essence, the feature we delivered did not have a meaningful impact on our conversions.
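The simplest detection rule behind that conclusion is: a point is a signal only if it falls outside the process limits. A sketch, checking the two days the example calls out (Day 11 with 14 conversions, Day 12 with 4):

```python
def is_signal(value, lower, upper):
    """Simplest detection rule: a point outside the process limits."""
    return value < lower or value > upper

# Limits from the baseline: 6.1 +/- 2.66 * 3.4, floored at zero.
lower, upper = 0.0, 15.1

print(is_signal(14, lower, upper))  # Day 11: False - within normal variation
print(is_signal(4, lower, upper))   # Day 12: False - also normal variation
```

Wheeler's book covers additional rules (e.g., runs of points on one side of the average) that detect subtler signals than this single-point check.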

Note, we’re not saying that *nothing* could be learned from delivering the new feature. The point we’re making is that prior to delivering the feature we assumed it would lead to an increase in conversions. Using a Process Behavior Chart we were able to show our assumption was invalid.

Now we can continue the conversation with the stakeholders around empowering the team to improve the product. Maybe now they'll be more open to listening to what the team thinks will lead to an increase in conversions.

# MORE USES FOR PROCESS BEHAVIOR CHARTS

We like using this visual display of data to help us concretely answer questions focused on whether or not our actions are leading to the intended outcomes. For example, we are experimenting with Process Behavior Charts to measure the impact of teaching new engineering and DevOps practices in the Dojo.

# REMEMBER - MEASURE IMPACTS TO THE WHOLE VALUE STREAM

Process Behavior Charts can be powerful, but they require that you ask the right questions, collect the right data, AND take the right perspective. Using a Process Behavior Chart to prove a change is beneficial to one part of the value stream (e.g., the “Dev” group) while not taking into consideration the impact to another group (e.g., the “Ops” group) would be missing the point. Consider the complete value stream when you are looking at these charts.

# FURTHER READING

For more information on these charts, as well as the math behind them and what other trends in data are significant, we recommend the following:

Understanding Variation: The Key to Managing Chaos by Donald J. Wheeler

Lean Blog - Mark Graban, in particular this post on home runs in the World Series

Process Behavior Charts (also called Shewhart Charts) – this article talks about various patterns that are statistically significant