I’ve been a Scrum master for quite a while now (and yet, I still feel like a beginner sometimes), and because of that I do see and understand the clear value of the Burndown chart. Not everyone sees it like that. In fact, many of my team members will rightfully say that it is a misleading metric. I agree and disagree at the same time. What if we could provide more context to it?
The burndown chart is, as we all know, a timeline of all the points committed for completion in a Sprint. By watching its evolution, we can more or less predict whether the Sprint is going well or whether there are risks ahead. Some teams track it in time, others in points, or in other units. In this case, I'll focus on points. Here's an example of a burndown chart from one of my team's sprints a few months back:
Well, it is a good example of a sprint that did not go well in the end. The scope changed almost right away, things did not progress much, and some points were burnt near the end, but not all of them. This should already have been a warning sign as the Sprint progressed, but we could have added much more context had we known the status of the stories at each stage.
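For reference, the line in a burndown chart comes from a very simple calculation: start with the committed points and subtract what gets completed each day. A minimal sketch in Python, with entirely made-up dates and point values, could look like this:

```python
from datetime import date

# Made-up sprint data: total committed points and the points completed each day.
committed_points = 40
completed_per_day = {
    date(2017, 8, 21): 0,
    date(2017, 8, 22): 3,
    date(2017, 8, 23): 0,
    date(2017, 8, 24): 5,
    date(2017, 8, 25): 2,
}

# The burndown line is simply the remaining total after each day.
remaining = committed_points
for day in sorted(completed_per_day):
    remaining -= completed_per_day[day]
    print(f"{day}: {remaining} points remaining")
```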
That missing context is why I add a second dimension to my burndown charts, by creating an issue status chart within the original burndown chart. The difference is that I plot it as a percentage rather than in story points. This way, I understand better where we are when it comes to stages of development. This is the same burndown chart based on the issue status:
Obviously, we do not see the scope change that was visible in the previous burndown chart, but the percentages are still affected by it. Here, you can clearly see that the green area at the top pretty much matches the burndown chart. The interesting bit is what happens below it: what percentage of stories are still "open", that is, not started? How many are in progress? And in QA?
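Computing those per-status percentages is not complicated either. A minimal sketch in Python, assuming made-up daily snapshots and the statuses Open, In Progress, QA and Done, might look like this:

```python
from collections import Counter

# Made-up daily snapshots: for each day, the status of every story in the sprint.
snapshots = {
    "2017-08-28": ["Open", "Open", "In Progress", "In Progress", "QA", "Done"],
    "2017-08-30": ["Open", "In Progress", "QA", "QA", "QA", "Done"],
}

for day, statuses in snapshots.items():
    counts = Counter(statuses)
    total = len(statuses)
    breakdown = ", ".join(
        f"{status}: {100 * counts[status] / total:.0f}%"
        for status in ("Open", "In Progress", "QA", "Done")
    )
    print(f"{day} -> {breakdown}")
```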
I can read the following things just by looking at this issue status chart:
- By the 30th of August, not many points were completed.
- By the same date, we clearly had a bottleneck in QA.
- By the same date, although not many points had been completed, many things were in progress. Maybe we should limit WIP and get the QA stories done?
- By mid-sprint, quite a lot of stories had still not even started. Probably a consequence of the above.
The added value of this dimension comes when you track it in real time, not after the sprint has finished. But once it has finished, we can still read a lot just by looking at it. My point is: a burndown chart has a lot of added value per se, but it is as important, if not more so, to identify the status of the WIP during a sprint, so we can see where the potential bottlenecks are. As a Scrum master, it is my duty to question why there are so many elements in a specific status and whether they can be moved forward. Limit the WIP. It isn't always possible, but this view helps with the task.
I like to represent this in the daily status, and also on the timeline. This way we know where we are in terms of completion percentage, and it helps the team focus on the right places as well.
With the right tools, getting these metrics is easy, and the added value for the stand-up, the predictability of the sprint and the reports is just phenomenal. I intend to expand these metrics in the near future by including the percentage of blockages, the right statuses and more. But it is also important not to over-complicate it: the chart, the colours and the metrics must give you information at a single glance.
I had been expanding the information of the Burndown chart in this fashion for a few months, but recently, at Agile Cambridge 2017, I attended a fantastic talk by Cat Swetel called "The metrics you should use but you don't", where I was introduced to concepts I had never heard of, or probably didn't understand too well, such as:
- Customer satisfaction as the first metric we should use, in general.
- Promise of delivery percentage based on the maximum time in progress as an average.
- Trying to understand predictability by analysing the context of our environment first.
- Little’s law, where average time spent in a system = average number of items in the system / average throughput (see the small worked example after this list).
- Etc.
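As a quick illustration of Little's law (the numbers below are made up): if a team has 8 stories in progress on average and completes 2 stories per day on average, a story spends about 4 days in the system.

```python
# Little's law: average time in the system = average number of items in the system
# divided by average throughput. The numbers are made up for illustration.
average_wip = 8           # stories in progress, on average
average_throughput = 2.0  # stories completed per day, on average

average_time_in_system = average_wip / average_throughput
print(f"A story spends about {average_time_in_system:.1f} days in the system")  # 4.0 days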
The bottom line is: metrics do matter. We cannot aim for predictability if we cannot measure the size of the work we do. As a Scrum master, this is one of my passions, but I am sometimes alone in this quest, because not everyone understands it, or because each person has their own view on how things should be measured.
What do you do to get your metrics right?