There is a lot of discussion in the Agile community about estimating defects: many people seem to think you need to do it, and many more argue about how. This article will show you why you shouldn't. Estimating and assigning story points to defects is an anti-pattern, for three simple reasons:
Estimating the story points for a user story is a difficult and annoying task, and one that I believe adds little value. (If you're not sure what story point estimation is, I wrote an article explaining what story point estimation is.) It is even more difficult for a defect because, by definition, the solution space of a defect is unknown.
Remember, a user story is a problem definition, not a solution definition. It describes a behaviour that does not currently exist.
The scrum team decides on an implementation or solution that will bring about that behaviour. If they do so, and it meets the acceptance criteria, then the story is complete and the story points are earned.
The story points represent progress towards the value that the product owner specifies. If at some point after QA has taken place, the user story stops meeting those acceptance criteria, a defect can be raised.
That defect represents work (not value, work) that must be done to restore the behaviour of the user story. If the product owner wants some more behaviour for the user story that was never defined, that’s a new user story, not a defect. The distinction is important.
It is unlikely that the developers will know immediately what work must be done, i.e. the solution to fix the defect. If they knew, then they would have implemented that solution in the first place!
The developers will probably have to invest a fair bit of time investigating and debugging to figure out what is causing the defect, and therefore what must be done to fix it.
In my experience, that investigation and debugging takes up half or more of the total time spent fixing the defect. Estimating the amount of work before that investigation takes place is, therefore, a waste of time, since that amount of work is completely unknown.
You could ask developers to just make an uninformed guess, but that would not be a valuable estimate. In fact, it would have negative value, since it would suggest to some people that there is some confidence about how much work is involved, when in reality there is very little confidence, if any at all.
The number of story points on a user story represents complexity or difficulty, but the earning of story points is tied to delivering value: that's when you earn the points. The team's progress in delivering stories provides a guide as to when the overall value of the release(s) will be achieved.
For example, a product owner defines a set of stories and features that add up to (when estimated) 100 points. If a team has delivered 25 points in a sprint, they have delivered one-quarter of the scope specified by the product owner, and odds are they will complete the work after three more sprints, assuming their velocity stays constant.
What if after delivering those stories, they unearth defects? Let's assume all of those stories now have breaking defects, i.e. they no longer deliver any value. If the team estimates 25 points of work to fix those defects, and they spend the second sprint doing that work and fixing all of them, should they earn the points?
If they do, the data suggests that they have now delivered 50 points and are therefore halfway towards their goal, but they are not at all. They are exactly back where they were at the end of the first sprint.
There is effectively a mismatch between their burnup chart (which measures points delivered) and their burndown chart (which measures points remaining).
To make it more extreme, imagine that the stories then broke again and required another 25 points of defect fixes. If you count those points, the team has now achieved 75 points. But the team has only delivered one-quarter of the value defined in the backlog.
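The mismatch can be made concrete with a quick sketch, using the hypothetical numbers from the example above: the burnup climbs every time defect points are awarded, while the value actually delivered stands still.

```python
TOTAL_SCOPE = 100  # points of stories defined by the product owner

# Per sprint: (new story points completed, defect-fix points awarded).
# Sprint 1 delivers 25 points of stories; sprints 2 and 3 only fix
# defects in those same stories.
sprints = [(25, 0), (0, 25), (0, 25)]

burnup = 0         # what the burnup chart shows if defect fixes earn points
working_value = 0  # story points that actually deliver value

for n, (stories, fixes) in enumerate(sprints, start=1):
    burnup += stories + fixes   # defect points inflate the burnup...
    working_value += stories    # ...but fixes only restore value; they add none
    print(f"Sprint {n}: burnup {burnup}/{TOTAL_SCOPE}, "
          f"value delivered {working_value}/{TOTAL_SCOPE}")
```

After three sprints the burnup claims 75 of 100 points, yet only 25 points of value exist.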
This is an unrealistic example, but it shows the issue: points are an indication of value, i.e. stories delivered, not work or effort.
How should you track defect work, then? Simple: track it however you like. Maybe use the same tool you use to track your user story work. It could be a digital tool or a paper VMB. Just don't assign points to the defects.
One of the most common responses I see to these arguments is: "but shouldn't the team be rewarded with points for all the work they do fixing defects?". If you're thinking that, you're misunderstanding what story points are and why we use them.
Story points are not gold stars. They are not a "reward", a pat on the back, or a measure of "skill" or "efficiency". They are used to track and predict progress: nothing more, nothing less. The progress you are tracking and predicting is progress towards completing and delivering a discrete set of user stories. If you spend all your time fixing defects, you are not making much progress towards that.
If the developers fixing defects are getting upset because they are not being “rewarded” with story points, you have some big problems. The reward is the completion and delivery of software, not the “awarding” of “points”.
You don't earn points for work; you earn points for value. Stakeholders want to know when the team has delivered value (potentially) to customers. They don't care how much work the team does to get it.
The fact that the team works on defects (i.e. restoring value that was lost), instead of stories (creating value that was not there) will be shown by a fall in velocity. When stakeholders ask why velocity has fallen, tell them it’s because you’re fixing defects.
We value transparency in Agile software development, remember. You don't want to hide defects from stakeholders; you want to make them visible. Clearly, the way to keep your velocity up is to reduce your defect count, ideally to zero or close to it. This can only really be done with lots of test automation, combined with the principles of continuous integration and continuous delivery.
I'll go through some of the objections raised against this view, and my replies to them.
"But you can't plan your sprint capacity without estimating defects!"

Yes you can. Just use previous sprints as a baseline. If last sprint you did 30 points of stories and fixed 2 bugs, then that's your baseline. If you have another 30 points of stories this sprint, then you probably have the capacity to fix a couple of bugs. Or if you have 25 points of stories, you might plan to do three or four bugs. They might turn out to be harder than you thought, but the stories might too. Or they might be easier. This is not an exact science, and anyone who says it is, is lying.
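A minimal sketch of this baseline approach follows; the numbers and the points-per-defect conversion rate are hypothetical and entirely team-specific.

```python
POINTS_PER_DEFECT = 3  # hypothetical: rough story-point-equivalent of one defect fix

def defect_capacity(baseline_points, baseline_defects, planned_points):
    """Plan defect capacity from last sprint's baseline; no defect estimates needed."""
    spare_points = max(0, baseline_points - planned_points)
    return baseline_defects + spare_points // POINTS_PER_DEFECT

# Same story load as last sprint: plan for the same couple of defects.
print(defect_capacity(30, 2, 30))  # -> 2
# A lighter story load frees capacity for an extra defect.
print(defect_capacity(30, 2, 25))  # -> 3
```

As noted above, this is not an exact science; the point is that a rough baseline gives you a plan without putting points on any individual defect.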
"But what about all the defects sitting in our backlog?"

For starters, you shouldn't have lots of defects in the backlog. If you do, that's a sign of a much deeper and more serious problem than estimation. You should be doing automated testing and have very few defects. And the few that you do have, you should fix in the sprint in which they are raised. Carrying them from sprint to sprint is a terrible practice that leads to poor quality and lots of technical debt.
And anyway, you can and should have unestimated work in the backlog. You shouldn't estimate stories and epics that are far down the backlog, because they are lower priority, further away in time and less understood. They might never be built at all, so estimating them is a waste of time. I like to only estimate the next one or two sprints' worth of work. For anything further out, I just use story counting / throughput to estimate how much work remains.
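Story counting is simple enough to sketch in a few lines (hypothetical numbers): average how many items the team finishes per sprint, then divide the backlog count by that throughput.

```python
import math

completed_per_sprint = [6, 8, 7]  # stories finished in recent sprints (hypothetical)
backlog_count = 42                # unestimated items further down the backlog

# Average throughput in items per sprint; no points involved.
throughput = sum(completed_per_sprint) / len(completed_per_sprint)  # 7.0
sprints_remaining = math.ceil(backlog_count / throughput)

print(f"~{sprints_remaining} sprints to clear the backlog")  # -> ~6 sprints
```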
"But you could award the points for defects and just subtract them when calculating the burnup!"

This is a strange idea that someone once suggested to me. If you're going to pretend the points aren't there, why estimate the defects and put points on them in the first place? "So we can plan our capacity!" is probably the reply. But see above: you can do that without points, by just using a count (which you can also do for stories).
“But estimating helps us have conversations about solving problems!”
This is a common argument raised against the No Estimates movement: that estimating defects prompts lots of valuable conversations. Of course you should have conversations, but you don't need to be estimating to have them! Just sit down and talk about the defects, if you want to. There is no need to then spend ten minutes arguing over whether something is a 2 or a 3; that is almost always NOT a valuable conversation.