There is a lot of discussion in the Agile community about estimating defects: whether you need to do it at all, and if so, how. I think that estimating and assigning story points to defects is an anti-pattern and a dangerous practice, for three simple reasons:
Estimating the story points for a user story is a tricky and annoying task, and one that I believe adds little value. (If you’re not sure what story point estimation is, I wrote an article explaining it.) It is even more difficult to do for a defect, because, by definition, the solution space of a defect is unknown. Remember, a user story is a problem definition, not a solution definition. It describes a behaviour that does not currently exist. The scrum team decides on an implementation, or solution, that will bring about that behaviour. If they do so, and it meets the acceptance criteria, then the story is complete and the story points are earned.
The story points represent progress towards the value that the product owner specifies. If at some point after QA has taken place, the user story stops meeting those acceptance criteria, a defect can be raised. That defect represents work (not value, work) that must be done to restore the behaviour of the user story to what was defined. If the product owner wants some additional behaviour for the user story that was never defined, that’s a new user story, not a defect. The distinction is important.
It is unlikely that the developers will know immediately what work must be done, i.e. the solution definition, to fix the defect. If they knew, they would presumably have implemented that solution in the first place. The developers will probably have to invest a fair bit of time investigating and debugging to figure out what is causing the defect, and therefore what must be done to fix it. In my experience, that investigation and debugging takes up half or more of the total time spent fixing a defect. Estimating the amount of work before that investigation takes place is therefore a waste of time, since the scope of the work is a complete unknown.
The number of story points on a user story represents complexity or difficulty, but the earning of story points is tied to delivering value. That’s when you earn the points. The team’s progress in delivering stories provides a guide as to when the overall value of the release(s) will be achieved. For example, a product owner defines a set of stories and features that add up to (when estimated) 100 points. If a team has delivered 25 points in a sprint, they have delivered one-quarter of the scope specified by the product owner, and odds are they will complete the work after three more sprints, assuming their velocity stays constant.
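The forecasting arithmetic above can be sketched in a few lines. This is a minimal illustration using the hypothetical numbers from the example (a 100-point backlog and a constant velocity of 25 points per sprint), not a real forecasting tool:

```python
# Hypothetical figures from the example: a 100-point backlog and a team
# that delivered 25 points in its first sprint.
total_scope = 100   # points the product owner has defined
delivered = 25      # points earned so far
velocity = 25       # points per sprint, assumed constant

remaining = total_scope - delivered   # 75 points still to deliver
sprints_left = remaining / velocity   # 3.0 more sprints, at constant velocity
progress = delivered / total_scope    # 0.25, i.e. one-quarter of the scope

print(f"{remaining} points remaining, ~{sprints_left:g} sprints to go "
      f"({progress:.0%} delivered)")
```

The forecast is only as good as the assumption that velocity stays constant, which is exactly why polluting velocity with defect points matters.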
What if, after delivering those stories, they unearth defects? Let’s assume all of those stories now have breaking defects, i.e. they no longer deliver any value. If the team estimates 25 points of work to fix those defects, spends the second sprint doing that work, and fixes them all, should they earn the points? If they do, the data suggests that they have now delivered 50 points and are therefore halfway towards their goal, but they are not at all; they are exactly back where they were at the end of the first sprint. There is effectively a mismatch between their burnup chart (which measures points delivered) and their burndown chart (which measures points remaining).
To make it more extreme, imagine that the stories then broke again and required another 25 points of defect fixes. If you count those points, the team has now achieved 75 points. But the team has only delivered one-quarter of the value defined in the backlog. This is an unrealistic example, but it shows the issue: points are an indication of value, i.e. stories delivered, not work or effort.
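The mismatch between the two charts can be made concrete with a short sketch, using the hypothetical numbers from the scenario above (one sprint of real delivery, then two sprints of defect fixes that are wrongly awarded points):

```python
# Hypothetical scenario from the text: a 100-point backlog, one sprint of
# genuine story delivery, then two sprints spent fixing breaking defects.
total_scope = 100
story_points_delivered = 25        # sprint 1: real value delivered
defect_fix_points = [25, 25]       # sprints 2-3: defect-fix "points"

# If defect fixes earn points, the burnup chart keeps climbing...
reported_burnup = story_points_delivered + sum(defect_fix_points)

# ...but no new stories were completed, so the backlog has not shrunk.
actual_remaining = total_scope - story_points_delivered

print(f"burnup says {reported_burnup} points earned, "
      f"burndown says {actual_remaining} points remaining")
```

The team appears to have "achieved" 75 points while three-quarters of the scope is still undelivered: the burnup and burndown charts no longer tell the same story.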
Simple: track the defect work however you like. Maybe use the same tool you use to track your user story work; it could be a digital tool or a paper VMB (visual management board). Just don’t assign points to the defects. You might be thinking, “but shouldn’t we reward the team for all the work they do fixing defects?” If you’re thinking that, you’re thinking about story points in the wrong way.
One of the most common responses I see to these arguments is “but the team should be rewarded with points for the work they do fixing defects!”. This shows a misunderstanding of what story points are and why we use them. Story points are not gold stars. They are not a “reward”. They are not a pat on the back, and they are not a measure of “skill” or “efficiency”. They are used to track and predict progress. Nothing more, nothing less. The progress you are tracking and predicting is progress towards completing and delivering a discrete set of user stories. If you spend all your time fixing defects, you are not making much progress towards that.
If the developers fixing defects are getting upset because they are not being “rewarded” with story points, you have some big problems. The reward is the completion and delivery of software, not the “awarding” of “points”.
You don’t earn points for work; you earn points for value. Stakeholders want to know when the team has delivered value (potentially) to customers. They don’t care how much work the team does to get there. The fact that the team is working on defects (i.e. restoring value that was lost) instead of stories (creating value that was not there) will be shown by a fall in velocity. When stakeholders ask why velocity has fallen, tell them it’s because you’re fixing defects.
We value transparency in Agile software development, remember. You don’t want to hide defects from stakeholders; you want to make them visible. Clearly, the way to keep your velocity up is to reduce your defect count, ideally to zero or close to it. This can only really be done with extensive test automation, combined with the principles of continuous integration and continuous delivery.