Velocity: the Killer of Agile Teams

Tomas Kejzlar
Published in Skeptical Agile
5 min read · Feb 26, 2016

Last week, I wrote about the value of estimates and estimation. The post provoked some good comments and reactions, several of which focused on velocity as an agile metric and on how it should (or could) be used. So here is a loosely coupled sequel to my original article.

What is velocity?

Velocity, in the agile world, is a metric that gives the approximate “speed” of a team. After a couple of sprints in which you have a stable team working on a single product, if you average how many story points the team delivered per sprint, you get the team’s average velocity. This, of course, assumes that you estimate your stories on a numeric scale.
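
To make the arithmetic concrete, here is a minimal sketch in Python (the numbers and variable names are made up purely for illustration): averaging the story points completed in past sprints gives the velocity figure described above.

    # Minimal sketch: average velocity from story points delivered in past sprints.
    # The numbers are purely illustrative, not taken from any real team.
    completed_points_per_sprint = [21, 18, 24, 20, 22]

    average_velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)
    print(f"Average velocity: {average_velocity:.1f} story points per sprint")
    # -> Average velocity: 21.0 story points per sprint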

There are several conditions you should meet for velocity to have at least some meaning:

  • the team needs to be stable (= the same people, preferably all allocated full-time),
  • the product needs to stay the same and be stable (= no radical changes in technology, no working on multiple products at a time),
  • the stories must be estimated and the estimates must be comparable (= you should have the same reference point),
  • there must be no major production problems, organizational changes or anything else that impacts the team’s actual capacity to deliver the product.

You can compensate for some external factors while calculating the velocity, but the more assumptions you make, the less useful the resulting number is.

Velocity is internal — a “team-only” metric

Velocity is often used as a tool to check up on teams: it is used for everything from the product owner promising delivery of long-term features based on the velocity to presenting it to stakeholders and customers during sprint reviews.

This is all wrong and potentially harmful. Velocity should be an internal metric of the team, used however the team wants to use it:

  • as a hint during sprint planning (although I’d prefer discussing all of the stories the team puts into the sprint backlog and committing to the stories themselves, not to a calculated velocity),
  • as a data source during the sprint retrospective: large swings in velocity, or a steadily declining velocity, may point to a problem the team wants to talk about (a rough sketch of such a check follows below).
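
For the retrospective use mentioned above, here is a rough Python sketch of what looking at the data could mean. The numbers, the threshold and the helper name are hypothetical, chosen only to illustrate the idea of flagging high variance or a downward trend:

    from statistics import mean, pstdev

    def velocity_retro_notes(velocities, spread_threshold=0.25):
        """Hypothetical helper: flag large variance or a declining trend
        in per-sprint velocities as input for a retrospective discussion."""
        avg = mean(velocities)
        relative_spread = pstdev(velocities) / avg
        half = len(velocities) // 2
        earlier, recent = velocities[:half], velocities[half:]
        notes = []
        if relative_spread > spread_threshold:
            notes.append(f"high variance: about ±{relative_spread:.0%} around the average of {avg:.1f}")
        if mean(recent) < mean(earlier):
            notes.append("velocity is trending down, worth discussing why")
        return notes

    # Example: a team whose velocity has been sliding for a few sprints.
    print(velocity_retro_notes([24, 21, 15, 12, 10]))

The point is not the exact threshold; it is to turn the raw numbers into a question the team can discuss.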

Exposing velocity outside of the team, let alone measuring a team’s performance using velocity, is extremely dangerous. I have two good reasons for that, both of which you might encounter when you try to use velocity as a measure of a team’s performance:

  • The Hawthorne effect: as has been demonstrated, once you set up a metric and people know they will be evaluated against it, they will do whatever is needed to improve against it or hit the target, however damaging that may be to other areas of their work. In the case of velocity, this may lead to teams gradually inflating their estimates so that the velocity grows. Or it may mean compromising quality, focusing only on new features and not taking care of bugs. If you have multiple teams, it may also lead to competition and a loss of collaboration between teams.
  • Goodhart’s Law: coined by the economist Charles Goodhart, this law simply states that when a measure becomes a target, it ceases to be a useful metric. And this can happen very easily if you measure and reward teams based on their velocity.

Apart from these, my strong belief is that the fact that something can be measured does not mean it should be. In my experience, many of the simple, easy-to-measure hard metrics serve no purpose and may actually damage the culture in your team.

Velocity does not equal productivity

Velocity is a case in point. Taking velocity as a performance measurement completely bypasses one of the fundamental agile principles: that we should deliver value to customers (or: that working software is our primary measure of progress). Velocity correlates with neither, because:

  • it has no link to value: increased velocity may only mean you are delivering more and more crap to your users,
  • it may lead to accumulated technical debt, compromised quality and other problems: things I would never consider part of working software.

“Productivity” of a team is much more effectively measured by the value the team delivers to customers and by the satisfaction of those customers (for more on productivity, you may want to read my recent post Pushing Productivity Makes No Sense).

There are other (more useful) metrics

The first one is rather obvious: if your stories are of roughly the same size, a simple count of stories finished per sprint will do exactly the same job as velocity, so you do not need estimates to have some sort of velocity. This works even if the stories differ in size, as long as the differences even out: once you average the count over a couple of sprints, you still get a reasonably accurate number.
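
As a minimal sketch (again in Python, with made-up counts), the calculation is literally the same as for velocity, just over story counts instead of story points:

    # Minimal sketch: throughput as a plain count of finished stories per sprint.
    # The counts are purely illustrative.
    stories_finished_per_sprint = [6, 4, 7, 5, 6]

    average_throughput = sum(stories_finished_per_sprint) / len(stories_finished_per_sprint)
    print(f"Average throughput: {average_throughput:.1f} stories per sprint")
    # -> Average throughput: 5.6 stories per sprint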

But is this metric useful? Again, it can be useful for the team as an internal aid during planning or retrospectives, never as the primary focus. So, if you are looking for something to evaluate the performance of your teams, I propose you start with metrics like these (don’t try all of them at once: agree in your team, start with one and then add others):

  • the value you deliver to the customers — are the stories you develop of any use to the customers, do they help them in accomplishing their goals?
  • satisfaction of your customers — do your customers generally like using your product, do they find it easy to use, time-saving and really helping them in whatever they want to achieve?
  • technical quality of your product — are you introducing technical debt, or are you producing hacked, half-baked features and subsequently dealing with lots of bugs?
  • happiness of the team — are you happy on the team, do you think you are doing some meaningful work and learning new stuff?

You can divide these into smaller metrics you might want to look at and keep track of. Just remember that every metric you introduce should be something you use to improve, not something you target. So always ask the following questions when thinking about introducing new metrics:

  • what is this metric good for: how would we use it, what do we expect to learn from it, and what improvements could we make based on it?
  • is this metric something internal to the team, or is it something we want to share with our customers / stakeholders / whoever?
  • how long do we need to gather enough data? (or: how long before we abandon this metric if it does not show anything useful?)

In the end, it all boils down to one simple thing: metrics are there to serve you, and they are never your goal. Use them wisely and sparingly, and only as data supporting the discussions you have. By themselves, metrics don’t tell you what to do, how to improve or what to change. They can only support and guide you.
