Measuring the wrong thing

Process improvement requires measurement. How do you know what to improve if you can’t see where things aren’t working, and how do you know you’ve made the right change if you can’t see the numbers moving in the right direction?

But measuring something doesn’t mean you’re measuring the right thing, and measuring the wrong thing can drive the wrong outcomes and do real harm (see Liz Keogh’s talk on perverse incentives).

The key to the success of any metric is that the team owns it, so they have complete control over the changes needed to affect it and feel that ownership, and that improving the metric has a useful business outcome.

Metrics that I’ve found useful in the past include:

Number of bugs introduced by a release.

This is a tricky one to work with because it can easily become a perverse incentive, and the feedback cycle can be slow, especially with the usual 6-12 month release cadence. However, on one waterfall project I took over there was a strong negative perception of the quality of the code, and the bug count was a useful proxy because the customer had access to JIRA, so the bug list was already visible. Reducing this number was a combined effort: testers and developers had to approve requirements, developers and testers invested in automated testing at all levels, and testers joined the developer daily stand-up in order to catch bugs before they were released (“have you checked that page in Welsh?”, “we found problems on that page last time because IE was too slow”).
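If your tracker is JIRA, as it was here, a number like this can be pulled automatically rather than counted by hand. The sketch below is a minimal example using the `jira` Python library; the server URL, credentials, project key, and version name are placeholders, and the JQL assumes bugs are linked to releases via the affected-version field, which may not match how your project records them.

```python
# Minimal sketch: count bugs attributed to a release in JIRA.
# Assumes the "jira" Python package; the server, credentials, project
# key, and version name below are placeholders for illustration.
from jira import JIRA

jira = JIRA(server="https://example.atlassian.net",
            basic_auth=("bot@example.com", "api-token"))

def bugs_in_release(project_key: str, version: str) -> int:
    # JQL assumes bugs carry the release in their affected-version field.
    jql = (f'project = {project_key} AND issuetype = Bug '
           f'AND affectedVersion = "{version}"')
    return len(jira.search_issues(jql, maxResults=False))

print(bugs_in_release("PROJ", "2017.1"))
```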

Number of releases per month.

On an agile project we noticed that releases were getting held up: testing was slow and produced a lot of rework, which then had to be re-tested alongside more features that had been pulled from the backlog in the meantime, increasing testing time further. Each release also took half a day, so we had to schedule them carefully.

So we set a goal of focusing on releases for a month and measured how many we did. There were supporting measures within that, such as the testers’ work in progress, the time taken to do a release, and the cycle time from start of development to live release, but they all drove the big number, visible to the whole company: more quality releases.
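Tracking this needs nothing beyond the dates you already have. A rough sketch, assuming each release records when development started and when it went live (the data shape here is made up for illustration; substitute whatever your tracker exports):

```python
# Rough sketch: releases per month and cycle time from release records.
# The record fields ("started", "live") are hypothetical placeholders.
from datetime import date
from statistics import median
from collections import Counter

releases = [
    {"started": date(2017, 3, 1),  "live": date(2017, 3, 20)},
    {"started": date(2017, 3, 6),  "live": date(2017, 3, 24)},
    {"started": date(2017, 3, 15), "live": date(2017, 4, 7)},
]

# Releases per month: the headline number the whole company sees.
per_month = Counter(r["live"].strftime("%Y-%m") for r in releases)

# Cycle time: start of development to live release, in days.
cycle_times = [(r["live"] - r["started"]).days for r in releases]

print(dict(per_month))              # {'2017-03': 2, '2017-04': 1}
print(median(cycle_times), "days")  # median cycle time
```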

Questions answered per day.

This can be a very useful metric for research projects, especially when following a fail-fast approach, where you want to prove something can’t be done before you invest in doing it. To do that, you need lots of questions with quick answers, to accelerate learning. Any answer, positive or negative, is progress: it means we have learned something.

“I cheerily assured him that we had learned something. For we had learned for a certainty that the thing couldn’t be done that way, and that we would have to try some other way.” – Thomas Edison

Age of Pull Requests

Successful agile projects rely on peer review, either via pairing or via a formal review process such as git PRs (the relative merits of each are beyond the scope of this post). However, when we started working with PRs on one project, we discovered that we were building up a large debt of work in progress because the team wasn’t incentivised to review each other’s code, so one of the developers set up a nag bot for any PR older than 4 days. It lasted 2 weeks before it was no longer needed.
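The post doesn’t say what that bot was built on, but the idea is simple enough to sketch. A minimal version, assuming GitHub’s REST API for listing open PRs and a Slack incoming webhook for the nagging; the repo name, token, and webhook URL are placeholders:

```python
# Minimal sketch of a stale-PR nag bot, assuming GitHub and Slack.
# Repo name, token, and webhook URL below are placeholders.
from datetime import datetime, timedelta, timezone
import requests

REPO = "example-org/example-repo"
GITHUB_TOKEN = "ghp-placeholder"
SLACK_WEBHOOK = "https://hooks.slack.com/services/placeholder"
MAX_AGE = timedelta(days=4)

# List open pull requests for the repository.
resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "open"},
    headers={"Authorization": f"token {GITHUB_TOKEN}"},
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for pr in resp.json():
    # created_at is ISO 8601 with a trailing "Z" (UTC).
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    age = now - opened
    if age > MAX_AGE:
        # Nag the team channel about any PR older than the threshold.
        requests.post(SLACK_WEBHOOK, json={
            "text": f"PR #{pr['number']} '{pr['title']}' has been open "
                    f"for {age.days} days: {pr['html_url']}"
        })
```

Run it from a daily cron job or CI schedule; the point is the visible, automatic nag, not the plumbing.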

What about you?

What metrics have you used to incentivise the team? What problem were you trying to solve, and did the metric work?
