Sunday, July 28, 2013

Measuring What Matters


At a conference I attended, there was this obnoxious little man who needled the speakers every time they mentioned a metric. When someone cited a change in their Net Promoter Score as evidence that an initiative had been successful, he'd ask a flurry of pointed questions about what the metric meant in terms of the firm's financial results, flustering the speaker (who was unprepared to go down this sidetrack) and annoying the audience (who waited patiently for the speaker to return to the topic they had come to hear about). Later in the day, this heckler took the stage to deliver his own presentation, expanding on the theme he had wedged into the other speakers' sessions. What he had to say on the matter was impressive, and well worth considering.

Primarily, there is a widespread problem of metrics being chosen on a whim. Companies latch on to a fashionable metric such as the Net Promoter Score (NPS) or Customer Experience Index (CXI) and use it to measure the performance of their initiatives without, as he had so aggressively suggested, considering whether the metric correlates with any behavior that is of the remotest value to the firm.

That is to say that a 10% improvement in NPS or CXI does not automatically translate into a similar increase in revenue or repeat visits. Sometimes one score improves while the other declines; sometimes both scores improve even as your revenue diminishes, your reputation is damaged, and your customers' loyalty is strained. As such, you really shouldn't take any metric for granted.
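
To make that concrete, here is a minimal sketch of the sanity check the heckler was effectively demanding. All the figures are invented, and the code is only a sketch of the mechanics: compute the correlation between a score and the outcome it supposedly stands for before trusting it.

    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Invented quarterly data: the NPS climbs steadily while revenue drifts down.
    nps     = [32, 35, 41, 44, 47, 51]
    revenue = [10.2, 10.1, 10.4, 9.8, 9.9, 9.7]  # $M

    print(f"correlation(NPS, revenue) = {pearson(nps, revenue):+.2f}")
    # Prints a negative r: the "improving" score is not tracking the outcome,
    # which is exactly the situation described above.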

Firms really should consider their metrics in terms of meaningful outcomes - but it's obvious that few actually do. Management will set an arbitrary goal tied to an arbitrary metric (such as a 10% improvement in NPS) and feel satisfied when they hit the mark they set for themselves, regardless of its impact on the firm's performance. It turns out we're fond of numbers even when they don't mean anything: five is better than six, regardless of what is being measured or whether there's any demonstrable benefit to being a five rather than a six.

The one possible use for industry-standard metrics is to compare your firm to others in your industry. But even that is ultimately meaningless: a company that achieves a higher score in one regard may perform worse than another firm whose score is lower. Comparing scores also means taking for granted that the industry itself is doing well - to be the customer service leader in the airline or used-car industry is like being the healthiest of the terminally ill.

Ultimately, the work we do is about improving the customer experience, taking it on faith that this will improve the financial performance of our firms - and generally, this turns out to be true. Having a vague sense that you're doing the right thing is likely better than having no sense at all - and certainly better than feeling a sense of accomplishment for doing something terrible. But it cannot be taken for granted that a general metric is an indicator of success at anything but making a number better.

And in this sense, what "works" for the industry may not work for a given firm. Two competitors may find different metrics meaningful: one may find that revenues rise in step with NPS, another may find that revenues rise in step with CXI, and a third may find a completely different metric to be meaningful.
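
If I were advising one of those firms, the exercise might look something like the sketch below: rank the candidate scores by how strongly each tracks the firm's own revenue. The numbers are invented, and the only library call is statistics.correlation from Python's standard library (3.10+); a competitor running the same exercise on its own data could end up with a completely different ranking.

    from statistics import correlation  # Python 3.10+

    # Invented per-quarter series for one firm; a competitor's data could
    # rank these same candidates in an entirely different order.
    revenue = [10.2, 10.6, 10.4, 11.1, 11.5, 11.8]  # $M
    candidates = {
        "NPS":           [32, 35, 41, 38, 44, 47],
        "CXI":           [61, 60, 63, 59, 62, 61],
        "repeat_visits": [480, 510, 495, 560, 590, 615],
    }

    ranked = sorted(candidates.items(),
                    key=lambda kv: abs(correlation(kv[1], revenue)),
                    reverse=True)
    for name, series in ranked:
        print(f"{name:13s} r = {correlation(series, revenue):+.2f}")
    # Pursue only the metric(s) with a strong, stable correlation to the
    # outcome; the rest are the magazine-quiz scores described below.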

I'm left with the distinct sense that I should ask some of the same irksome questions of my own colleagues, albeit more gently and in a more appropriate forum: determine which of the metrics is actually meaningful, encourage the pursuit only of those that can be strongly correlated to outcomes, and forsake the others for what they are - scores no more meaningful than a magazine quiz.
