The NPS® question is asked on a 0 thru 10 scale. The so-called Net Promoter Score® is then calculated at an aggregate level (i.e. for the whole sample) as the percentage of respondents giving 9 or 10 as their answer minus the percentage giving 0 thru 6 as their answer.
I make no comment as to the suitability or otherwise of such a metric – there’s plenty of discussion online dealing with that, plus two papers about to be presented on the topic at the forthcoming AMSRS 2017 Conference ...
But what many do not realise is that the Net Promoter Score® is in effect a simple 3-point scale, which would not, on the surface, seem to have quite the same cachet as the 0 thru 10 scale that’s administered in surveys.
Why do I say that? Because mathematically the Net Promoter Score® is equivalent to each respondent giving -100 (i.e. 0 thru 6), 0 (i.e. 7 or 8) or +100 (i.e. 9 or 10) as their answer. And that’s not roughly equivalent, it’s precisely equivalent. If you calculate the average score across your sample using those values, you get exactly the same result as subtracting the overall percentage giving 0 thru 6 from the overall percentage giving 9 or 10.
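The equivalence is easy to verify numerically. Here’s a minimal sketch in Python (the `responses` list is made-up illustrative data, not a real sample):

```python
# Hypothetical 0-10 answers from eleven respondents (illustrative only).
responses = [10, 9, 8, 7, 6, 3, 9, 10, 0, 5, 10]

# Conventional aggregate calculation:
# % of promoters (9-10) minus % of detractors (0-6).
promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
nps_aggregate = 100 * (promoters - detractors) / len(responses)

# Per-respondent recoding: -100 for 0-6, 0 for 7-8, +100 for 9-10,
# then the plain average across the sample.
def recode(r):
    if r >= 9:
        return 100
    if r >= 7:
        return 0
    return -100

nps_mean = sum(recode(r) for r in responses) / len(responses)

# The two routes give exactly the same number.
assert nps_aggregate == nps_mean
```

The assertion holds for any set of 0 thru 10 answers, because both routes reduce to the same arithmetic: (promoters − detractors) × 100 ÷ n.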
So how do we feel about that, i.e. a very lumpy 3-point scale? Does it seem reasonable to measure customers’ feelings about their product/service supplier in that way?
Surely we could just as easily ask them to give a score of -100, 0 or +100, with -100 = ‘wouldn’t recommend highly’, +100 = ‘would recommend highly’, and 0 = ‘might or might not recommend highly’, or some such.
Or we could ask them similarly to give a score of -1, 0 or +1 and rescale the result by a factor of 100.
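For the -1, 0 or +1 variant, a quick sketch (again with made-up data) confirms that rescaling the mean by a factor of 100 reproduces the conventional score exactly:

```python
# Hypothetical 0-10 answers (illustrative only).
responses = [10, 9, 8, 7, 6, 3, 9, 10, 0, 5, 10]

# Recode each respondent to -1 (0 thru 6), 0 (7 or 8) or +1 (9 or 10).
def recode_unit(r):
    return 1 if r >= 9 else (0 if r >= 7 else -1)

# Mean of the recoded values, rescaled by 100.
nps_rescaled = 100 * sum(recode_unit(r) for r in responses) / len(responses)

# Conventional calculation for comparison.
promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
nps_conventional = 100 * (promoters - detractors) / len(responses)

assert nps_rescaled == nps_conventional
```

The rescaling works because the sum of the -1/0/+1 codes is just (promoters − detractors), so multiplying the mean by 100 recovers the percentage-point difference.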
I honestly don’t know how I feel about this. I am frequently asked to analyse data to produce NPS® ratings, and I will no doubt be continuing to do this into the future. But I think it behoves all of us to give some serious consideration to the advisability, or otherwise, of basing so much on what appears to be so little.