From Peter Drucker, to whom it is attributed, we have inherited the dictum ‘you can’t manage what you can’t measure’. Drucker meant that you need to know whether you are successful, so you need a way of tracking, of knowing. But out of context, the phrase has become a management fundamentalist mantra, a proxy for numbers.
People who feel that to be a good manager they must measure at any cost will look around for anything that looks, feels or smells measurable, and will forget the small detail of the relevance of the measurement.
This has become an organizational pathology in its own right. In its fully developed form, the praxis of management becomes defined by the units of whatever looks like a measurement to which a cost can be attached.
A leadership programme becomes 45 workshops for 50 people each, over 6 months, with 5 trainers and this much of a budget.
A strategy implementation by a Big Consulting Firm becomes 50 (only 50?) consultants, 2 principals, 1 partner, 2 years and this much of a budget.
A deployment of values becomes 3 communication programmes in 6 business units, with 10 workshops each, 6 Town Hall meetings and 5 coaches and this much of a budget.
Here, the activity is the programme (and its busy-ness its proof).
The reality is that you can measure anything, the relevant and the irrelevant, the key factors and the rubbish, the sense of progress and the sense of busy-ness.
At the core of the confusion, and this is something many people don’t want to hear, is that measurement does not necessarily equal numbers. Many moons ago I had the privilege to learn Decision Analysis (DA) from Larry Phillips at the LSE in London, on a memorable personal tutorial basis. The Multi-Attribute DA taught at the LSE at the time played with ‘preferences’. Silly and obvious as it may sound many years later, it was a revelation for me to realize that when we said ‘if we always prefer A to B, and always prefer B to C, then we always prefer A to C’, we were measuring! A discussion for another day.
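To make the point concrete: a consistent set of pairwise preferences is already a measurement, because it yields an ordering, even though no number is ever attached to the options themselves. A minimal sketch (the options and preferences here are my own illustrative inventions, not from any real decision):

```python
# A transitive set of pairwise preferences produces an ordinal scale:
# a ranking of options, with no numeric scores anywhere.
from functools import cmp_to_key

# Hypothetical preferences, stored as (preferred, less_preferred) pairs.
preferences = {("A", "B"), ("B", "C"), ("A", "C")}

def prefer(x, y):
    """Comparator: -1 if x is preferred to y, 1 if y is preferred to x."""
    if (x, y) in preferences:
        return -1
    if (y, x) in preferences:
        return 1
    return 0

# Sorting by the preference relation recovers the ordering A > B > C.
ranking = sorted(["C", "A", "B"], key=cmp_to_key(prefer))
print(ranking)  # -> ['A', 'B', 'C']
```

That ranking is a measurement on an ordinal scale: it tells you which option stands where, which is often all a decision needs.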
The obsession with what is obviously measurable, right in front of your eyes, leads inevitably to blindness to what is not: the not-obviously-measurable, which may well be the real key thing to track.
In reality, ‘measurement’ is never a single set of hard data. It is the combination of many points of insight. The great thing about Multi-Attribute Decision Analysis is that it beautifully manages to merge soft and hard data in the same pot. For example, in the area of ‘cost’, money is only one parameter. Others are, say, the pain of execution, distraction from objectives and the probability of p***ing people off. And once you get used to playing with the trade-offs (e.g. I am willing to trade half of my cost for one quarter of my pain), you are in different territory.
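The mixing of soft and hard data into one ‘cost’ can be sketched very simply. Everything below is an illustrative assumption on my part (the option names, the 0–100 scores for the soft attributes, and the trade-off weights), not a real assessment:

```python
# A minimal multi-attribute 'cost' sketch, in the spirit of
# Multi-Attribute Decision Analysis. Each attribute is scored 0-100
# (higher = worse); the weights express the trade-offs between money
# and the soft attributes. All numbers here are invented for illustration.

options = {
    "Option A": {"money": 80, "pain": 30, "distraction": 20, "annoyance": 10},
    "Option B": {"money": 40, "pain": 70, "distraction": 60, "annoyance": 50},
}

# Trade-off weights: how much each attribute matters relative to the others.
weights = {"money": 0.4, "pain": 0.3, "distraction": 0.2, "annoyance": 0.1}

def total_cost(scores):
    # Weighted sum: soft and hard data merged into one comparable number.
    return sum(weights[attr] * score for attr, score in scores.items())

for name, scores in options.items():
    print(name, total_cost(scores))
# Option A: 0.4*80 + 0.3*30 + 0.2*20 + 0.1*10 = 46.0
# Option B: 0.4*40 + 0.3*70 + 0.2*60 + 0.1*50 = 54.0
```

Note what happens: Option B is cheaper in money terms yet comes out worse overall once pain, distraction and annoyance sit in the same pot. That is the different territory the trade-offs put you in.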
Would you like to comment?