Brian Christian and Tom Griffiths are the authors of ‘Algorithms to Live By: The Computer Science of Human Decisions’ (2016), which should be compulsory reading for leaders interested in Critical Thinking and decision making. As for the ones who are not, please switch off the lights on your way out.
They coined the term ‘computational kindness’ (last chapter of the book, rabbit out of the hat), a cheeky concept explaining how limiting our choices may be very good for us. Paraphrasing them a bit, they give the example of a group of friends deciding where to go for dinner.
We could go for Chinese, or maybe Indian. Mind you, there is a new Italian in town; or perhaps our old grill, or the Greek on the corner. Response: I don’t mind, really.
(Eventually they land on the Italian, only to discover that (a) it’s not that good and (b) none of them really wanted Italian in the first place.)
Both parties have been computationally very unkind to each other. Each forces the other to compute too many possibilities. In an attempt to be kind by offering many options, one hands too much homework to the other mind. The ‘kind response’ from the other, ‘I don’t mind’, is kind only conventionally speaking, not to the friend’s mind. That mind says: here we go again, do I have to decide?
The alternative: Indian or Greek? 7:30?
That is being computationally very, very kind!
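In toy terms (my sketch, not the authors’; restaurant and time lists are made up for illustration), the kindness is simply the size of the decision space the other mind has to evaluate:

```python
from itertools import product

# Hypothetical open-ended invitation: the friend must weigh every
# restaurant against every plausible time slot.
restaurants = ["Chinese", "Indian", "Italian", "Grill", "Greek"]
times = ["7:00", "7:30", "8:00", "8:30", "9:00"]
open_ended = list(product(restaurants, times))

# Computationally kind invitation: "Indian or Greek? 7:30?"
kind = [("Indian", "7:30"), ("Greek", "7:30")]

print(len(open_ended))  # 25 combinations to mull over
print(len(kind))        # 2: pick one and go
```

Twenty-five combinations versus two. No wonder the lazy mind answers ‘I don’t mind’.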
This reframing of choices has been tried many times in Behavioural Economics ‘experiments’, where the data shows you are much more likely to get a good response by proposing a meeting ‘Thursday or Friday at 9:30’ than ‘any day that week, I don’t mind, I’ll be flexible’. It seems to have little to do with real availability and more with our lazy minds having to make a decision. Or not.
The implications for decision making are important. Fewer options and choices, or at least a not-so-open set of possibilities, may prove a more effective way to go about it.
Agree or agree?