There were clear losers in yesterday's UK General Election: people who should do some serious soul-searching, think critically about what went wrong, and acknowledge a fiasco on a grand scale. I am talking, of course, about the owners and leaders of the opinion-polling companies. Their 'neck and neck' race between the two major parties, Conservative and Labour, translated into an absolute majority for the Conservatives and a real disaster for everyone else.
Did people change their minds? Were they lying in the opinion polls? Were the samples well constructed? The weather? The position of the moon?
The trouble with numbers is that they are fascinating; they have the same effect as headlights on a rabbit. It is much harder to judge intentions, fears, love and hate, or a rejection of A that gets translated into an adoption of B.
The poll numbers were wrong. The new ballot numbers are overwhelming.
In organizations, we are much better at 'managing by numbers' than at managing, and then translating into numbers. We are much better at launching Employee Engagement surveys that produce numbers than at understanding what is behind the numbers and predicting employee behaviours, collectively and individually. Somebody unsatisfied with the company's work-life balance is a number. This person may be unsatisfied with the company, full stop, and then spread that dissatisfaction across the company, extending a halo effect to its work-life balance, which, in itself, could perhaps even be pretty good.
Leaders need to spend 90% of their time understanding the why and 10% understanding the numbers.
After all, leading towards the future needs a fair dose of prediction. Predictions need a why. Mastering the ability to predict means going deeper into the understanding of causes and effects, a harder task than saying yes, no, or wow! to a set of numbers.
Predicting is also imagining scenarios, imagining worlds, making sense of the past and present and projecting in space and time. This should be the ABC for leaders but, as the owners and leaders of all UK polling companies have now understood, it may take a fresh dose of behavioural science to polish the trade.
The numbers in the spreadsheet are amoral digits in a cell, until you bring behavioural science in and start making sense of them.
In trying to diagnose why the polls were so wrong in the recent UK election, there are many sociological questions to think about: for example, were there a lot of people who were embarrassed to tell the pollster which party they really supported? I think the social and political scientists will have a lot of fun with this (if they are not all sacked in the next wave of austerity).
I would suggest that there are also two mostly-technical issues that analysts should keep an eye on. One is that there is a fundamental difficulty in designing a good voting system for elections with more than two significant candidates. The economist Kenneth Arrow, back in the 1950s, proved that there is no perfect voting system in such situations — one that can never lead to an anomalous choice. There has been a lot of more recent work on voting systems. In an N-way election, a system that allows people to vote for just a single candidate, with “first past the post” counting, is one of the worst choices you can make.
This is not just a theoretical concern. In the 2000 U.S. election, the very liberal candidate Ralph Nader siphoned off a small number of votes from liberal Al Gore in Florida, and in so doing handed the election to conservative George W. Bush. History would have been very different if Nader, seeing that he was not going to win, had withdrawn in the last week. Or if the U.S. had adopted an “instant runoff” system for voting in each district.
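The Nader-Gore-Bush dynamic can be sketched in a few lines of code. The following is a minimal comparison, with invented ballot counts for illustration, of first-past-the-post counting against an instant runoff on the same ranked ballots:

```python
from collections import Counter

# Hypothetical election: 100 voters rank three candidates.
# 49 voters: C first; 48 voters: A, then B; 3 voters: B, then A.
ballots = (
    [("C", "A", "B")] * 49 +
    [("A", "B", "C")] * 48 +
    [("B", "A", "C")] * 3
)

def plurality(ballots):
    """First past the post: only first choices count."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def instant_runoff(ballots):
    """Repeatedly eliminate the weakest candidate and redistribute
    their ballots until someone holds a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        winner, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return winner
        remaining.discard(min(tally, key=tally.get))

print(plurality(ballots))       # C wins with only 49 of 100 first choices
print(instant_runoff(ballots))  # B is eliminated; B's 3 ballots flow to A, who wins 51-49
```

The split vote hands C the plurality win even though a 51-vote majority prefers A to C; the instant runoff recovers that majority preference without anyone having to vote strategically.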
With the “first-past-the-post” system, voters must vote strategically: I prefer candidate A, but I have to vote for B to prevent C from being chosen. Voters are not good at making these choices, and often the choice depends on what the polls are saying as the election nears. So the published polls themselves cause people to change their votes at the last minute. In a parliamentary system, the situation is even more complex, and more confusing for voters and for pollsters. It would be interesting to do a post-election survey to see if voters wish that they (and people with similar views) had voted differently now that they see the outcomes.
Second, there is a science to designing a good poll, but the people who actually produce the polls do not do a very good job technically; in many cases they don’t want to, because the real purpose of the poll is to subtly advance one position over another in the guise of objective information gathering. How many times have you participated in a poll in which the answer you really wanted to give was “doesn’t apply in my case” or “none of the above” or “it depends” or “I like option B, but the devil is in the details”? But the online form or the person on the phone can’t accept these answers: the poll has to be tabulated mechanically to produce tables of simple numbers and simple graphs, and nuanced answers are not acceptable. So: garbage in, garbage out.
So there’s more going on here than deliberately deceptive answers to pollsters or a mysterious last-minute shift in public opinion.