The General Election losers: Labour, the Lib Dems… and the pollsters

How did it all go so wrong?

Following the shock exit poll last Thursday night and the subsequent unexpected Conservative victory, the British Polling Council has announced its intention to launch an inquiry into ‘inaccurate’ election polling.

Pollsters have admitted that there appears to have been a systematic overestimation of Labour’s vote share by the majority of pollsters, and a parallel underestimation of the Tories’ share. The last time the polls were this far out was in 1992, when the Tories also won an unexpected victory, prompting the industry to conduct a comprehensive review that resulted in a number of changes to polling methodology. So what went wrong this time?

There are a number of factors that could have led to the discrepancy in the UK polls, including the increasing complexity of the UK political system, the ‘Shy Tory’ phenomenon, and the volume of undecided voters: with up to 23% of voters still undecided only a couple of days before polling, decisive predictions were always going to be difficult.

However, it is important to note that polls are not forecasts, but samples. There is a distinction between voting intention polls (carried out by a number of polling companies) and the exit poll (of which there is only one official version, conducted by NOP/MORI for BBC/Sky/ITV News). Voting intention polls do not predict seat numbers; they provide an insight into how the electorate intends to vote. In contrast, the exit poll (which is far more comprehensive, with a sample of 22,000) predicts the number of seats won by each party, based on a poll of voters leaving the polling station and a calculation of probability.

As Andrew Hawkins, Chairman of ComRes, has stated: “We do indeed, together with academics and the media, need to look at how that vote share translates into House of Commons seats – that is certainly true. But there is no need to throw the baby out with the bathwater. Most of the polls from most of the pollsters were within the margin of error.” Indeed, on the morning of the election Survation produced a poll showing a share of the vote for each party very close to the final result, but its figures seemed so “out of line” with other polling that the company chose not to publish it.
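To put Hawkins’s margin-of-error point in context: for a typical voting intention poll of around 1,000 respondents, the standard 95% margin of error on a single party’s share is roughly ±3 percentage points. The short Python sketch below illustrates the standard calculation; the sample size and vote share used are illustrative assumptions, not figures from any particular poll.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated vote share p from a random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures only: a party polling at 34% in a sample of 1,000 respondents.
moe = margin_of_error(0.34, 1000)
print(f"Margin of error: +/-{moe * 100:.1f} points")  # roughly +/-2.9 points
```

What sampling error alone cannot explain, however, is a bias in the same direction across many independent polls, which is precisely the kind of systematic error the inquiry will be examining.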

As Nate Silver, the American statistician and founder of FiveThirtyEight, has suggested, this problem may not be limited to the UK General Election: inaccurate polls also played a part in the US midterm elections, the Scottish independence referendum, and the Israeli election. Silver suggests this may be down to the growing difficulty pollsters face in reaching a representative sample of voters, requiring them to allow for a greater margin of error, one for which they may have overcompensated.

Pollsters are increasingly struggling to reach people on landlines and are resorting to mobile and online polling, which are considered less reliable. They will certainly have to review their methods ahead of future elections, and whatever their response, journalists are likely to report the polls with greater scepticism. Expect a return to more face-to-face polling, and more caveats around predictions. This presents an opportunity for more air time to be spent on matters of policy and political debate, and presents the political parties with a challenge: to come up with more compelling campaign content to fill the gap.