
After the pre-election polls spectacularly failed to predict a Conservative majority, many are seriously asking why we should pay them any attention now. Their failure has shaken the public’s trust in numbers appearing in the media. Now Lord Foulkes, who was already displeased with the polls following the Scottish referendum, has resurrected his plan for a state-backed polling regulator.

Various theories have been offered to explain what went wrong – from the 'shy Tories' who actually did believe in the #LongTermEconomicPlan to the Labour voters who just never made the effort to vote on polling day.

These fundamental questions were posed by a selection of MPs at an event in parliament organised by the Royal Statistical Society and the Parliamentary Office of Science and Technology. Daniel Finkelstein chaired the meeting, which brought together ten MPs along with John Curtice, David Spiegelhalter, David Firth, Will Jennings, Penny Young and Ben Page of pollster Ipsos MORI. However, just as the way polls are produced is complicated, so are the reasons why – and crucially, how – they failed.

What’s the use of polls?

This question was answered eloquently by John Curtice. He pointed out that politicians rely on them. In the Westminster bubble, polling goes some way to keeping politicians honest by outlining how the public will respond to their latest big idea or policy wheeze.

Also, someone is always going to commission opinion polls and these will inevitably have a political effect. The best evidence of this was in how the media and political narrative shifted during the general election when the polls recorded the remarkable rise of the Scottish National Party (SNP). This new threat to Westminster’s status quo saw both the Tory and Labour campaigns change course to respectively play up, and play down, what this would mean for the country.

This is where the complexity begins, because the pollsters broadly got the SNP’s success north of the border correct and also did well in predicting UKIP’s vote share. But this counts for little because the UK’s first-past-the-post electoral system means the Labour and Tory performance is paramount.

Why the pollsters failed to gauge Labour and Tory vote shares is the question that the British Polling Council’s inquiry will focus on. It is difficult to answer because, as John Curtice says, ‘we’ve been here before.’

In 1992, the election result produced a similar shock and the polling companies used the lessons learned to adjust their methodologies. So this year, the ‘shy Tory’ factor was dusted off as an explanation for the surprise. But the pollsters already adjust for this, in the same way they already adjust for the ‘lazy Labour’ voter.

David Spiegelhalter was also in attendance to offer his take on the problem. His new book explores people’s sex lives through data, and sex surveys, much like political polls, find it extremely hard to obtain accurate and representative responses. Voting intention and sexual habits may seem like subjects miles apart, but both are sensitive topics on which respondents can easily tip into embarrassment.

Amanda Milling MP echoed the need for deeper surveys. She came into politics from a background in market research and won the marginal seat of Cannock Chase at this election. In her view, the national polls did not reflect her experience on the doorstep in exactly the type of swing constituency that decides the make-up of UK governments.

Why did the exit poll get it right then?

The big shock came at around 10pm on election night when the exit poll was released. The headline figures said the Tories would be the largest party, while the margin of error suggested they could gain a majority.

The initial reaction of many was that it couldn’t be right, with Paddy Ashdown even saying he would ‘eat his hat’ if it was correct. But it was right, and Ashdown was subsequently presented with an edible hat to enjoy.

David Firth, a statistician who was instrumental in designing the exit poll, explained how its approach differs sharply from that of the polling companies. The exit poll models the change in the data between elections. This has the crucial advantage over the pollsters of using the 2010 election as a baseline, instead of asking those surveyed to remember (if they can) who they voted for last time. That regularity in voting behaviour is vital: political views and party allegiances are fairly constant, so the change from one election to the next can be estimated reliably.
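To make the idea of modelling change concrete, here is a rough sketch in Python. It is not the actual methodology used on election night: all figures and constituency names are invented, and the simple uniform-swing assumption is chosen purely for illustration.

```python
# A minimal sketch of the 'modelling change' idea behind the exit poll.
# It is NOT the real methodology; all figures and seats are invented.

# Vote shares recorded at the same sampled polling stations in 2010 and 2015
stations_2010 = [{"con": 0.36, "lab": 0.30}, {"con": 0.41, "lab": 0.27}]
stations_2015 = [{"con": 0.39, "lab": 0.29}, {"con": 0.45, "lab": 0.25}]

def average_change(party):
    """Mean change in a party's share across the paired stations."""
    diffs = [b[party] - a[party] for a, b in zip(stations_2010, stations_2015)]
    return sum(diffs) / len(diffs)

swing = {party: average_change(party) for party in ("con", "lab")}

# Project 2015 shares by applying the estimated change to known 2010 results
constituencies_2010 = {"Seat A": {"con": 0.33, "lab": 0.35},
                       "Seat B": {"con": 0.44, "lab": 0.30}}

projected_2015 = {
    seat: {party: share + swing[party] for party, share in result.items()}
    for seat, result in constituencies_2010.items()
}
print(projected_2015)
```

The real model is considerably more sophisticated, but the logic of projecting forward from a known baseline, rather than relying on respondents’ memories, is the same.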

Opinion pollsters are also finding it harder and harder to persuade people to respond to their surveys. Samples typically contain too few young people and too many of the middle aged, so pollsters try to weight the sample back towards the population. The exit pollsters, by contrast, found that voters interviewed outside polling stations were generally happy to cooperate.
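The kind of adjustment involved is easy to illustrate. The toy Python snippet below reweights a sample so that its age profile matches the population; every number in it is invented, and real pollsters weight on far more variables than age.

```python
# Toy illustration of demographic weighting; all figures are invented.
population_share = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}
sample_share     = {"18-34": 0.15, "35-54": 0.45, "55+": 0.40}

# Respondents in under-represented groups are weighted up, and vice versa
weights = {grp: population_share[grp] / sample_share[grp] for grp in population_share}

# (age group, stated voting intention) for a handful of imaginary respondents
respondents = [("18-34", "lab"), ("35-54", "con"), ("55+", "con"), ("55+", "lab")]

def weighted_share(party):
    """Vote share after each response is scaled by its demographic weight."""
    total = sum(weights[grp] for grp, _ in respondents)
    votes = sum(weights[grp] for grp, choice in respondents if choice == party)
    return votes / total

print({party: round(weighted_share(party), 2) for party in ("con", "lab")})
```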

The other advantage the exit pollsters have is their ability to present respondents with a replica polling card, complete with local candidates’ names, simulating the actual voting experience.

The data from around a hundred polling stations nationwide are then statistically modelled to estimate the parties’ vote shares, before a probability of winning each seat is worked out. Ultimately, this means a single exit poll costing £200,000 trounced the 200-plus cheaply produced opinion polls published throughout the campaign.
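As a very loose illustration of that final step, the sketch below turns a set of assumed per-seat win probabilities into projected seat totals, both by summing the probabilities and by simulation. The probabilities are made up, and treating seats as independent in the simulation is a simplification.

```python
import random

# Assumed per-seat win probabilities, purely for illustration
seat_probs = {
    "Seat A": {"con": 0.70, "lab": 0.30},
    "Seat B": {"con": 0.20, "lab": 0.80},
    "Seat C": {"con": 0.55, "lab": 0.45},
}

# Expected seat totals: sum each party's win probabilities across seats
expected = {p: sum(probs[p] for probs in seat_probs.values()) for p in ("con", "lab")}

def simulate_con_seats(n=10_000):
    """Simulate n elections, treating seats as independent (a simplification)."""
    return [sum(random.random() < probs["con"] for probs in seat_probs.values())
            for _ in range(n)]

sims = simulate_con_seats()
print(expected)
print("Mean simulated Conservative seats:", sum(sims) / len(sims))
```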

The future of the political poll

While the media might be wary of talking about opinion polls for a while, that won’t last. The fact is that those hundreds of commissioned polls provided a constant flow of material for political pundits to fill column inches and airtime with.

The term ‘opinion poll’ also obscures the fact that any single poll is just a snapshot of possible voting intention as conducted on (insert day here), but that takes the drama out of it. The polls will make it back onto the front page eventually (the EU referendum is only round the corner), but the distant hope remains that a serious explanation of the uncertainty involved might also appear.
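That uncertainty is not hard to describe. For a single poll of around 1,000 respondents, the standard textbook calculation sketched below gives a margin of error of roughly plus or minus three percentage points on a party’s share; and because real polls are not simple random samples, even that understates the true uncertainty.

```python
import math

def margin_of_error(share, n):
    """Approximate 95% margin of error for a share estimated from n responses,
    assuming a simple random sample (real polls are not, so this is optimistic)."""
    return 1.96 * math.sqrt(share * (1 - share) / n)

# A party on 34% in a poll of 1,000 people: roughly +/- 3 percentage points
print(round(margin_of_error(0.34, 1000) * 100, 1))
```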

It’s now more than a month since polling day and there is still no clear explanation of why the pollsters got it wrong. The British Polling Council’s inquiry will take a lengthy look at the methods, and the British Election Study could debunk or confirm some of the underlying theories about how voters made up their minds.

Until they come to an evidence-based conclusion, everything else is just speculation subject to our own personal political bias.

 

The British Polling Council and the Market Research Society are holding an open meeting at the RSS on Friday June 19 to set out the terms of reference for their inquiry and to discuss the possible causes for the failure of the polls.
