
These headlines offer a snapshot of the responses to the findings of the ‘Inquiry into the 2015 British general election opinion polls’, an inquiry set up by the British Polling Council and the Market Research Society soon after it became clear that the polls had called the election wrong; that the expected “statistical dead heat” in vote share between the Conservative Party and the Labour Party had failed to materialise and that, in fact, the Conservatives had won a majority in the House of Commons with a seven point lead over Labour.

The inquiry’s central conclusion – that unrepresentative samples were the primary cause of the polling miss – was revealed back in January. A statement issued at the time explained how “methods of sample recruitment used by the polling organisations resulted in systematic over-representation of Labour voters and under-representation of Conservative voters”, and that “statistical adjustment procedures applied by polling organisations were not effective in mitigating these errors”. Or, as The Sun put it, “General election pollsters got result so badly wrong because they did not ask enough Tories”. 

The full report of the inquiry, published last week, makes recommendations for improving current practice, but is careful to note that its suggested improvements will only reduce the risk of future polling misses rather than remove it altogether. One issue is the continuing use by pollsters of non-random sampling, rather than 'gold standard' random sampling methods.

Random sampling – or probability sampling – is more expensive and more time-consuming to implement, but it has a clear advantage over non-random – or non-probability – sampling. As respondents are selected randomly, all members of the population have a known chance of being selected to participate in the survey. Though this doesn't in itself guarantee that the resulting sample is fully representative of the population, the random nature of the selection means pollsters can use sampling theory to adjust for over- and under-represented groups. It also reduces the risk that the respondent recruitment process is in some way biased, because the potential for self-selection into the sample is lower. For example, people who elect to join an online panel to take part in regular surveys are likely to be rather different from the general population in terms of their level of political engagement, likelihood of voting in elections, and socio-economic status.
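To see how such adjustments work in principle, here is a minimal sketch in Python, using invented selection probabilities rather than anything from the inquiry: each respondent receives a weight equal to the inverse of their known chance of selection, so that under-sampled groups count for more in the final estimate.

```python
# Illustrative sketch only: the figures below are invented, not taken from
# the inquiry. With a random (probability) sample, each respondent's chance
# of selection is known, so a design weight of 1 / p_selected corrects for
# groups that were sampled at different rates.

respondents = [
    {"group": "engaged",    "vote": "Labour",       "p_selected": 0.002},
    {"group": "engaged",    "vote": "Conservative", "p_selected": 0.002},
    {"group": "disengaged", "vote": "Conservative", "p_selected": 0.001},
]

# Inverse-probability (design) weights: under-sampled respondents count for more.
for r in respondents:
    r["weight"] = 1.0 / r["p_selected"]

# Weighted vote shares adjust for over- and under-represented groups.
total = sum(r["weight"] for r in respondents)
for party in ("Labour", "Conservative"):
    share = sum(r["weight"] for r in respondents if r["vote"] == party) / total
    print(f"{party}: {share:.1%}")
```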

Within the non-random sampling framework, the inquiry recommends that pollsters “take measures to obtain more representative samples within the weighting cells they employ”. It also recommends that the Economic and Social Research Council (ESRC) fund a pre- and post-election random probability survey. “There is, of course, no guarantee that a random probability survey would get the vote intention estimate correct,” says the report. “But it would reduce the risk of being wrong and, moreover, would represent a very useful means for non-random polls to benchmark their estimates, not only in terms of headline vote intention but also to a range of other measured variables, some of which might be used in setting quota and weighted targets.”
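As a rough illustration of what weighting cells involve (a Python sketch with hypothetical numbers, not figures from the inquiry), respondents are grouped into cells, here by level of political engagement, and each cell is weighted up or down to a benchmark target, such as one drawn from a random probability survey. The inquiry's point is that this only helps if the sample is reasonably representative within each cell to begin with.

```python
# Hypothetical sketch of weighting cells, with invented numbers. Suppose a
# non-random sample of 1,000 over-recruits the politically engaged relative
# to a benchmark target (e.g. one taken from a random probability survey).

sample_counts = {"high_engagement": 700, "low_engagement": 300}
benchmark_shares = {"high_engagement": 0.50, "low_engagement": 0.50}

n = sum(sample_counts.values())

# Each cell is weighted so its share of the weighted sample matches the
# benchmark. Note the inquiry's caveat: this cannot fix bias *within* a
# cell if the people recruited into it are themselves unrepresentative.
cell_weights = {
    cell: benchmark_shares[cell] / (sample_counts[cell] / n)
    for cell in sample_counts
}
print(cell_weights)  # {'high_engagement': 0.71..., 'low_engagement': 1.66...}
```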

The limitations
Some readers of the inquiry report might despair at the notion, expressed above, that even a 'gold standard' approach to surveying voters could still result in the wrong answer. Indeed, the inquiry notes that the 2015 British Election Study, which was “carried out over a period of several months after the election” and “used the highest quality methods at all stages”, still managed to over-estimate the Conservative share of the vote, while under-estimating that achieved by the UK Independence Party.

But if despair ultimately gives way to pragmatism, and an appreciation for the challenges and uncertainties in polling, the inquiry may well feel that it has served its purpose. It writes that “a desirable legacy for this report is that it might effect a more realistic appraisal among stakeholders of the limits of sample-based research to predict the future behaviour of complex, dynamic and reflexive populations”.

On the challenges specifically, inquiry chair Patrick Sturgis made the point during last year's Cathie Marsh lecture that pollsters face an uphill struggle in having to ask a poorly defined population (likely voters) about behaviour that may take place in the future. As such, he said, it was more surprising to find that polls get anywhere close to the actual result, not that they are – on occasion – wildly off the mark.

That said, the inquiry is clear that "there are improvements that can and should be made to how polling is currently practised in the UK". In addition, it says that alongside methodological changes “must come greater transparency about how polls are conducted and clearer communication of the likely levels of uncertainty in their estimates”.

The final two recommendations of the inquiry are for British Polling Council (BPC) members to “provide confidence (or credible) intervals for each separately listed party in their headline share of the vote”, and to “provide statistical significance tests for changes in vote share for all listed parties compared to their last published poll”.
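By way of context, a confidence interval of the kind recommended might, under simple random sampling assumptions and with an assumed poll of 1,000 respondents, look something like the Python sketch below (real polls would also need to account for design effects introduced by weighting).

```python
import math

# Assumed figures, not from any real poll: a 1,000-person sample reporting
# a 34% share for one party.
n = 1000
share = 0.34

# Normal-approximation 95% confidence interval for a simple random sample
# (real polls would also need to allow for design effects from weighting).
se = math.sqrt(share * (1 - share) / n)
margin = 1.96 * se
print(f"{share:.0%} plus or minus {margin:.1%}")  # roughly 34% ± 2.9%
```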

It notes that: “Commentators are prone to over-interpreting small changes in party shares between opinion polls, sometimes giving the public the impression that party fortunes are shifting when the evidence does not support the inference. A requirement to test for change since the last published poll does not preclude discussion and analysis of whether changes from a larger set of polls, taken together, might constitute evidence of a change. Responsible media commentators would be much less inclined, however, to report a change in party support on the basis of one poll which shows no evidence of statistically significant change.”
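The sketch below, again with invented figures, shows the sort of test being recommended: a two-proportion z-test of whether one party's share has changed since the previous published poll. With two polls of 1,000 respondents each, a two-point movement falls well short of statistical significance at the 5% level.

```python
import math

# Invented figures: previous and latest poll estimates for the same party.
n1, p1 = 1000, 0.32   # previous published poll: sample size and party share
n2, p2 = 1000, 0.34   # latest poll

# Two-proportion z-test under the null hypothesis of no change in support.
pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

print(f"z = {z:.2f}")  # about 0.95; well below 1.96, so not significant at 5%
```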

In response to these two specific recommendations, the BPC says it will look to develop industry-wide methods and approaches to calculating confidence limits and statistical significance, and that rules requiring publication of these calculations will be introduced once the necessary work is complete. In the meantime, the BPC and Market Research Society promise a host of other work, including the updating of guidance (produced jointly with the Royal Statistical Society) on the use of statistics in communication, and the production of a guide for the public on how to read polls.

While none of this directly addresses the main failings of the 2015 general election polls, and transparency in itself won’t prevent future mishaps, it should help the public, politicians and the media get a better handle on the limitations of polling, the sources of potential error, and the uncertainty in reported estimates. That, in turn, may lead to more critical and considered assessments of what future election polls can and can’t tell us about voting intentions, party shares and – perhaps most importantly – who will be the next Prime Minister.

  • Meanwhile, over in the US, Rob Santos, vice-president of the American Statistical Association, writes that "embarrassing polling flubs seem increasingly common". In a Los Angeles Times op-ed, Santos explains how these blunders are "the downstream consequences of large-scale social and technological changes, which affect how the public consumes polls and how pollsters conduct them".

 
