
Election season is now well underway in the UK and every scandal, botched interview and policy announcement is being elevated to profound status. These triumphs or disasters are then inevitably used to explain the parties’ constant movements in the political polls. But are the pollsters really giving us an accurate snapshot of the national mood?


The main pollsters are all members of the British Polling Council (BPC), set up in 2004 to promote transparency in polling methodology and improve standards. Every new poll now comes with reams of data tables, and this is where things get interesting.


These tables hold information about what questions were asked, the raw (unweighted) number of responses, the weighted responses and the methodology used for that particular poll. While it would be nice to be able to say some pollsters use better methodology than others, the problem is made more complex by the fact that they use different methods of data collection depending on the funding and scope of the poll.

The biggest single issue in producing accurate polls is the sampling frame used. If the people asked represent only a subset of the overall target population, biased results will follow. So we set out to see how the pollsters’ methods differ.

YouGov (pollster for The Sun and The Sunday Times)

YouGov has recruited a huge online panel of 360,000 adults to take its surveys, but this group is to some extent self-selecting and not representative of the entire population of voters. When it launches a new poll, YouGov invites a much smaller sub-sample of the panel to take part, chosen to be representative in terms of age, gender, social class and newspaper readership.

Afterwards the responses are checked against various other surveys and re-weighted if necessary. Respondents are encouraged to take part by a small cash incentive, designed to increase response rates and make sure that not all participants are political animals. A possible disadvantage is that it might increase the response rate differently among different groups: for example, if someone doesn’t respond because they’re busy, offering them money may make no difference.
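
To make the re-weighting step concrete, here is a minimal sketch in Python with invented figures. It is not YouGov’s actual procedure, just the basic arithmetic of matching a sample to population shares.

```python
# A sketch of group-level re-weighting with invented figures: each
# respondent in a group gets weight population_share / sample_share,
# so the weighted sample matches the population profile.

population = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}  # e.g. census shares
sample = {"18-34": 0.35, "35-54": 0.38, "55+": 0.27}      # shares achieved in the poll

weights = {group: population[group] / sample[group] for group in population}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# Over-represented groups are weighted down (w < 1),
# under-represented groups are weighted up (w > 1).
```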

Populus

Populus also uses online polling with similar methodology to YouGov and a panel of around 100,000 people, but the outcome appears to be systematically different. During January, Populus consistently recorded a Labour lead of around 2-3 percentage points, whereas YouGov suggested a dead heat. This is presumably down to a combination of the different make-up of their panels and the weighting they use. It illustrates the limits of all polling: there isn’t much point in getting a massive sample if it won’t be representative.

Like most pollsters, Populus and YouGov both prompt voters with a list of parties that includes UKIP and the SNP or Plaid Cymru (in Scotland and Wales respectively), but not the Green Party. This may slightly reduce the proportion of people who say they will vote Green, but the effect is unlikely to be dramatic.

TNS

TNS also uses an online panel, of 150,000 people, with a small cash incentive for taking part. It is noticeable that the re-weighting it does for age is much more dramatic than either Populus or YouGov, as its samples don’t contain enough over-65s. Re-weighting reduces the effective sample size of a poll: the stronger the re-weighting has to be, the stronger this effect is. TNS also uses a smaller sample size than YouGov and Populus, around 1,000 per poll rather than 1,500-2,000. This combination has led to relatively volatile results in TNS polls, which have shown Labour leads of between 0 and 7 points since January.
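
Kish’s approximation, in which the effective sample size is (Σw)²/Σw², is one standard way of quantifying this loss. A rough sketch with invented weights:

```python
# Kish's approximation of effective sample size: n_eff = (sum w)^2 / sum(w^2).
# Equal weights give n_eff = n; the more the weights vary, the smaller n_eff.
# The weights below are invented for illustration.

def effective_sample_size(weights):
    return sum(weights) ** 2 / sum(w * w for w in weights)

mild = [1.0] * 800 + [1.25] * 200   # light adjustment
heavy = [0.6] * 700 + [2.5] * 300   # strong adjustment, e.g. too few over-65s

print(round(effective_sample_size(mild)))   # ~991: almost the nominal 1,000
print(round(effective_sample_size(heavy)))  # ~644: a third of the sample's power lost
```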

Ipsos-MORI (pollster for the Evening Standard)

Ipsos-MORI uses random digit dialling, which was the gold-standard method of polling until only a few years ago. The problems with this are obvious, as fewer and fewer households have landline telephones (at least ones they use regularly). Additionally, people don’t like cold calls, so the response rate to such surveys is often as low as one in five. If people who respond to the survey and have landlines are systematically different from everyone else, the pollster needs to detect this and adjust.

To compensate, Ipsos-MORI now draws one fifth of its sample from mobile numbers. This creates the further complication that some people have multiple mobiles, and so are more likely to be sampled. The pollsters work on the basis that any correlation between number of phones and voting patterns is stripped out when the sample is re-weighted according to demographic factors (e.g. choice of newspaper). Telephone polls are much more expensive than online panels, so far fewer of them are conducted.
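
An alternative correction, not the one Ipsos-MORI describes, would be to weight directly by selection probability: someone reachable on k phones is roughly k times as likely to be dialled, so gets a design weight of 1/k. A toy illustration:

```python
# A toy illustration (not Ipsos-MORI's stated method): weight each
# respondent by the inverse of the number of phones they can be
# reached on, so the easily sampled count proportionately less.

respondents = [
    {"id": "A", "phones": 1},  # landline only
    {"id": "B", "phones": 2},  # landline plus mobile
    {"id": "C", "phones": 3},  # landline plus two mobiles
]

for r in respondents:
    r["weight"] = 1 / r["phones"]

for r in respondents:
    print(r["id"], round(r["weight"], 2))
```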

Survation (pollster for The Mirror)

Survation uses online, telephone and face-to-face interviews. The online interviews are based on opt-in panels, creating an obvious source of selection bias. The telephone interviews are based on the BT directory of residential numbers: listed numbers are used as seeds, with digits randomly altered so that unlisted numbers are also covered. However, these are generally only landline numbers.
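
A rough sketch of what that seeding process might look like; the numbers below are fictional, and the real procedure will certainly be more involved:

```python
# Directory-seeded random digit dialling: take a listed number and
# randomise its final digit(s), so unlisted numbers in the same range
# can also be reached. Seed numbers here are fictional.

import random

def rdd_variant(seed_number, digits_to_randomise=1):
    """Replace the last n digits of a seed number with random digits."""
    kept = seed_number[:-digits_to_randomise]
    tail = "".join(random.choice("0123456789") for _ in range(digits_to_randomise))
    return kept + tail

seeds = ["02079460000", "01614960000"]  # fictional UK-style landline numbers
sample_frame = [rdd_variant(s) for s in seeds for _ in range(3)]
print(sample_frame)
```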

Survation’s standard national poll uses online panels. When the geographic area being covered is smaller (NUTS 2 regions [1]), it typically uses telephone interviews. National polls are weighted by age, sex, region, income and education (with targets derived from 2011 census data) and, in the case of political polls, by the vote in the last relevant election.

Constituency polling is not weighted by previous voting behaviour because such polls are affected by imperfect recall: people decline to say how they voted previously, and voters move between constituencies. Survation has used these reasons to argue against weighting constituency polls by past vote. But imperfect recall and refusal to answer must also cause issues with weighting at a national level, as approximately the same numbers of people are polled in each case.
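
For illustration, national past-vote weighting amounts to something like the following; the recalled shares are invented and the 2010 shares approximate:

```python
# A minimal sketch of past-vote weighting: scale respondents so that
# recalled 2010 vote matches the actual 2010 result. The recalled
# shares are invented; the 2010 shares are approximate GB figures.

actual_2010 = {"Con": 0.36, "Lab": 0.29, "LD": 0.23, "Other": 0.12}
recalled = {"Con": 0.33, "Lab": 0.34, "LD": 0.18, "Other": 0.15}

weights = {party: actual_2010[party] / recalled[party] for party in actual_2010}

for party, w in weights.items():
    print(f"{party}: weight = {w:.2f}")
# The catch: if recall is systematically biased rather than just noisy,
# these weights correct the sample towards the wrong target.
```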

ComRes (pollster for The Independent, Independent on Sunday, The Daily Mail and ITV News)

ComRes uses either telephone or online interviewing and runs both methods concurrently. It will therefore be interesting to evaluate how the two methods differ in their national-level responses, which should provide more accurate information about the nature of recruitment bias in each approach.

When using telephone interviewing, ComRes sources numbers from the BT database and randomises the final digit to ensure that both listed and unlisted numbers are sampled. The data is then weighted to be demographically representative of all adults; ComRes uses the National Readership Survey, an annual random-probability survey of 34,000 face-to-face interviews, to inform the weightings. It also weights by past-vote recall. The most recent available data, from an opinion poll for the Daily Mail carried out on February 20-23, suggests this method struggles to capture voters aged 18-24.

Using an online panel, ComRes has also looked at 40 of the most marginal Labour-Conservative seats (for ITV News), with each constituency represented equally in the sample and results weighted to be representative of all adults across the 40 constituencies as a whole.

Opinium (pollster for The Observer)

Opinium produces fortnightly opinion polls for The Observer using an online panel. Panellists supply registration data when they join, and a sample of approximately 1,400 is drawn on the basis of this data to take part in each poll. The demographic weightings are based on published ONS data.

Opinium also produces full data tables detailing its results (both weighted and unweighted). In the last opinion poll published, the sample over-represented the 25-34 and 35-54 age ranges and under-represented the 18-24 and 55+ ranges. The under-representation of older voters in online samples is a problem also faced by other polling organisations such as TNS, YouGov and Populus.

ICM (pollster for The Guardian)

ICM uses telephone interviews, with the associated problems noted earlier. Its data tables explicitly note that it samples both landline and mobile telephone numbers, which helps to reduce the bias from excluding those without a landline. Targets for weightings are generated from the same source as ComRes uses (the annual National Readership Survey).

Lord Ashcroft

Lord Ashcroft commissions a series of telephone polls from other pollsters at both national and constituency level. Ashcroft states that all of the polling companies he uses are members of the BPC, but does not state which company undertook the fieldwork in individual polls.

Unlike the other polls discussed here, Ashcroft’s information on sample composition and recruitment is not provided in the data tables; it appears only in the summary documents. In the national opinion poll (conducted between 20 and 22 February) 1,000 adults were interviewed by telephone, half on landlines and half on mobile phones. Results were weighted to be representative of all adults in Britain (though the source of the targets is not given: ONS or the National Readership Survey?). Results were then weighted by recalled past vote at the last general election and stated likelihood to turn out at the next.

Ashcroft’s constituency-level polls (here is an example) report very basic information about when the polls were undertaken and how many people were polled in each constituency. But they don’t provide any information on whether the telephone numbers used were all landlines, or a mix of landlines and mobiles (which are more difficult to target geographically).

What to consider the next time you see a surprising poll

Overall, all of the above pollsters report their full data tables, both weighted and unweighted. However, the process by which they arrive at their weighting schemes could be explained better. Voting recall is a particular concern, as there is a danger of too much emphasis being placed on previous voting behaviour. At constituency level the weightings are especially problematic, because there is much more movement of voters between constituencies than there is nationally. So it is particularly important to know how current and reliable the information used to derive constituency-level weightings is.
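
For a sense of what such a weighting scheme involves, here is a toy version of raking (iterative proportional fitting), a standard way of balancing several demographic targets at once; whether any particular pollster uses exactly this method is an assumption, and all figures are invented.

```python
import random

# Toy raking (iterative proportional fitting): repeatedly rescale
# weights so the sample matches each target margin in turn.

random.seed(0)
targets = {
    "age": {"18-44": 0.45, "45+": 0.55},
    "sex": {"M": 0.49, "F": 0.51},
}
respondents = [
    {"age": random.choice(["18-44", "45+"]),
     "sex": random.choice(["M", "F"]),
     "w": 1.0}
    for _ in range(200)
]

for _ in range(20):  # a few passes are normally enough to converge
    for dim, target in targets.items():
        totals = {}
        for r in respondents:
            totals[r[dim]] = totals.get(r[dim], 0.0) + r["w"]
        grand = sum(totals.values())
        for r in respondents:
            r["w"] *= target[r[dim]] / (totals[r[dim]] / grand)

# Check: the weighted margins now match the targets.
for dim in targets:
    shares = {}
    for r in respondents:
        shares[r[dim]] = shares.get(r[dim], 0.0) + r["w"]
    grand = sum(shares.values())
    print(dim, {k: round(v / grand, 2) for k, v in shares.items()})
```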

The major concern with telephone interviews is their omission of those with no landline telephone, fundamentally biasing the sampling frame. Online survey panels are self-selecting, with the associated problems of selection bias; furthermore, the polling organisations do not report, within the results of each poll, how they recruit or advertise for online panellists. Correcting for the under-sampling of certain demographics caused by problems with the sampling frame can mean giving relatively high weights to some respondents, which, as noted above, shrinks the effective sample size.

Many of the polls are reporting the national margin between Labour and the Conservatives to be within the margin of error, amplifying the consequences of biased samples. On the evidence of the 2010 election, the national polls appear to be fundamentally sound in the vote shares they give to Labour and the Conservatives. However, determining how this vote share is split across different constituencies proved beyond the pollsters at the last election.
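
As a reminder of the arithmetic behind that margin, under the textbook assumption of a simple random sample:

```python
import math

# Textbook margin of error for a single party's share, assuming a
# simple random sample; weighting and design effects in real polls
# make the true margin wider.

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(0.33, 1000)  # a party on ~33% in a poll of 1,000
print(f"+/- {100 * moe:.1f} points")  # roughly +/- 2.9 points
# The margin on the gap between two parties is larger still,
# roughly double the single-share figure.
```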

The Liberal Democrats managed just 8.8% of seats with a vote share of just over 23% at the last general election, so individual constituencies matter. This time around, the effect of the smaller parties changing voter behaviour at a local level is still unknown. Constituency polls could be providing unreliable information to those looking to vote tactically, and this needs to be taken seriously by those commissioning the polls.

These caveats need to be appreciated by everyone keeping an eye on the election in the coming weeks. Inevitably, between now and May 7, pundits and politicians are going to resemble a group of cats on a hot tin roof with the release of every new poll. Just wait until the leaders’ TV debates begin.


Footnotes

  • 1. NUTS 1 regions are European Parliament constituencies; NUTS 3 regions are approximately county council areas; NUTS 2 regions fall between the two in terms of geographic coverage.
  • 2. Northern Ireland is generally excluded from the pollsters’ surveys.

