
In the US presidential election, the final poll of polls compiled by Real Clear Politics predicted that Hillary Clinton would win 46.8% of the popular vote and Donald Trump 43.6%. In the end, Clinton won 47.7% and her rival won 47.5%. Her narrow lead in the popular vote was reversed in the electoral college, where she won 228 votes to Trump’s 279 (figures exclude New Hampshire, Arizona and Michigan). So the last-minute polls were accurate in predicting Clinton’s vote but underestimated the Trump vote by around four percentage points. What went wrong?

The failure of last-minute polls in Britain to call the 2015 general election accurately, and then the Brexit referendum in June 2016, provides some insights for the US presidential election. The 2015 failure prompted the British Polling Council to set up an inquiry into what went wrong. Four possible explanations outlined in its preliminary report, published in January 2016, may also be relevant to the US presidential election: a late swing, sampling problems, herding behaviour and mis- or “over-reporting”.

Late swing
Late swing refers to the possibility that some voters opted for Trump rather than Clinton at the last minute, but that this shift was not captured by polls that were in the field before it happened.

Two of the three polls published on election day put Trump in the lead. The LA Times/USC tracking poll gave him a lead of 3%, and the IBD/TIPP tracking poll gave him a lead of 2%. In contrast, in the 21 polls published on the day before the election, Clinton had an average lead of just over 3%, calculated from the data on Real Clear Politics. This suggests that there may have been a last-minute swing to Trump.
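
The calculation behind such an average is straightforward. Here is a minimal sketch in Python, using invented margins rather than the actual Real Clear Politics figures:

```python
# Hypothetical final-day poll margins (Clinton lead in percentage points);
# illustrative numbers only, not the actual Real Clear Politics data.
final_day_margins = [4, 3, 5, 2, 4, 1, 3, 6, 2, 3]

average_lead = sum(final_day_margins) / len(final_day_margins)
print(f"Average Clinton lead: {average_lead:.1f} points")

# A late swing would show up as a gap between this average and the
# margins of polls still in the field on election day itself.
```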

Sampling problems
Sampling issues relate to whether or not the surveys were actually representative of the wider electorate. The issue of sampling bias is complicated and it comes down to the difference between what are called “random samples” and “quota samples”. Most internet surveys use a form of quota sampling in which polling agencies try to replicate the characteristics of the US electorate by including certain numbers of blacks, women, young people and so on. This approach can fail to include hard-to-reach groups such as older people not connected to the internet or those living in rural areas.

Random samples pre-select individuals using probability theory and so are more likely to be accurate, since everyone in the electorate has a chance, albeit very small, of being chosen for interview. But random sample surveys cost time and money, so they are not a feasible method for conducting last-minute polls.
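
The contrast between the two approaches can be illustrated with a small simulation. In this sketch the population composition, the 20% “hard to reach” share and the candidate preferences are all invented assumptions, chosen only to show how excluding a hard-to-reach group shifts the estimate:

```python
import random

random.seed(1)

# Illustrative electorate: 20% are "hard to reach" (offline, rural, etc.),
# and in this invented example they lean more heavily towards Trump.
population = (
    [{"hard_to_reach": False, "vote": "Trump" if random.random() < 0.45 else "Clinton"}
     for _ in range(80_000)]
    + [{"hard_to_reach": True, "vote": "Trump" if random.random() < 0.60 else "Clinton"}
       for _ in range(20_000)]
)

def trump_share(sample):
    return sum(p["vote"] == "Trump" for p in sample) / len(sample)

# Random sample: every member of the electorate has a chance of selection.
random_sample = random.sample(population, 1_000)

# Crude quota-style sample: hard-to-reach people never make it in.
reachable = [p for p in population if not p["hard_to_reach"]]
quota_sample = random.sample(reachable, 1_000)

print(f"True Trump share:        {trump_share(population):.3f}")
print(f"Random-sample estimate:  {trump_share(random_sample):.3f}")
print(f"Reachable-only estimate: {trump_share(quota_sample):.3f}")
```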

Telephone polls do use a system of random digit dialling done by computer to identify potential respondents. Since this is a random sampling method it should, on the face of it, be more accurate than quota samples. The problem is that pollsters have to call many people before they can get someone willing to talk to them. Response rates can fall below 10%, invalidating the advantage of this method because those willing to talk are not representative of Americans in general. Overall, it is possible the final polls may have excluded Trump supporters if many of them were in hard-to-reach groups.
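
A similar sketch shows how a low response rate can undo the benefit of random digit dialling. The response probabilities below are assumptions for illustration; the point is only that if willingness to respond differs between the two camps, the respondent pool stops being representative:

```python
import random

random.seed(2)

# Invented electorate split: 48% Trump, 52% Clinton.
voters = ["Trump"] * 48_000 + ["Clinton"] * 52_000

# Assumed response probabilities: if Trump supporters are slightly less
# willing to take the call, a low overall response rate skews the sample.
response_prob = {"Trump": 0.08, "Clinton": 0.10}

dialled = random.sample(voters, 20_000)   # stand-in for random digit dialling
respondents = [v for v in dialled if random.random() < response_prob[v]]

response_rate = len(respondents) / len(dialled)
trump_estimate = respondents.count("Trump") / len(respondents)

print(f"Response rate:        {response_rate:.1%}")
print(f"Estimated Trump vote: {trump_estimate:.1%} (true value 48.0%)")
```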

Herding behaviour
Herding behaviour occurs when a survey agency appears to be out of line with its competitors and so readjusts its weighting schemes to bring its results back into line.

Since the great majority of the polls before the election gave Clinton a lead, a form of “group think” might have occurred, with pollsters adjusting their results towards what appeared to be the norm. That said, there were a number of outliers, such as the LA Times/USC tracking poll, which regularly put Trump ahead.

All pollsters use weighting schemes to compensate for biases in the sampling, and these vary between agencies. It’s possible to analyse the raw data collected by different agencies to detect herding, but this takes time and some agencies will not release their raw data for analysis. So while we cannot be sure that herding occurred, it may well have been a problem.
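
One common heuristic for spotting herding, sketched below with hypothetical figures, is to compare the spread of published estimates with the spread that sampling error alone would produce; published polls clustered much more tightly than that are suspicious:

```python
# Hypothetical final-week polls: (Clinton share, sample size).
# Genuine raw data would be needed for a real herding check.
polls = [(0.47, 900), (0.47, 1100), (0.48, 1000), (0.47, 800),
         (0.48, 1200), (0.47, 950), (0.47, 1050), (0.48, 900)]

shares = [share for share, _ in polls]
mean_share = sum(shares) / len(shares)

# Observed spread of the published estimates.
observed_var = sum((s - mean_share) ** 2 for s in shares) / (len(shares) - 1)

# Spread expected from sampling error alone under simple random sampling.
expected_var = sum(mean_share * (1 - mean_share) / n for _, n in polls) / len(polls)

print(f"Observed/expected variance ratio: {observed_var / expected_var:.2f}")
# A ratio well below 1 means the polls are more tightly clustered than
# chance alone can explain - one possible sign of herding.
```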

Misreporting
Finally, there is misreporting or “over-reporting”: respondents telling the interviewers one thing and then doing another on election day. This phenomenon has long been recognised and occurs for a number of reasons.

First, there is what’s called a “social desirability bias”, which causes respondents to lie about their voting turnout because they want to appear to be good citizens in the eyes of the interviewer. Second, there is an argument known as the “spiral of silence”, which suggests that voters will mislead interviewers about the party they support if that party is unpopular at the time. In Britain, this idea gave rise to the concept of “shy Tories” in the 1992 general election, when pre-election polls underestimated support for the Conservatives.

In the presidential election campaign, many people were vociferous supporters of Trump and quite willing to tell pollsters so, which makes it strange to describe them as “shy Trumpers”. But The New York Times exit poll revealed that some 29% of Latinos supported Trump, despite the fact that he had said some fairly disparaging things about them during the campaign. There could very well have been “shy Trumpers” in this group and, if so, the polls would have underestimated his support.
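
A back-of-envelope calculation shows how even modest misreporting would feed through to the published numbers. Both figures below are assumptions, not estimates of actual behaviour:

```python
# Back-of-envelope sketch of how "shy" supporters depress a poll estimate.
# Both numbers are assumptions for illustration, not measured quantities.
true_support = 0.30   # assumed true Trump support within some group
shy_fraction = 0.15   # assumed share of those supporters who misreport

# Only the non-shy portion is recorded as Trump support by the interviewer.
measured_support = true_support * (1 - shy_fraction)

print(f"True support:    {true_support:.1%}")
print(f"Poll would show: {measured_support:.1%}")
print(f"Understatement:  {true_support - measured_support:.1%} points")
```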

There is evidence in Britain to suggest that people are more likely to lie to pollsters than in the past. This may very well be happening in the US too, simply as a consequence of so many surveys being conducted by market research companies and on the internet. If individuals are “surveyed out” then, assuming they do participate, they may well whip through the answers quickly to get it over with. These unmotivated respondents may well be more likely to lie than the rest.

The fact that the polls have had a bad run in forecasting elections in Britain and the US in recent years does not, of course, mean that we should abandon polling altogether, or ban polls from being published before elections. Rather, we need to improve the methodology and try to understand more clearly the factors that are causing problems. The only alternative to polling for finding out what the public thinks is the anecdotes and hunches of commentators – and those are very unlikely to be an improvement.

  • Paul Whiteley is professor in the Department of Government, University of Essex.
  • This article was originally published on The Conversation. Read the original article.

