There are many opportunities for us to make wrong decisions in life – whether it be drawing incorrect conclusions from biased data, pursuing a course of action based on flawed assumptions, or failing to spot errors in the information available to us. British pollsters are thinking hard about the mistakes they might have made in trying to predict the outcome of the 2015 UK general election.

Of all the voting intention surveys conducted in the run-up to polling day, not one foresaw that the Conservative Party would return to power with a small overall majority. 

And it is fair to say that we at Significance did not expect the polls to get it so wrong. For the past few months, Timothy Martyn Hill has been painstakingly collecting and analysing a huge number of election predictions dating as far back as 2010. The result of all his work can be found in our June issue, out this week. What we thought would be a story of how disparate sources of prediction eventually converged on a reliable forecast is instead a story of how wrong the polls, modellers and betting markets have been over the life of the last parliament.

It is likely that any mistakes in the opinion polling process can be identified and rectified in time for the next election, as they were in 1992 – the last time the polls experienced such a 'debacle'. But as David Spiegelhalter noted in a recent blog post: 'The miserable performance in the 2015 election should not be forgotten.'

He’s right, of course. All of us – pollsters included – should remember and learn from our mistakes. But if we accept that mistakes are an inevitable part of life, we can at least take some comfort from the knowledge that a statistical mindset can help to minimise the errors we make.

With that in mind, our June issue features three articles on the 'reproducibility crisis in science' – that is, the failure to reproduce or replicate scientific findings – and how statistics can help resolve this particular issue. Firstly, Roger Peng makes the case that investment in statistics education is vital to avoid costly mistakes in the design and analysis of research. Andrew Gelman follows up with some support for psychologists, who have long wrestled with the problem of replicability. And, closing out this section, Pfizer statisticians Katrina Gore and Phil Stanley describe a new tool that aims to help preclinical scientists improve the statistical rigour of their experiments.

Finally, I would like to add a note of thanks to all those who entered our Young Statisticians Writing Competition. The winning article will be published in our October edition. If you forgot to enter, but still have the urge to write, our friends at the Institute of Mathematics have a similar competition under way. You can find details here.

 
