
Some readers may find it ironic that, less than two months after the UK voted to leave the European Union, our August 2016 cover story asks whether the human brain adheres to the rules of Bayesian inference. As discussed in this article, Bayesian inference is seen as “ideally rational in the way it integrates different sources of information to arrive at an output”. But many would regard the decision of the British electorate as anything but rational.

After all, the weight of evidence – and expertise – seemed to be against Brexit. Amid the warnings that a vote to leave would lead to gloomy economic prospects, one might argue that a rational mind surely would have absorbed this information, updated its prior beliefs about the risks of leaving the EU, and ultimately voted to remain.

That line of thinking assumes that human judgements and decision-making processes do stick closely to Bayesian principles. But there is plenty of evidence to suggest they do not. Or perhaps, in this particular case, prior beliefs were so weighted against the EU that an almost impossible amount of evidence would have been required to convince voters to change their minds. For more on these sorts of arguments – both for and against the Bayesian brain – read our cover story (for free, until the end of this month).
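To see why a strong prior can be so resistant to new evidence, consider a toy Bayes' rule calculation (our own illustration, with made-up numbers, not drawn from the cover story): a voter who starts out 95% convinced that leaving is the better option, and then encounters an economic warning that is four times more likely to appear if remaining is in fact better, still ends up more than 80% convinced.

    # Hypothetical numbers, purely for illustration
    prior_leave = 0.95                 # assumed prior belief that leaving is better
    prior_remain = 1 - prior_leave

    p_warning_if_leave_better = 0.2    # assumed likelihood of the warning if leaving is better
    p_warning_if_remain_better = 0.8   # assumed likelihood of the warning if remaining is better

    # Bayes' rule: posterior proportional to prior times likelihood
    posterior_leave = (prior_leave * p_warning_if_leave_better) / (
        prior_leave * p_warning_if_leave_better
        + prior_remain * p_warning_if_remain_better
    )

    print(round(posterior_leave, 3))   # about 0.826, the prior still dominates

On these made-up figures, a single piece of contrary evidence barely dents the prior; it would take a string of similarly persuasive, independent signals to push the posterior below 50%.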

Though 16 million people might disagree with the outcome of the Brexit vote, there is no suggestion that the vote itself was anything but free and fair. This stands in contrast to Russia, where accusations of vote-rigging and fraud have been made in several recent elections. With voters heading to the polls again in September, researchers Dmitry Kobak, Sergey Shpilkin and Maxim S. Pshenichnikov investigate and explain several anomalies in the Russian election data set.

There is, of course, another big election coming up – the US presidential election in November. We are on the lookout for interesting data-based stories and statistical perspectives on the campaign, the vote, and the aftermath, so please pitch your ideas to significance@rss.org.uk.

Elsewhere in this issue, we have a statistical detective story in which two climate researchers reopen a cold case and investigate the deaths of common guillemots in the Barents Sea during the winter of 1986–7. William P. Skorupski and Howard Wainer discuss the evidentiary support for changes that were made to breast cancer screening recommendations. And, ahead of the start of the new NFL season, Harvard statisticians Harrison W. Chase and Mark E. Glickman build a model to predict whether a team is likely to sack its coach.
