Hundreds of people have been killed since the start of the year as a result of earthquakes – including those who died this week following a 6.2-magnitude quake in Italy. With all the data we have on these natural disasters, why can we not reliably predict their occurrence?


An earthquake-damaged monument in Kathmandu, Nepal. Photo credit: US Geological Survey

Robert Matthews of Aston University writes: It’s certainly not for lack of data or effort: attempts to predict earthquakes go back centuries. The problem lies in extracting from this data the tell-tale signs – “precursors” – that reliably foretell when, where and how strong an earthquake will be.

Earthquakes are triggered when two slabs of rock meeting at a fault-line can no longer resist the forces acting on them, and slip.

Much effort has been devoted to identifying symptoms of the conditions likely to trigger earthquakes. Over the centuries everything from changes in groundwater to seepages of naturally occurring radioactive gas has been put forward as a potential precursor. To date, only one has been found to be reliable: the quake itself. The rupturing rock emits so-called primary waves (P-waves), which travel faster than the more destructive secondary waves (S-waves). The arrival of P-waves thus gives warning of impending disaster – albeit only by seconds. Even so, this has long been used to protect Japan’s bullet train, by cutting power and reducing the risk of derailment.

No reliable longer-term precursor has ever been found. That does not rule out such a possibility. However, the theory underpinning any putative prediction method does not give grounds for optimism.

In essence, prediction of an earthquake is like a medical diagnosis: it should give insights reliable enough to take appropriate action. Like a medical test, it may occasionally “cry wolf” or miss genuine events, so minimising false positive and false negative rates is clearly important if people are to trust the prediction enough to, say, order an evacuation. But the very nature of earthquakes makes it unlikely that any prediction system will be reliable enough to overcome the so-called base rate problem.

This is notorious for its impact on the diagnosis of medical conditions, and has its origins in Bayes’ theorem of conditional probability. The theorem shows that unless the underlying prevalence – the “base rate” – of the condition exceeds the false positive rate of the diagnostic test, even a positive result is more likely to be wrong than right. So, for example, someone who tests positive for a condition affecting 1 in 100 people is still most likely free of the condition, unless the false positive rate of the test is also less than 1%. That is because the weight of evidence provided by the test struggles to compensate for the sheer rarity of the condition.
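To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The 1-in-100 prevalence is the example given above; the 99% sensitivity and the two false positive rates (5% and 0.5%) are assumed purely for illustration.

```python
# Bayes' theorem for the base rate problem: P(condition | positive test).
# The 1-in-100 prevalence comes from the example in the text; the
# sensitivity and false positive rates below are illustrative assumptions.

def posterior_given_positive(base_rate, sensitivity, false_positive_rate):
    """Probability of having the condition given a positive test result."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# An assumed 5% false positive rate: most positive results are wrong.
print(posterior_given_positive(0.01, 0.99, 0.05))    # ~0.17

# Only when the false positive rate falls below the 1% base rate does a
# positive result become more likely right than wrong.
print(posterior_given_positive(0.01, 0.99, 0.005))   # ~0.67
```

Even with near-perfect sensitivity, the answer is dominated by the ratio of the base rate to the false positive rate.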

The same reasoning casts a shadow over the idea of reliable earthquake prediction. That is because, even in high-risk areas like Japan, the base rate of devastating earthquakes is mercifully low. That, in turn, means that any reliable quake precursor must have a correspondingly small false positive rate. Given the processes that trigger earthquakes, the possibility of finding such a precursor seems remote indeed.
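To see how the same arithmetic plays out for quakes, here is a second sketch. The daily quake probability, detection rate and false alarm rate used below are entirely hypothetical, chosen only to illustrate the scale of the problem.

```python
# The same Bayes calculation with hypothetical earthquake numbers: assume
# a devastating quake strikes a monitored region on about 1 day in 10,000,
# and a precursor fires before 99% of quakes but also on 1% of quiet days.

base_rate = 1e-4         # assumed daily probability of a devastating quake
detection_rate = 0.99    # assumed chance the precursor fires before a quake
false_alarm_rate = 0.01  # assumed chance it fires on an ordinary day

hits = base_rate * detection_rate
false_alarms = (1 - base_rate) * false_alarm_rate
p_quake_given_alarm = hits / (hits + false_alarms)

print(f"P(quake | alarm) = {p_quake_given_alarm:.3f}")  # ~0.010
```

On these made-up numbers, roughly 99 alarms in every 100 would be false – far too unreliable a basis for ordering an evacuation, despite the seemingly impressive 1% false positive rate.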

Real-life experience backs this conclusion, and has led most researchers to focus instead on earthquake forecasting: estimating the probability of quakes striking a region over a specific period. Forecasting lacks the sci-fi glamour of scientists issuing calls to evacuate a city to avoid certain disaster. It does, however, have a proven track record of saving lives, by helping to identify at-risk areas and to design buildings and evacuation plans accordingly. In February 2010, an extremely violent earthquake of magnitude 8.8 struck Chile, yet fewer than 600 people lost their lives, in part because of quake preparedness. A few weeks earlier, a far weaker magnitude 7.0 earthquake had torn through the densely packed shanty towns of Haiti and claimed well over 100,000 lives.


What is 'Ask a statistician'?

A new regular column appearing in print and online in which we invite statisticians to answer burning questions put to them by members of the public. In our October issue, we ask:

According to the UK’s Department for Education, “missing the equivalent of just one week a year from school can mean a child is significantly less likely to achieve good GCSE grades”. Can this really be true?

 
