Illuminating earthquakes
Artificial intelligence is helping to identify previously undetected seismicity. Greg Beroza discusses how this approach can aid earthquake monitoring and why it is essential to the field of seismology
Earthquakes occur almost continuously, but most are too small to notice. For decades, geoscientists have used computers to analyse digital seismic signals, yet many of the smallest earthquakes go undetected. While these tiny seismic events themselves pose little threat of damage, their waveforms hold information that can aid monitoring of larger, more destructive earthquakes. By locating and analysing microseismicity, we can illuminate the three-dimensional structure of fault systems and thereby better understand and anticipate larger fault movements.
Expanding catalogues
Greg Beroza, Professor of Geophysics at Stanford University, USA, explains that because earthquakes can occur at any time, we must continuously scan seismic data for wave arrivals that might be from earthquakes – a job that was initially done by seismic analysts.
“Once seismic data became digital, the work of seismic analysts was augmented by simple computer algorithms that search for sudden increases in the strength of ground shaking. The measurements still required review by expert analysts.”
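The “simple algorithms” Greg describes are typically energy-ratio triggers such as the classic STA/LTA (short-term average over long-term average) detector. The sketch below is a minimal illustration of that idea on a synthetic trace; the window lengths, threshold, and noise levels are invented for the example, not tuned values from any real monitoring network.

```python
import numpy as np

def sta_lta(signal, sta_len, lta_len):
    """Return the ratio of short-window to long-window mean signal energy.
    A sudden burst of shaking drives the short-term average up faster
    than the long-term background, so the ratio spikes."""
    energy = signal ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    return sta / np.maximum(lta, 1e-12)  # guard against division by zero

# Synthetic trace: low-level background noise with one stronger burst (the "event")
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(3000)
trace[1500:1550] += rng.standard_normal(50)  # burst with ~10x the noise amplitude

ratio = sta_lta(trace, sta_len=20, lta_len=1000)
triggered = ratio > 5.0
print(triggered[1500:1550].any())  # True: the burst trips the trigger
print(triggered[:1400].any())      # False: quiet background stays below threshold
```

This kind of fixed-threshold trigger is exactly what misses the smallest events buried in noise, which is the gap the deep-learning detectors described below are filling.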
Now seismologists are turning to artificial intelligence (AI) and machine learning (ML) to identify vastly greater numbers of earthquakes in the waveform data. “We have replaced the simple algorithms with more complex ones that do a much better job of discriminating earthquake signals from the background noise. How much better? Well enough that we often find ten times as many small earthquakes as we did previously.”
Seismology is particularly well suited to AI and ML applications because of the enormous volumes of seismic data and our existing relatively good understanding of the science that underpins these data.
“Seismology deals with rich signals whose properties are governed by well-understood physical principles. Machine learning, and particularly deep learning, is ‘data hungry’, which means that it needs to be exposed to many, many, many examples of the things it is trying to understand and classify. Computer vision, a form of AI that enables computers to recognise objects in images and videos, for example, only took off when standardised, accurately labelled data sets were created for ML models to be trained on. We have a tremendous advantage in seismology in that expert analysts have compiled and vetted tens of millions of ‘labels’ for our data. Labels in this context refer to known examples of phenomena that we would like to quantify, such as the arrival times of seismic waves based on the wiggles in a seismogram. Having these data sets makes it much easier to train effective AI models.”
Greg estimates that ML techniques are uncovering five to ten times as many earthquakes as are typically recorded in standard catalogues – sometimes even more. For example, his team have uncovered hundreds of thousands of earthquakes thought to be induced by fluid injection in the US state of Oklahoma. Such greatly expanded catalogues offer the potential to transform our understanding of faults and earthquake processes.
“These small earthquakes illuminate the faults that they occur on to an unprecedented level of detail. The complex geometry of these faults, and the sequencing and interaction of the earthquakes that occur on them, are areas that will see rapid progress in the coming years, and they are sure to teach us new things about earthquakes.”
Varied applications
Greg and his team are currently working on a variety of projects that use artificial intelligence applications for seismic data. For example, they are using graph neural networks, an ML approach that is designed to work with heterogeneous data, to aid earthquake phase association – the fundamental task of grouping the arrival times of seismic phases, such as P- and S-waves, measured at different seismic stations, so that they can be traced back to a common earthquake source.
“It’s sort of like connecting the dots. It sounds simple, but with all the challenges of real data, it’s surprisingly complex.”
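To make the “connecting the dots” task concrete, the toy associator below grid-searches over candidate epicentres and origin times and associates the picks whose observed arrival matches the predicted P arrival within a tolerance. This brute-force sketch only illustrates the association problem itself – it is not the graph-neural-network method Greg's team uses – and the wave speed, station geometry, and pick times are all invented for the example.

```python
import numpy as np
from itertools import product

V_P = 6.0  # assumed uniform P-wave speed, km/s (illustrative)

def associate(stations, picks, xy_grid, t0_grid, tol=0.5):
    """picks: list of (station_index, arrival_time_s).
    Return the candidate (x, y, t0) that explains the most picks,
    and the indices of the picks it explains."""
    best_candidate, best_picks = None, []
    for (x, y), t0 in product(xy_grid, t0_grid):
        # Predicted P arrival at every station for this candidate source
        predicted = t0 + np.hypot(stations[:, 0] - x, stations[:, 1] - y) / V_P
        explained = [i for i, (s, t) in enumerate(picks)
                     if abs(t - predicted[s]) < tol]
        if len(explained) > len(best_picks):
            best_candidate, best_picks = (x, y, t0), explained
    return best_candidate, best_picks

# Four stations at the corners of a 30 km square, one true event, one bad pick
stations = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
true_xy, true_t0 = np.array([10.0, 12.0]), 5.0
picks = [(s, true_t0 + np.hypot(*(stations[s] - true_xy)) / V_P) for s in range(4)]
picks.append((0, 20.0))  # a spurious pick from noise or another event

xy_grid = [(x, y) for x in range(0, 31, 2) for y in range(0, 31, 2)]
candidate, explained = associate(stations, picks, xy_grid, np.arange(0.0, 10.5, 0.5))
print(explained)  # [0, 1, 2, 3] – the spurious pick is left out
```

With real data – thousands of picks, overlapping events, false detections, and a heterogeneous velocity structure – this combinatorial search becomes intractable, which is one motivation for learned approaches to association.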
In another project, the team are trying to extend approaches for measuring the arrival times of seismic phases to distances beyond 100 km from the earthquake source. “The shapes of the wiggles at those distances are strongly distorted by the interaction of the waves with Earth’s heterogeneous upper mantle, so they’re more difficult to interpret – for seismologists and algorithms alike, which makes for a nice challenge.”
And Greg collaborates with teams around the world to apply these methods to a variety of complex problems. “French volcanologists, for example, led by Dr Lise Retailleau at the Institut de Physique du Globe de Paris, France, and others, are using our methods for monitoring dangerous volcanoes in Mayotte, in the Horn of Africa, and in the Lesser Antilles. Their goal is to find tell-tale signs of seismic unrest before future eruptions.”
Forecasting
Understandably, current analyses of seismic data are retrospective. The key question is whether AI applications will ever be able to help us forecast earthquakes. “There are many facets to it. There are some indications that AI will be useful for improving probabilistic forecasting. For example, in Oklahoma we’ve shown (retrospectively) that the improved monitoring and additional data that our improved catalogues provide could have substantially increased the relative forecast probability of magnitude four and larger induced earthquakes by about a factor of ten relative to the background rate. The absolute probabilities are still low, however, so it represents modest progress.
“Forecasting of simulated earthquakes in the laboratory seems to be quite successful; however, the Earth is a tougher challenge because it’s a more complex, heterogeneous, and interactive system than idealized laboratory experiments. The gains in forecasting skill that I have heard about to date based on ML are marginal. We’re just getting started in attempts to do this, however, and I think it’s an exciting research direction that will be pursued aggressively. Even so, there are good reasons to believe that earthquakes are intrinsically difficult to forecast – no matter what methods are applied to the task.”
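The factor-of-ten point can be illustrated with back-of-envelope arithmetic: under a Poisson occurrence model, a ten-fold increase in the estimated event rate raises the probability of at least one event almost ten-fold, while the absolute probability can remain small. The rates below are invented for the illustration, not figures from the Oklahoma study.

```python
import math

def prob_at_least_one(rate_per_week, weeks=1.0):
    """Poisson model: P(at least one event) = 1 - exp(-rate * duration)."""
    return 1.0 - math.exp(-rate_per_week * weeks)

background = prob_at_least_one(0.001)  # hypothetical background rate
elevated = prob_at_least_one(0.010)    # ten-fold elevated rate
print(f"{background:.4f} -> {elevated:.4f}")  # 0.0010 -> 0.0100
```

The relative gain is large, but both probabilities are small in absolute terms – which is exactly the “modest progress” caveat Greg raises.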
A data science future
Greg suggests that data science methods, which include AI and ML, are critical to the future of the geosciences because these methods are ideally suited to developing new insights into the types of complex systems that are common in nature. And as our ability to monitor Earth processes rapidly grows, so too do our data sets.
“With advances in technology, we are reaching a point where we can’t directly ‘look’ at all the data that we gather. It’s even getting difficult to move the data around. Seismology needs the methods of data science to extract information reliably from these important new data sources. Without them, we’d be overwhelmed. Other fields within the geosciences are sure to follow suit. In some cases, such as the analysis of surface topography and images from space, they are there already.”
Greg specifically mentions fibre-optic seismology, or Distributed Acoustic Sensing (DAS), as an emerging technology that is having a huge impact in seismology. Here, new or existing fibre-optic cables, such as those used for the internet, television and telephone services, are repurposed to measure ground motions. Seismologists can monitor changes in the way that pulses of laser light travel within the cables as the fibre experiences tension and compression induced by ground motions.
“DAS allows measurements of wavefields in challenging environments, such as under cities and on the ocean floor, and provides very dense measurements of the seismic wavefield in all environments, which will enable new approaches.”
These inexpensive devices can act as densely spaced seismic arrays. They enable the collection of massively greater volumes of data that will require AI approaches to decipher. For that reason, Greg is very excited by the application of AI to seismology. “Although it might seem faddish now, it’s having a real impact on the field, and I am convinced that we’re still in the early days of its development.”
Greg Beroza
Gregory Beroza is Professor of Geophysics at Stanford University, California, USA, and Co-Director of the Southern California Earthquake Center.
Interview by Amy Whitchurch
Beroza, G. Illuminating earthquakes. Geoscientist 32 (3), 26-28, 2022; https://doi.org/10.1144/geosci2022-025