
Taking Yellowstone seismology to the classroom for some “deep learning”

September 19, 2021

Locating earthquakes in Yellowstone is a time-intensive process that requires the trained eye and extensive experience of a human analyst. But advances in computer algorithms, known as “machine learning” tools, hold promise for automatically locating earthquakes that might otherwise be overlooked, and perhaps for ushering in a new age of seismology!

Yellowstone Caldera Chronicles is a weekly column written by scientists and collaborators of the Yellowstone Volcano Observatory. This week's contribution is from Keith Koper, Director of the University of Utah Seismograph Stations and Professor at the University of Utah Department of Geology and Geophysics, and Alysha Armstrong, graduate student at the University of Utah Department of Geology and Geophysics.

Seismogram from station YTP in Yellowstone National Park on July 15-16, 2021, showing earthquakes from the swarm beneath Yellowstone Lake that began late on July 15. Each row represents 30 minutes of seismic data. Vertical red lines indicate that the amplitude of the signal was truncated, or "clipped," to avoid obscuring the signal from events above it (earlier in time) or below it (later in time).

While the automated monitoring system currently in place for detecting and processing earthquakes in Yellowstone works quite well most of the time, its solutions need to be reviewed and refined by a seismic analyst. This means that the larger earthquakes—generally over M1—get most of the attention, and smaller earthquakes, which are harder to locate, are not always processed. The current system can also struggle in situations like earthquake swarms, where there is a lot of seismicity close together in space and time. Ideally, there would be an automated system that can detect earthquakes accurately, including those that are small and occur close together in time, and process them as expertly as a seismic analyst would. Then, only the most important events would need to be manually reviewed. But the knowledge of a seismic analyst is hard to write down as a concrete set of rules that a computer could follow and that would work well in a wide variety of situations. All hope is not lost, however. A special set of tools known as machine learning can help with this problem.

Machine learning refers to computer algorithms that try to learn the statistics of a dataset of interest to answer a question about similar data that have not yet been seen. In many cases, the data that an algorithm is given are a set of features that a human thinks are important for describing the dataset. As an example, to describe a person we might rely on data that include features like height, weight, age, and hair color. The machine learning algorithm then uses these features to try to solve a “regression” problem (producing a real-valued answer) or a “classification” problem (deciding the category that an example fits into).
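To make that concrete, here is a minimal, purely illustrative sketch of feature-based classification, assuming the scikit-learn library and two made-up features (peak amplitude and dominant frequency). It is not the observatory's actual software, just a toy version of the idea.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one seismic signal with hand-picked features:
# [peak amplitude, dominant frequency in Hz] -- hypothetical values.
X = np.array([[0.8,  5.0],
              [0.1, 12.0],
              [0.9,  4.5],
              [0.2, 15.0]])
y = np.array([1, 0, 1, 0])  # 1 = earthquake, 0 = noise (made-up labels)

clf = LogisticRegression().fit(X, y)  # learn the statistics of the features
print(clf.predict([[0.7, 6.0]]))      # classify an example not seen before
```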

Feature selection is an important step, because the algorithm can only perform as well as the data it is given. Unlike humans, most of these algorithms do not have access to more data, new experiences, or past experiences to try to learn from; they can only learn from the data and feedback we present to them. As a result, it is important that the algorithms have large datasets to draw upon. But this is also a strength! Machine learning can be a very powerful tool because it allows the statistics of a large amount of data and features to be considered—more than any human would be able to look at and make sense of on their own.

For some types of data and problems, it is not easy to find the right features. These problems can be addressed by a special type of machine learning known as “deep learning,” which does not need to be told which features are important. The “deep” in the name refers to the fact that raw input data (not a selection of features) go through several sequential levels of processing that aim to distill and then build up more and more complex representations of the data, like a human selecting the different features they find important, before arriving at an output relating to the question being asked. These types of algorithms have been shown to do very well on problems that are easy for humans to solve but hard for them to describe. For example, they can be very good at identifying characteristics in images—like whether or not a picture contains a cat.
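As a rough illustration of what those stacked levels of processing can look like, here is a minimal sketch of a “deep” model, assuming the PyTorch library; the layer sizes and the 400-sample trace are arbitrary stand-ins, not a real monitoring model.

```python
import torch
import torch.nn as nn

# Raw waveform samples go in; each layer builds a more abstract
# representation of the signal -- no hand-picked features required.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7),    # level 1: simple local patterns
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(8, 16, kernel_size=7),   # level 2: combinations of patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),           # summarize each pattern over time
    nn.Flatten(),
    nn.Linear(16, 1),                  # final score: earthquake vs. noise
)

waveform = torch.randn(1, 1, 400)      # one 400-sample trace (stand-in data)
score = model(waveform)                # raw score; above 0 leans "earthquake"
```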

Since the knowledge of a seismic analyst can also be hard to put into a set of features, maybe we can use deep learning to detect and process earthquakes in Yellowstone! To do this, we train a deep learning algorithm with huge amounts of ground motion data from Yellowstone and Utah. Training consists of giving the algorithm examples of earthquakes and non-earthquakes (noise) and asking the algorithm to decide which is which. If the computer doesn’t answer correctly, the model is updated to try to improve the results—this represents the algorithm “learning”. The training process repeats this step many, many times, and the algorithm gradually learns to distinguish real earthquakes from noise.
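Continuing the sketch above (and assuming the PyTorch model from the previous snippet, with random stand-in data in place of real labeled waveforms), the “learning” step looks roughly like this: a loss function measures how wrong the answers were, and each update nudges the model toward better ones.

```python
import torch

waveforms = torch.randn(32, 1, 400)            # 32 stand-in training traces
labels = torch.randint(0, 2, (32, 1)).float()  # 1 = earthquake, 0 = noise

loss_fn = torch.nn.BCEWithLogitsLoss()         # penalizes wrong answers
optimizer = torch.optim.Adam(model.parameters())

for step in range(100):                        # repeat many, many times
    optimizer.zero_grad()
    loss = loss_fn(model(waveforms), labels)   # how wrong were we?
    loss.backward()                            # trace errors back through the layers
    optimizer.step()                           # update the model: the "learning"
```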

We use examples from Yellowstone and Utah so the algorithm can learn from a large variety of different signals. Once the model is performing well at the task of determining if an earthquake is present, it is then tested on a similar dataset—one that was not part of the training—to make sure the algorithm did not just “memorize” the exact training data. If the model is repeatedly successful, it can then be applied to Yellowstone data it has never seen before to find earthquakes that might not have been manually reviewed by an analyst.
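In the same illustrative spirit, checking that the model did not simply “memorize” amounts to scoring it on traces it never saw during training (again with random stand-in data here):

```python
import torch

test_waveforms = torch.randn(16, 1, 400)           # held-out stand-in traces
test_labels = torch.randint(0, 2, (16, 1)).float()

model.eval()                                       # switch off training behavior
with torch.no_grad():                              # no learning while testing
    predictions = (model(test_waveforms) > 0).float()

accuracy = (predictions == test_labels).float().mean().item()
print(f"held-out accuracy: {accuracy:.2f}")        # high only if the model generalized
```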

Once earthquakes are found, they can be put into additional deep learning models that do other types of processing, like picking the exact times that earthquake phases arrive. This, in turn, can aid in locating and determining the magnitudes of smaller earthquakes—data that might not otherwise be collected with human analysis alone. With deep learning, we hope to produce a better automated system that can be used not only on real-time data, but also on existing archives of seismic data, to create a larger catalog of Yellowstone earthquakes for use in scientific studies. Machine learning holds the promise of a new age in seismology—and we hope to see that age dawn soon in Yellowstone!
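To round out the sketches, here is one hedged example of how such a detector might scan continuous or archived data, assuming the toy model above: a window slides along a long record, and any window with a high earthquake score becomes a candidate event for further processing (phase picking, location, magnitude).

```python
import torch

continuous = torch.randn(1, 1, 36000)    # stand-in for a long continuous record
window, step = 400, 100                  # samples per window, and the hop size

model.eval()
with torch.no_grad():
    for start in range(0, continuous.shape[-1] - window + 1, step):
        score = model(continuous[:, :, start:start + window])
        if score.item() > 0:             # candidate event in this window
            print(f"possible earthquake near sample {start}")
```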
