A new mathematical shortcut has cut the core calculations inside earthquake ground-shaking simulations by about 1,000 times without sacrificing accuracy.
That leap in speed could make it far easier for scientists and cities to model how strongly the ground would shake in a future earthquake, helping communities prepare for damage even though quakes themselves still cannot be predicted.
Inside a typical risk study, computers solve millions of linked wave calculations, then repeat them for each new guess.
At Stevens Institute of Technology (SIT), a math team built a shortcut that preserves the answers while dropping most of the work.
Dr. Kathrin Smetana, who drove the design, focused on the parts of the signal that risk maps actually use, and accuracy held.
Faster runs do not reveal the next quake, but they can tighten the range of realistic shaking a community plans for.
New models for earthquake simulations
On the surface, two neighborhoods may sit miles apart, yet the ground beneath them can behave very differently.
In January 2026, researchers tied to the project stressed that hidden materials can change from one city block to the next.
“You may have layers of solid rock, or you may have sand or clay,” said Smetana. When waves cross from hard rock into softer sediment, they often slow down and grow larger, raising the local threat.
To map those underground layers, scientists start with seismograms, wave records that show ground motion over time.
Computer models generate their own seismograms; Full Waveform Inversion, a process that matches simulated waves against recorded ones, then updates the subsurface guess.
“You compare the data from your computer simulation with actual data that you got from earthquakes,” said Smetana.
After enough rounds, the tuned model can reveal buried zones that steer waves toward some sites and away from others.
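The simulate-compare-update loop at the heart of Full Waveform Inversion can be sketched in a few lines. This is a deliberately tiny toy, not the study's method: real FWI solves the full wave equation over a 2D or 3D model, while here the "simulation" is just a pulse arriving at a travel time set by one unknown wave speed. All numbers and names below are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)   # seconds
distance = 12.0                    # km, hypothetical source-receiver offset
true_speed = 3.0                   # km/s, the "ground truth" we pretend not to know

def simulate(speed):
    """Toy seismogram: a Gaussian pulse centered at the predicted arrival time."""
    arrival = distance / speed
    return np.exp(-((t - arrival) ** 2) / 2.0)

recorded = simulate(true_speed)    # stands in for field data

speed = 2.0                        # initial subsurface guess
step = 1e-3                        # gradient-descent step size
for _ in range(200):
    # Compare the simulated seismogram with the "recorded" one.
    misfit = np.sum((simulate(speed) - recorded) ** 2)
    # Finite-difference gradient of the misfit with respect to wave speed.
    grad = (np.sum((simulate(speed + 1e-4) - recorded) ** 2) - misfit) / 1e-4
    speed -= step * grad           # nudge the model toward matching the data

# speed ends up close to the true value of 3.0 km/s
```

In real inversions the single unknown `speed` becomes millions of unknowns describing rock and sediment throughout a region, which is why each update is so expensive.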
The slow loop
With earthquakes happening so often, agencies want ground models that stay current instead of sitting unchanged for years.
Global counts from the U.S. Geological Survey put the average near 55 quakes daily, or about 20,000 yearly.
Each update in Full Waveform Inversion forces the computer to solve a huge model, then solve it again.
When a single run can take hours on a cluster, teams cannot afford the thousands of repeats they need.
Keeping low notes
Most field data is cleaned before analysis, and that step removes the fastest wiggles that drive up computing time without improving the match.
To strip out that excess motion, seismologists apply low-pass filters, tools that remove the highest-frequency shaking before comparing events.
By matching only what survives that filter, the new method avoids chasing details that the recorded signals discard.
High-frequency problems still demand more math, so the trick works best for the broader patterns used in risk mapping.
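Low-pass filtering itself is simple to illustrate. The sketch below is a pure-NumPy stand-in, zeroing frequency components above a cutoff; in practice seismologists use dedicated filters such as Butterworth designs, and the sampling rate, cutoff, and signal frequencies here are hypothetical.

```python
import numpy as np

fs = 100.0                                  # samples per second (hypothetical)
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 1.0 * t)          # 1 Hz motion that risk maps care about
fast = 0.5 * np.sin(2 * np.pi * 20.0 * t)   # 20 Hz "wiggles" to discard
record = slow + fast

# Transform to the frequency domain, drop everything above a 5 Hz cutoff,
# and transform back: a minimal low-pass filter.
spectrum = np.fft.rfft(record)
freqs = np.fft.rfftfreq(record.size, d=1 / fs)
spectrum[freqs > 5.0] = 0.0
filtered = np.fft.irfft(spectrum, n=record.size)
```

After filtering, only the slow component survives, so a simulation tuned to match `filtered` never pays for resolving the 20 Hz detail.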
A smaller system
Instead of solving every equation directly, the team built a small stand-in model that could mimic the full one.
They called it model order reduction, a shortcut that keeps only the most important math, then reuses it.
During setup, a handful of full simulations taught the reduced model what kinds of wave behavior were possible.
Later runs used that condensed library, producing new results far faster while staying stable across varied subsurface speeds.
Testing earthquake simulations
For a proving ground, the researchers used a two-dimensional subsurface model of a seismically active region in the Netherlands.
Across that virtual terrain, they changed the assumed wave speeds and still matched seismograms with far fewer unknowns.
Stability mattered because wave models can blow up when shortcuts drop the wrong pieces, producing spurious motion and arrival times.
Results like that make it easier to try the approach in other regions, even when geology varies sharply.
Risk maps get faster
Better subsurface models feed into the shaking forecasts that guide retrofits, land-use rules, and emergency drills.
In the United States, annual earthquake losses average about $14.7 billion, reflecting how many people live in hazard zones.
When simulations run quickly, analysts can test more rupture scenarios and keep local maps updated as new data arrives.
Cities still need strong building enforcement, yet faster modeling can show where limited dollars reduce risk the most.
Future of earthquake simulations
Even a faster model needs good inputs, so the method still depends on where sensors capture clean wave records.
Sparse stations can miss key rock boundaries, and that uncertainty can keep simulations from matching what people actually feel.
“If you get a good picture of the subsurface, you have a better idea of assessing the risk of future earthquakes,” said Smetana.
New regions will still require careful setup and validation, since a shortcut cannot correct a bad starting model.
Fast wave simulations can make underground imaging practical enough for routine use, rather than rare runs on shared machines.
Next steps include testing the method in full three-dimensional settings, where coastlines and deep basins complicate the waves.
The study is published in SIAM Journal on Scientific Computing.