Shortest path algorithms sit at the heart of modern graph theory and many of the systems that move people, data, and goods around the world. 

After nearly seventy years of relying on the same classic approach, researchers have now devised a faster way to solve the problem.



The new method beats the long-standing champion, Dijkstra’s algorithm, on networks that may contain millions of connections.

It comes from a collaboration across several leading universities, and it challenges long-held beliefs about what is possible in pathfinding.

Shortest path algorithms

The work was led by Ran Duan, a tenured associate professor at Tsinghua University in Beijing. His research focuses on graph algorithms and data structures, the mathematical tools behind many modern networks.

Shortest path problems show up in navigation apps, internet routing, power grids, and even in planning moves in certain games. 

All of those systems can be modeled as a graph, a network of points connected by lines with numerical costs.

“Shortest paths is a beautiful problem that anyone in the world can relate to,” said Mikkel Thorup, a computer scientist at the University of Copenhagen.

Finding one good route is hard enough, but many systems need distances from one starting point to every other node in the network. 

Computer scientists call this the single-source shortest paths problem: computing the distance from one source node to every other node in the graph.
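As a concrete illustration (the node names and costs here are invented for this example), such a network can be modeled as a dictionary mapping each node to its outgoing weighted edges:

```python
# A toy road network: nodes are places, weighted edges are travel costs.
graph = {
    "home":   [("store", 2), ("park", 5)],
    "store":  [("park", 1), ("office", 7)],
    "park":   [("office", 3)],
    "office": [],
}
# Single-source shortest paths asks: starting at "home", what is the
# cheapest total cost to reach every other node?
# Here the best route to "office" is home -> store -> park -> office,
# with cost 2 + 1 + 3 = 6.
```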

Where old algorithms fell short

Back in 1959, Edsger Dijkstra published a method that builds shortest paths by repeatedly expanding the closest frontier node. 

Modern implementations use a data structure called a priority queue, a tool that keeps items ranked by a numerical distance.

Each time the queue picks the next node, it performs work similar to keeping a list of many items in sorted order. 

Recent theoretical results show that if an algorithm must output nodes fully sorted by distance, Dijkstra’s approach is essentially as fast as possible.

This link between path finding and sorting created a sorting barrier, the belief that no algorithm could beat sorting time on general sparse graphs. 

For roughly four decades, most experts assumed that any further speedup would require new hardware tricks rather than a new algorithmic idea.

For large sparse graphs, classic analyses show Dijkstra’s algorithm taking about m operations for the m edges, plus an extra term proportional to n times log n for the n nodes. 

That log n factor comes from the need to keep the frontier effectively sorted inside the priority queue.
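The classic approach described above can be sketched in a few lines. This is a standard textbook implementation using Python’s heapq as the priority queue, not code from the new paper:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths via Dijkstra's algorithm.

    graph: dict mapping node -> list of (neighbor, weight),
           with all weights non-negative.
    Returns a dict of shortest distances from source.
    """
    dist = {source: 0}
    pq = [(0, source)]  # priority queue of (tentative distance, node)
    while pq:
        d, u = heapq.heappop(pq)  # expand the closest frontier node
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Each heap operation costs time logarithmic in the queue size, which is where the log n term in the running time comes from.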

Breaking the sorting barrier

The new algorithm attacks the single-source shortest paths problem directly on directed graphs with non-negative edge weights. 

Rather than sorting everything, it works with small groups of representative nodes from each region of the network.

At any moment, the algorithm keeps track of a frontier, the set of nodes whose tentative distances are known but whose neighbors need exploring. 

Earlier algorithms scanned that entire set again and again, but the new approach clusters nearby frontier nodes and works with a handful of representatives.

To decide which representatives are worth exploring, the algorithm borrows a few carefully chosen steps from the classic Bellman-Ford algorithm. 

Those steps temporarily scan all edges around the frontier to spot especially influential nodes that lie on many of the most important paths.
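The paper’s actual use of these steps is far more intricate, but the relaxation primitive it borrows is easy to show. The sketch below (the function name and the round limit k are choices made for this example) runs a fixed number of Bellman-Ford rounds; after k rounds, distances are exact for every node whose shortest path uses at most k edges:

```python
def bellman_ford_rounds(edges, nodes, source, k):
    """Run at most k rounds of Bellman-Ford edge relaxation.

    edges: list of (u, v, weight) directed edges with non-negative weights.
    After k rounds, dist[v] is exact for every node v whose shortest
    path from source uses at most k edges.
    """
    INF = float("inf")
    dist = {v: INF for v in nodes}
    dist[source] = 0
    for _ in range(k):
        # Snapshot so each round extends paths by exactly one edge.
        new_dist = dict(dist)
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
        if new_dist == dist:
            break  # no change; all distances are already exact
        dist = new_dist
    return dist
```

Unlike Dijkstra’s algorithm, these rounds never need to pick the closest node, so they involve no sorting at all; the trade-off is that each round scans every edge.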

On sparse graphs, the new method runs in roughly m times log n raised to the two-thirds power, which grows more slowly than the m plus n times log n cost of Dijkstra’s algorithm. 

That improvement breaks the longstanding sorting barrier and shows that Dijkstra’s algorithm is not the final word on single-source shortest paths.

Why this matters beyond theory

In the short term, this advance is likely to show up first in research code and specialized tools that analyze huge networks offline. 

Large web companies, scientific labs, and cloud providers run massive graph analytics jobs where even a modest speed gain can save time and energy.

This work earned a Best Paper Award at one of the top theory conferences in computer science.

That recognition reflects a broader sense that sorting-style bottlenecks in other algorithms might also be more fragile than they once appeared.

Conceptually, Duan and colleagues showed that it is possible to find exact shortest paths without insisting that every step visits nodes in distance order. 

That idea, using partial orderings instead of full sorting, may inspire similar shortcuts for other problems that juggle large collections of weighted options.

Over time, simpler versions of this approach could be folded into standard programming libraries and speed up routing, recommendation, and scheduling software. 

For anyone learning algorithms today, this result will likely join the standard stories of how computer science keeps pushing past old limits.

The study is published in the Proceedings of the 57th Annual ACM Symposium on Theory of Computing.
