Dijkstra Defeated : New Fastest Shortest Path Algorithm explained

Breaking the Sorting Barrier for Directed Single-Source Shortest Paths explained with example

Amidst the GenAI wave, we’ve got a breakthrough research paper: a new shortest-path algorithm has been discovered that is even faster than our very own “Dijkstra”, an algorithm etched into the heart and soul of every computer science graduate.


  • Dijkstra’s algorithm gives you shortest paths from a single source in O(m + n log n) time on graphs with non-negative weights.
  • Bellman-Ford handles negative weights and hop limits, but pays with an O(mk) runtime if you only want paths with ≤ k edges (and O(mn) for the full problem).


For decades, O(n log n) looked like a wall, known as the sorting barrier, that no SSSP (Single-Source Shortest Path) algorithm could breach on sparse graphs (where m is close to n).

Enter: Breaking the Sorting Barrier for Directed Single-Source Shortest Paths

The new algorithm shatters that assumption. It’s the first deterministic algorithm that solves SSSP on directed real-weighted graphs in O(m log^(2/3) n) time under the comparison-addition model.

What’s wrong with Dijkstra and Bellman-Ford?

Before diving deep into this blog post, I assume you understand the Dijkstra and Bellman-Ford algorithms. If not, ChatGPT can be of great help.

Dijkstra relies on a priority queue to always pick the vertex with the smallest tentative distance. That’s great until your queue swells to n items. You’re sorting them, whether you want to or not, and that’s where the n log n comes in.

Bellman-Ford, on the other hand, doesn’t sort. It just blindly relaxes all edges up to k times. This works for limited-hop paths but wastes cycles on edges that aren’t useful. It can also be needlessly slow, especially when you’re looking for shortest paths across long distances.

Both approaches end up being bottlenecked either by too much sorting (Dijkstra) or too much brute-force edge relaxation (Bellman-Ford).

A very important definition to remember is that of relaxation:

In path-finding, relaxing an edge (u, v) means: check if going through u is a better way to reach v. If yes, update the best-known distance to v. In Dijkstra and Bellman-Ford, you relax edges over and over until the distances stabilize.
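The relaxation step described above fits in a few lines. This is a minimal sketch; the `dist` dictionary and `relax` helper are illustrative names, not from the paper:

```python
import math

def relax(u, v, w, dist):
    """Relax edge (u, v) of weight w: if reaching v through u
    (cost dist[u] + w) beats the best-known distance to v, update it."""
    if dist[u] + w < dist[v]:
        dist[v] = dist[u] + w
        return True   # the estimate improved
    return False      # nothing changed

dist = {"s": 0, "a": math.inf}
relax("s", "a", 2, dist)   # dist["a"] becomes 2
relax("s", "a", 5, dist)   # 5 > 2, so dist["a"] stays 2
```

Both Dijkstra and Bellman-Ford are built entirely out of calls to a routine like this; they differ only in the order and number of calls.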

What’s the key idea?

The new algorithm blends the structural insight of Dijkstra (using a frontier set of nodes) with the brute-force reach of Bellman-Ford (relax all edges up to k steps), but with careful pruning and no full sorting.

A few core ideas:

  • Frontier Shrinking: Instead of keeping a massive set of candidate nodes (S) to pull from, it aggressively reduces this to a much smaller set of pivots.

Pivots: These are nodes whose subtrees cover a large enough part of the graph to be useful.

  • Relaxation is still done like in Bellman-Ford, but only from nodes in this smaller pivot set.
  • Divide and conquer: The algorithm recursively partitions the graph into smaller subproblems. This keeps the total work manageable even across multiple recursion levels.

How it works: A step-by-step example

Let’s say we have 10 nodes in a directed graph. The source is node s. We’re computing shortest paths from s, and we set k = 2.

Initial state:

  • Only s is complete (distance 0).
  • All other nodes have infinite tentative distances.

Frontier setup:

  • Add s to the frontier set S.

Find pivots:

  • Relax all outgoing edges from s, but only for k=2 steps.
  • Suppose we discover that nodes a, b, and c are within 2 hops and get actual distance values.
  • These nodes are marked complete.
  • Among these, only a and b have large enough reachability trees (each covering 2+ nodes). These become pivots.
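One hedged way to sketch this pivot selection (function and variable names are my own, not the paper's): run k synchronous relaxation rounds from the source while recording parent pointers, then keep as pivots the nodes whose shortest-path subtree covers at least k nodes. Using the edge weights of the 10-node graph from the worked example below, this indeed singles out a and b:

```python
import math

def find_pivots(edges, vertices, source, k):
    dist = {v: math.inf for v in vertices}
    dist[source] = 0
    parent = {}
    for _ in range(k):                      # k bounded relaxation rounds
        new_dist = dict(dist)
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
                parent[v] = u
        dist = new_dist
    # credit each reached node to its ancestor directly below the source
    subtree = {}
    for v in parent:
        x = v
        while parent[x] != source:
            x = parent[x]
        subtree[x] = subtree.get(x, 0) + 1
    return {v for v, size in subtree.items() if size >= k}

# edge list transcribed from the worked example in this post
edges = [("s","a",2), ("s","b",7), ("s","c",9),
         ("a","d",2), ("a","e",4), ("b","e",1), ("b","f",3),
         ("c","f",2), ("d","g",3), ("e","g",2), ("e","h",4),
         ("f","h",1), ("g","i",3), ("h","i",2)]
find_pivots(edges, "sabcdefghi", "s", 2)   # → {"a", "b"}
```

Here a's subtree is {a, d, e} (size 3) and b's is {b, f} (size 2), both ≥ k = 2, while c's subtree is just {c}, so c is discarded.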

Recursive calls:

  • For each pivot, run a subroutine to find more nodes they can reach with distance less than some upper bound B.
  • This subroutine continues relaxing edges from the pivots, updating tentative distances of new nodes they reach.

Repeat:

  • After each subproblem finishes, update the global distance estimates and frontier set.
  • Continue shrinking the frontier and repeating the process until no more nodes need updating.

Termination:

  • When all distances are known and no frontier remains, we’re done.

This recursive, bounded-partitioning strategy avoids maintaining a fully sorted order of distances, unlike Dijkstra. And it avoids mass relaxing of irrelevant edges, unlike Bellman-Ford.

Let’s use a real graph now

We start with d(s)=0, all others ∞. S = {s} is the frontier at the top level.

Top level (frontier S = {s}), k = 2 relaxations

We perform two global relaxation rounds (like two Bellman–Ford sweeps) starting from S.

After Relaxation 1 (paths of length ≤ 1 from s)

Relax all edges out of currently known nodes (initially only s):

  • s→a → a = 0 + 2 = 2
  • s→b → b = 0 + 7 = 7
  • s→c → c = 0 + 9 = 9
s=0, a=2, b=7, c=9, d=∞, e=∞, f=∞, g=∞, h=∞, i=∞

After Relaxation 2 (paths of length ≤ 2 from s)

Now relax edges out of nodes discovered above (a, b, c):

From a (2):

  • a→d → d = 2 + 2 = 4
  • a→e → e = 2 + 4 = 6

From b (7):

  • b→e → candidate 7 + 1 = 8 → e remains 6
  • b→f → f = 7 + 3 = 10

From c (9):

  • c→f → candidate 9 + 2 = 11 → f stays 10

State after 2nd relaxation:

s=0, a=2, b=7, c=9, d=4, e=6, f=10, g=∞, h=∞, i=∞
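The two relaxation rounds above can be reproduced as synchronous sweeps, where each round only uses distances that were known before the round began (a sketch, with the edge list transcribed from the steps above):

```python
import math

edges = [("s","a",2), ("s","b",7), ("s","c",9),
         ("a","d",2), ("a","e",4), ("b","e",1), ("b","f",3),
         ("c","f",2), ("d","g",3), ("e","g",2), ("e","h",4),
         ("f","h",1), ("g","i",3), ("h","i",2)]

dist = {v: math.inf for v in "abcdefghi"}
dist["s"] = 0
for _ in range(2):                     # k = 2 rounds
    new_dist = dict(dist)
    for u, v, w in edges:
        if dist[u] + w < new_dist[v]:  # relax edge (u, v)
            new_dist[v] = dist[u] + w
    dist = new_dist
# dist now matches the state above: s=0, a=2, b=7, c=9,
# d=4, e=6, f=10, and g, h, i still at infinity
```

Note the snapshot `new_dist`: it ensures a round only propagates one hop, matching the "paths of length ≤ k" reading of the k rounds.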

Which nodes are finished at top level?

The algorithm finalizes (marks finished) any vertex whose true shortest path from the frontier uses ≤ k = 2 hops. By inspection:

  • a path = s→a (1 hop) → finished
  • b path = s→b (1 hop) → finished
  • c path = s→c (1 hop) → finished
  • d path = s→a→d (2 hops) → finished
  • e path = s→a→e or s→b→e (2 hops) → finished
  • f path = s→b→f or s→c→f (2 hops) → finished

So finished set after top level:
Finished = {s, a, b, c, d, e, f} with distances as above.

Remaining nodes are g, h, i — they require >2 hops from s (indeed g and h need 3 hops, i needs 4), so they are NOT finished and become the “hard” part the algorithm will handle next.

We’re at the point in our 10-node graph where, after the top-level with k=2, we have:

  • Finished (settled): s, a, b, c, d, e, f
  • Unfinished: g, h, i

Step 1. Who is the frontier now?

Here’s the rule of this algorithm:

  • The frontier in the next recursion is the set of already finished vertices that border the unfinished region.

So here, the frontier is {d, e, f}, because they are finished and they have outgoing edges into g,h,i.

  • We do not start with g,h,i directly as frontier (because they’re not yet finished, so not safe).

Step 2. First relaxation in the recursive round

From d=4, e=6, f=10, relax all their outgoing edges simultaneously:

  • d→g → candidate = 4+3=7
  • e→g → candidate = 6+2=8 → we keep min=7
  • e→h → candidate = 6+4=10
  • f→h → candidate = 10+1=11 → min=10

So after this relaxation:

g = 7, h = 10

Both g and h are now within 1 hop of the recursion frontier → they are finished in this recursive call.

Step 3. Second relaxation in recursion

Now we include the newly finished nodes {g,h} as part of the boundary. Relax their outgoing edges:

  • g→i → 7+3=10
  • h→i → 10+2=12 → min stays 10

So now: i=10 is also finished (within 2 hops of the recursion frontier {d,e,f}).

The recursion terminates here with distances:
s=0, a=2, b=7, c=9, d=4, e=6, f=10, g=7, h=10, i=10.
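The whole walkthrough fits in one short sketch: two top-level rounds settle {s, a, b, c, d, e, f}, then two more rounds, now fed by the finished boundary {d, e, f}, settle g and h and finally i:

```python
import math

edges = [("s","a",2), ("s","b",7), ("s","c",9),
         ("a","d",2), ("a","e",4), ("b","e",1), ("b","f",3),
         ("c","f",2), ("d","g",3), ("e","g",2), ("e","h",4),
         ("f","h",1), ("g","i",3), ("h","i",2)]

def k_rounds(dist, k):
    """k synchronous Bellman-Ford-style relaxation rounds."""
    for _ in range(k):
        new_dist = dict(dist)
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
        dist = new_dist
    return dist

dist = {v: math.inf for v in "abcdefghi"}
dist["s"] = 0
dist = k_rounds(dist, 2)   # top level: settles a..f
dist = k_rounds(dist, 2)   # recursive round from {d, e, f}: settles g, h, i
# final: s=0 a=2 b=7 c=9 d=4 e=6 f=10 g=7 h=10 i=10
```

Because the boundary nodes d, e, f already hold exact distances after the top level, simply continuing the rounds is equivalent to the recursive call in this small example.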

Summary / Intuition of what changed vs Dijkstra

  • With k = 2, the algorithm finishes all nodes whose shortest path uses ≤ 2 hops in one top-level batch (here a,b,c,d,e,f).
  • The “far” nodes (g,h,i) need more hops; they are handled in a smaller recursive problem, where you again do at most k relaxations from the now-known boundary to settle the next layer.
  • Dijkstra would instead interleave exploration node-by-node via a global priority queue. Here we do bounded relaxations that finalize whole layers (by hop-count) at once and only recurse for the remainder.
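For comparison, here is textbook Dijkstra on the same graph. It reaches the same distances, but settles vertices one at a time through a priority queue rather than in hop-bounded layers:

```python
import heapq
import math

# adjacency list of the same 10-node example graph
adj = {"s": [("a",2),("b",7),("c",9)], "a": [("d",2),("e",4)],
       "b": [("e",1),("f",3)], "c": [("f",2)], "d": [("g",3)],
       "e": [("g",2),("h",4)], "f": [("h",1)],
       "g": [("i",3)], "h": [("i",2)], "i": []}

def dijkstra(adj, source):
    dist = {v: math.inf for v in adj}
    dist[source] = 0
    pq = [(0, source)]              # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:             # stale queue entry, skip
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:     # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

dijkstra(adj, "s")   # same distances as the layered walkthrough above
```

The heap is exactly the sorting work the new algorithm avoids: every settled vertex costs a log-factor of queue maintenance here.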

Key assumptions behind the new algorithm

  • No two paths have equal length. The authors enforce a total order on paths using length, hop count, and vertex labels, so ties can always be broken consistently.
  • Constant-degree graphs. Any graph is transformed into one with degree ≤2 by replacing high-degree nodes with cycles of dummy vertices. This ensures predictable performance bounds.
  • Comparison-addition model. Only comparisons and additions are allowed, no fancy tricks or bit-level hacks. That makes the result applicable even to real weights.
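The constant-degree transformation above can be sketched as follows. This is my own illustrative implementation, not the paper's exact construction: each vertex of total degree d ≥ 2 becomes a zero-weight cycle of d dummy vertices, one per incident edge, so path lengths are preserved:

```python
from collections import defaultdict

def reduce_degree(edges):
    slot = defaultdict(int)          # incident-edge counter per vertex
    new_edges = []
    for u, v, w in edges:            # attach each edge to its own dummy
        new_edges.append(((u, slot[u]), (v, slot[v]), w))
        slot[u] += 1
        slot[v] += 1
    for v, d in slot.items():
        if d >= 2:                   # connect v's dummies in a 0-weight cycle
            for i in range(d):
                new_edges.append(((v, i), (v, (i + 1) % d), 0))
    return new_edges

new_edges = reduce_degree([("s","a",2), ("s","b",7), ("s","c",9),
                           ("a","d",2), ("a","e",4)])
# every dummy vertex now has in-degree <= 2 and out-degree <= 2
```

Each dummy carries at most one cycle edge in, one cycle edge out, and one original edge, and the zero-weight cycle edges leave all distances unchanged, at the cost of O(m) extra vertices.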

Important considerations

Though the algorithm is a breakthrough, certain ideas need to be kept in mind:

  • It is deterministic. It’s not relying on randomization to speed things up. That’s significant; many recent breakthroughs rely on randomness for their gains.
  • Space usage: Though time complexity improves, the recursive structure and multiple auxiliary data structures (like the frontier set, heap-like queues, etc.) may increase space usage.
  • Not a drop-in replacement: The algorithm is complex and specialized. For small graphs or graphs where the n log n overhead isn’t dominant, Dijkstra may still be more practical.

Final thoughts

The algorithm is a clean break from old assumptions. Dijkstra was never truly optimal; it just got comfortable. The new algorithm challenges that, proving you can go faster if you stop sorting and start partitioning. The blend of Bellman-Ford’s unstructured flexibility with Dijkstra’s local accuracy, all framed inside a structured divide-and-conquer setup, is what makes it clever.

We’re not rewriting the rulebook on SSSP, but we’re finally tearing out the page that says O(n log n) is as fast as you can go.


Dijkstra Defeated : New Fastest Shortest Path Algorithm explained was originally published in Data Science in Your Pocket on Medium, where people are continuing the conversation by highlighting and responding to this story.
