Even Grok knows that neurosymbolic hybrid power is the future

Claude Code, an impressive and possibly game-changing “coding agent” that helps programmers write code faster, is the single biggest advance in AI since the LLM.

And the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.

That changes everything.

The source code leak proves it. Tucked away at its center is a 3,167-line kernel called print.ts.

print.ts is a pattern matcher. And pattern matching is supposed to be the *strength* of LLMs.

But Anthropic figured out that if you really need to get your patterns right, you can’t trust a pure LLM. They are too probabilistic. And too erratic.

Instead, the way Anthropic built that kernel is straight out of classical symbolic AI. For example, it’s in large part a big IF-THEN conditional, with 486 branch points and 12 levels of nesting — all inside a deterministic, symbolic loop that the real godfathers of AI, people like John McCarthy and Marvin Minsky and Herb Simon, would have instantly recognized.
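To make the contrast concrete, here is a minimal sketch of what a deterministic, symbolic pattern-matching kernel looks like in general. The names, rules, and structure are all hypothetical illustrations of the classical IF-THEN style, not the actual print.ts code:

```typescript
// Hypothetical sketch of a classical symbolic rule engine.
// Each Rule is an explicit IF-THEN branch: a deterministic
// predicate plus a deterministic action, not a learned weight.
type Rule = {
  matches: (input: string) => boolean; // IF this condition holds...
  apply: (input: string) => string;    // ...THEN take this action
};

// The rule base: hand-written symbolic branches.
const rules: Rule[] = [
  { matches: (s) => s.startsWith("ERROR:"), apply: (s) => `[stderr] ${s}` },
  { matches: (s) => /^\d+$/.test(s),        apply: (s) => `[number] ${s}` },
  { matches: (_) => true,                   apply: (s) => `[text] ${s}` }, // catch-all branch
];

// The deterministic loop: the same input always yields the same output,
// with no sampling and no probabilities anywhere.
function classify(lines: string[]): string[] {
  return lines.map((line) => {
    for (const rule of rules) {
      if (rule.matches(line)) return rule.apply(line); // first matching branch wins
    }
    return line; // unreachable given the catch-all rule above
  });
}
```

The point of the sketch is the guarantee it offers: every branch is inspectable, every output is reproducible, and that is exactly what a purely probabilistic model cannot promise.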

Put differently: when push came to shove, Anthropic went exactly where I have said for 25 years the field needed to go, to Neurosymbolic AI.

That’s right, the biggest advance since the LLM is neurosymbolic. AlphaFold, AlphaEvolve, AlphaProof, and AlphaGeometry are all neurosymbolic, too; so is Code Interpreter; when you are calling code, you are asking symbolic AI to do an important part of the work.
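The "calling code" division of labor can be sketched in a few lines. Everything here is a hypothetical stand-in (the `neuralPropose` stub in particular is not a real model call); the shape, not the specifics, is the point: the neural side proposes, the symbolic side computes deterministically.

```typescript
// Stand-in for a neural component: in a real system this would be
// an LLM translating a natural-language question into a formal expression.
function neuralPropose(question: string): string {
  return "12 * 34"; // pretend the model produced this expression
}

// Symbolic side: a tiny deterministic evaluator for "a * b" expressions.
// Exact arithmetic, no sampling, no chance of a hallucinated digit.
function symbolicEval(expr: string): number {
  const m = expr.match(/^(\d+)\s*\*\s*(\d+)$/);
  if (!m) throw new Error(`unsupported expression: ${expr}`);
  return Number(m[1]) * Number(m[2]);
}

// The hybrid: neural proposes, symbolic disposes.
function answer(question: string): number {
  return symbolicEval(neuralPropose(question));
}
```

This is the same architecture, writ small, behind Code Interpreter: the important part of the work, the part that must be *right*, is handed to symbolic machinery.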

Claude Code isn’t better because of scaling. It’s better because it is neurosymbolic. Anthropic accepted the importance of using classical AI techniques alongside neural networks — precisely the marriage I have spent my career advocating.

It’s both massive vindication for me personally (see my 2019 debate with Yoshua Bengio for context, or my 2001 book, The Algebraic Mind), and for the hundreds of other researchers who have stood by neurosymbolic AI even when prominent people like Geoff Hinton wrongly disparaged us for years.

Still, Claude Code ain’t perfect, or even close.

What we really need to do to get trustworthy AI, rather than the current unpredictable “jagged” mess, is to go in the knowledge-, reasoning-, and world-model-driven direction I laid out in 2020, in an article called The Next Decade in AI, in which neurosymbolic AI is just the *starting point* in a longer journey.

Read that article if you want to know what else we need to do next. The first part has already come to pass. In time, the other three will, too.

Meanwhile, the implications for the allocation of capital are pretty massive: smartly adding in bits of symbolic AI can do a lot more than scaling alone, and even Anthropic has now discovered (though it hasn’t acknowledged this publicly) that scaling is no longer the essence of innovation.

The paradigm has changed.

P.S. For a good recent review of neurosymbolic AI, read this: