When I was in college, I became fascinated with one of the consequences of Einstein’s relativity. Simply put, if something could travel faster than the speed of light, the ordering of events would stop making sense.

Imagine a bullet that travels faster than its own light. In that scenario, you could in principle see a bird fall from the sky before you saw the hunter pull the trigger. Cause and effect would be reversed, not because the events changed, but because the geometry of observation did.

The physics was consistent. It told you that causality, the core assumption of both science and experience, wasn’t absolute but constrained by structure. Violate the structure, and the causal chain snaps.
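For the curious, the standard special-relativity sketch makes this precise (textbook notation here, not anything specific to my old course notes). Under a Lorentz boost at speed \(u\), the time separation between two events transforms as

\[
\Delta t' = \gamma\left(\Delta t - \frac{u\,\Delta x}{c^{2}}\right),
\qquad
\gamma = \frac{1}{\sqrt{1 - u^{2}/c^{2}}}.
\]

If a signal covers a distance \(\Delta x\) in time \(\Delta t\) at speed \(v = \Delta x/\Delta t > c\), then any observer moving at a speed \(u\) with \(c^{2}/v < u < c\) measures \(\Delta t' < 0\): for that observer, the bird falls before the trigger is pulled. Keep every signal under the speed limit and no such observer exists, so the ordering of cause and effect stays intact.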

Is AI Distorting Reality in the Same Way?

I argue that something similar is happening in today’s information environment, and AI is at the center of it. My sense is that we are entering a period where the causal structure of knowledge itself is becoming unreliable.

Now, we’re not changing the laws of nature, but we are changing our dependence on them. Today, we’re surrounded by ubiquitous technological output that has no underlying causal connection to lived reality.

Think about what large language models sometimes produce. A hallucinated answer doesn’t announce itself as a mistake. It arrives as a fully “baked” response, cloaked in believability.

Sycophancy does the same thing in a social context. The LLM doesn’t really weigh your argument and then agree. It reflects your expectation back to you.

Deepfakes are another powerful example. They manufacture evidence itself—evidence of something that didn’t happen or never existed.

Separately, each is a problem. Together they’re something else. They break the link between what we accept as knowledge and the causal work that’s supposed to produce it.

In physics, non-causal reality is exotic, to say the least. It takes extreme conditions to get there. In today’s AI-driven information ecology, it’s becoming routine. Conclusions arrive, agreement shows up, and evidence appears, all without anything having actually happened.

And here’s what I think makes this version particularly dangerous: When relativity scrambles causal ordering, you feel the dissonance—something’s wrong and you know it. But when AI fabricates a confident answer or a deepfake contrives a moment that never occurred, there might be no dissonance at all. In fact, it might be comforting. The forgery is fluent enough that our cognition often accepts it as real.

I’ve called this borrowed certainty before. But what we’re borrowing now isn’t just AI’s conclusions. It’s the causal architecture underneath them. And that architecture can now be assembled without actual causes. With AI, the feeling of knowing is completely decoupled from the reality of knowing.

The optimists say more information means more clarity. I’m skeptical. You can’t fix contaminated evidence by adding more of it or by scaling it up. That’s not abundance; that’s contamination at scale.

Back in PY 251, my college physics class, I learned that causality has limits. Today, AI is teaching a very different lesson—not that causality can be broken, but that it can be convincingly faked. For someone who has spent a career trusting the link between evidence and conclusion, that’s a discovery that keeps me up at night.