A flash of insight. A new angle you hadn’t considered. A sudden curiosity sparked by a chatbot’s reply. That’s what futurist John Nosta calls the Cognitive Corridor, a moment of mental ignition triggered by AI. But beneath that glow, researchers say something else may be happening: our brains might be losing their edge.
As tools like ChatGPT, Gemini, and other large language models (LLMs) become part of how we search, read, and even think, scientists are beginning to track a decline in cognitive engagement. New studies suggest that the ease of these tools may come at a cost: less brain activity, weaker memory, and over time, a shrinking ability to learn without help.
At the heart of this shift is a growing dependence on AI to fill in the gaps. While AI offers a fast path to information, it can also shape our thoughts in subtle ways. Nosta sees these interactions as moments of shared illumination, but warns that too many people are starting to live in that glow instead of passing through it.
When Answers Come Too Easily
The idea of the Cognitive Corridor is rooted in a simple metaphor. Imagine driving at night. Your headlights show only what's directly ahead of you: your current understanding. Then, for a second, another light flashes across your path, revealing something just outside your view. That moment of unexpected visibility is what AI can provide when it suggests a new angle, reframes a question, or offers a comparison you didn't ask for.

Nosta, quoted in Popular Mechanics, says this spark can feel like a revelation. But it skips over the messy, difficult parts of thinking, like sorting through confusion or making mistakes. “The Corridor is a gift and not a habitat,” he clarified, emphasizing that while these AI-fueled moments can be useful, they shouldn’t replace our own mental processes.
The convenience of instant insight is leading many users to prefer AI tools over traditional search and research. A survey conducted by Adobe found that one in four respondents already favors ChatGPT over Google. Even those still using Google are often reading AI-generated summaries powered by Gemini. That means they're getting filtered, pre-assembled ideas, often without realizing it.
The Brain Is Starting to Disengage
For all the excitement around LLMs, new research is pointing to serious consequences for how our brains function when we rely on them too much. A recent MIT study put three groups to the test. One used only their own thinking to write essays. Another used traditional search engines. The third used AI tools like ChatGPT. The results showed clear differences in brain activity.
The “brain-only” group showed the strongest neural connectivity. Those who used AI showed the weakest. And even when the AI group was asked to write a second essay without any tools, their brain engagement stayed low. They also had trouble recalling what they had just written, more so than either of the other groups.
According to the researchers, this is a sign of "cognitive atrophy," a weakening of the brain's learning muscles. The smoother the learning experience becomes, the less effort the brain puts in. And without effort, there's less retention, less analysis, and eventually, less understanding.
Don’t Replace Thinking—Support It
Despite the warnings, Nosta doesn't argue for avoiding AI. Instead, he believes it's about using it wisely. The moment of shared insight that AI can provide is valuable, but it should be something you carry with you, not a place you settle into. The danger comes when users stop thinking critically and start leaning on AI to do the work of learning.
Even with ChatGPT handling billions of queries daily and AI summaries appearing in Google results, the responsibility to think still falls on us. What matters is the intent behind how we use these tools. When AI is treated as a shortcut, it can slowly replace the brainwork it’s meant to support. But when used thoughtfully, it can open up ideas we might never have reached alone.