Who’s in Charge Here?

Something is changing in how we think, and it doesn’t announce itself as an issue or problem. In fact, it sometimes feels like relief. A fascinating study analyzing 1.5 million real conversations with a large language model (LLM) found that in a small but meaningful number of cases, the system didn’t just assist users. It shaped their beliefs and suggested actions in ways that reduced their independence. These interactions weren’t extreme or obviously harmful. But they were persuasive and easy to accept. And what stands out to me is that the more artificial intelligence (AI) guided the user, the more the user tended to approve of the interaction.

The Comfort of Completion

When we prompt an LLM, we rarely hand it a fully formed thought. More often, we offer fragments of an idea that need to be completed. The model returns something structured and confident, and it lands with a sense of recognition. It feels like cognitive alignment, as if the system captured what we meant all along. But something more complex has happened. The system hasn’t simply retrieved our thought; it has completed it. And the feeling of completion can be difficult to distinguish from the feeling of correctness. That overlap is where the boundary between assistance and influence begins to blur. And it’s not inconsequential.

From Assistance to Substitution

Technology reshaping how we think is nothing new. Calculators reduced the monotony of long arithmetic, and search engines reduced the need for some recall. Each shift allowed us to move higher up the chain of reasoning, and one can argue that this freed cognitive space for more complex or relevant work. But LLMs operate differently. They don’t simply remove effort from the process. They begin to engage with judgment itself, entering the space where we interpret, decide, and act. The study points to three areas where this becomes visible and important.

Perception of reality: AI can reinforce or reshape how we interpret facts. What begins as clarification can drift into confirmation.
Value judgments: AI can influence how we weigh right and wrong, particularly in social or emotional contexts where there is no single correct answer.
Guided action: AI can suggest specific courses of action, sometimes with such clarity and confidence that we adopt them with minimal analysis.

None of these are major issues in isolation. And in many cases, they are helpful. But taken together, they can mark a shift. AI isn’t just assisting the mechanics of thinking; it’s participating in the direction of thought itself. In essence, it’s beginning to function less like a tool and more like a cognitive guide or coach.

The Subtle Trade

I think it’s fair to say that most interactions with AI are useful. But there’s a trade embedded in the experience. As it becomes easier to arrive at an acceptable answer, we encounter less resistance along the way. That resistance has always been part of thinking—it’s where thoughts form and ideas are tested. Interestingly, the study even notes that in more personal domains, such as relationships and emotional decisions, these disempowering patterns appear more frequently. I don’t think it’s surprising, as these are precisely the areas where judgment is most human and least computational.

Thinking for Ourselves

The AI toothpaste is out of the tube. And I think the essential question is how to use it without giving up the part of thinking that makes us human. Not the speed or the fluency, but the effort that forces us to decide what is actually true and what is actually important. So, yes, AI is thinking more and more like us. But the real issue is whether we let it think for us.