Imagine a large language model like ChatGPT or Grok that writes a perfect email for you. It feels a bit like a superpower that provides instant clarity, served up with a tap of a finger. Could it be any easier? Yet in that moment of convenience, something shifts. You’ve outsourced a small piece of your thought, and it felt effortless. Let’s dig in.
Have you ever heard of the French philosopher Jacques Ellul? I hadn’t, but I recently stumbled upon his work and almost instantly made a connection to today’s technology. Yep, he saw this coming long before LLMs. He argued that technology isn’t neutral. It doesn’t wait patiently for us to decide how we’ll use it. It carries its own relentless drive for efficiency that he called “technique.” Once a technique exists, it evolves on its own, shaping how we live, how we think, and even what we value.
About a decade ago, The New Atlantis nicely articulated how Ellul presented technology as a self-reinforcing environment, not just a set of tools. Think of it as a new atmosphere that envelops human life. LLMs are that atmosphere made visible. They don’t just help us think, they begin to change what thinking itself feels like. And the real danger isn’t loud or dramatic. It’s quiet. It’s seamless. It’s easy to miss. Sound familiar?
The Hidden Logic of AI
Ellul believed technology follows one rule, which I paraphrase here: if it can be done, it will be done. Once a capability appears, it accelerates on its own, regardless of ethics or intent. And that sentiment, expressed in a single sentence, may be one of today’s most important and most concerning ideas.
You can see this with AI. GPT-3 was a novelty. GPT-4 slipped into classrooms, offices, therapy sessions, even personal journaling. Soon it won’t be an “app” you open. It will be the background of life, unquestioned and invisible. Today, OpenAI CEO Sam Altman and former Apple design chief Jony Ive are working together on what I believe is that “next step”: establishing AI as the kind of background technology that Ellul called technique.
And this is exactly technique at work, evolving on its own. LLMs are built to optimize for coherence, relevance, and efficiency. But that statistical logic quietly dictates what we see, how we phrase our questions, and even what we come to believe is “normal.” As The New Atlantis notes, Ellul saw technology as a force that narrows the space of true choice, not by overtly restricting freedom, but by making certain paths effortless.
Are You Slipping Into Machine-Shaped Thinking?
Ellul warned that technique doesn’t just change individuals; it reshapes entire cultures. Large language models are clear proof of that. But don’t take my word for it. Ask yourself these four questions:
When was the last time you really wrestled with complexity? Or did you smooth it out with a quick AI-generated summary instead of sitting with the messy ambiguity of a book, an idea, or a problem?
Are you starting to lose your own voice? If AI drafts your reports, emails, or presentations, do you still feel the same confidence articulating ideas without its help? Are the muscles of writing and reasoning atrophying?
Do you mostly hear the “safe” answers? AI is trained on vast but biased datasets. Have you noticed how its responses often reinforce mainstream perspectives, making everything feel a little too rounded and too agreeable?
Do you feel like you know something, without really knowing it? AI’s fluency can feel like comprehension, giving you the sense of understanding without the slow work of truly learning. That’s technology’s greatest trick: making its control feel natural.
So, how did you do? If a few of these questions hit close to home, you’re not alone. This is how the shift begins, and why it feels like the most natural thing in the world.
But the risk reaches beyond individual minds to culture itself. As The New Atlantis article pointed out, technique standardizes language, flattening diverse forms of expression into a homogeneous (maybe even bland), optimized style. When LLMs write our emails, our news, even our poetry, they do more than mirror our culture; they quietly redefine it in the statistical “language of the machine.” And maybe that’s the new “ghost in the machine” we should be worried about.
Ellul’s Challenge for Us
Ellul didn’t believe we could halt technological momentum. The “tide of technique” can’t be stopped, but it can be resisted through small acts of awareness. With AI, maybe that resistance begins with intention. It means using AI to spark deeper thinking rather than allowing it to replace the effort of thought entirely. It means questioning its answers, cross-checking them, and staying curious, particularly when the output feels polished and complete. And, I’ll say it again, it means embracing the slow, sometimes frustrating struggle where real reflection thrives. That’s the cognitive friction that defines us.
Note to self: The true risk isn’t that AI will replace human thought outright. It’s that it will quietly redefine what thinking itself is, until we can no longer tell the difference.