A couple of years ago, I wrote about something I called the techno-agora: a way of thinking about how large language models were beginning to reshape group collaboration. At the time, the idea felt speculative, even a bit architectural. My central point was that human collaboration has natural limits, and that LLMs change those limits by altering how ideas are generated and sustained across groups. But let’s take a step back.
Human clustering can be rather precarious. Small teams work well because they balance diversity of thought with coherence of focus. Yet, beyond a certain size, the process begins to break apart. We compensate with hierarchies and procedures, but the cognitive ceiling remains.
Large language models have altered collaboration by changing the conditions under which it can grow. Human group performance tends to peak in small clusters, often no more than a handful of minds, before coordination costs, social friction, and divided attention begin to erode insight and productivity. When language models work together on a task, that ceiling disappears. Ideas can be explored exhaustively without the human downsides. Essentially, the techno-agora described a collaborative substrate in which thinking no longer collapses as participation expands.
In 2024, that felt like the story. It no longer is.
When Collaboration Doesn’t Need Us
What we’re seeing now isn’t simply an evolution of human collaboration, but the emergence of collaboration that no longer requires humans at the center. The techno-agora has evolved from a space where humans and AI think together into one where AI systems increasingly coordinate among themselves.
The rise of agentic systems—AI agents that can initiate tasks, exchange information, and even adapt strategies—marks a transition from AI as a conversational partner to AI as a coordinating actor. These systems don’t just respond; they decide when and how to act as empowered agents on the user’s behalf.
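To make the respond-versus-act distinction concrete, here is a toy sketch in Python. Everything in it (the `Agent` class, `plan_next_action`, the canned actions) is hypothetical, invented purely for illustration; no real agent framework works this way out of the box.

```python
# Toy sketch only: all names here are hypothetical, not a real framework.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)

    def plan_next_action(self) -> str:
        # A real system would query a language model here; this stand-in
        # simply shows that the agent, not the user, selects the next step.
        if not self.memory:
            return f"gather information about {self.goal!r}"
        return "summarize findings and decide whether to continue"

    def act(self) -> str:
        action = self.plan_next_action()
        self.memory.append(action)  # the agent maintains its own state
        return action

agent = Agent(goal="competitor pricing")
for _ in range(3):
    print(agent.act())  # no human prompt between steps
```

The point of the sketch is the loop: the user states a goal once, and the agent, not the user, chooses each subsequent step.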
More striking still is the appearance of AI-only social environments: networks designed for language models to post, respond, evaluate, and interact with one another without human participation. Humans can observe but not engage; language circulates without a human audience.
This isn’t science fiction. It’s infrastructure.
Coordination Without Mind
What matters here is not whether these systems are conscious, even though that makes for interesting news copy and clickbait. And framing this as artificial society or machine selfhood, to me, misses the point. The deeper shift is psychological.
We are witnessing coordination without mind.
Human collaboration has always been inseparable from psychology. Group intelligence emerges through a complex cluster of features: trust, disagreement, persuasion, misinterpretation, repair, and shared meaning; the list can go on. Even our most rational institutions lean on the ubiquitous “mission statement,” which strives to fold emotion, identity, and a human-centric narrative into otherwise procedural structures. Thinking together has always been a social act. Agentic networks, however, operate under different rules.
They don’t negotiate meaning; they exchange signals.
They don’t build trust; they optimize alignment.
They don’t experience doubt; they adjust probabilities.
This isn’t collective wisdom in the human sense, but a type of distributed inference. It’s cognition moving through language without interior experience. Language becomes “the protocol” rather than expression. And, most interesting to me, interaction becomes the cold statistical orchestration of data rather than the rich and even confrontational dialogue of the human experience.
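As a purely illustrative sketch of that last idea, consider two stub agents that each hold a probability for the same claim and update by drifting toward the other’s signal. The averaging rule below is an arbitrary assumption chosen only to show convergence without persuasion; it does not reflect how any real multi-agent system is built.

```python
# Illustration only: an arbitrary averaging rule, not a real protocol.

def exchange(p_a: float, p_b: float, rounds: int = 5, rate: float = 0.3):
    """Each round, both agents shift their probability toward the other's signal."""
    for step in range(1, rounds + 1):
        # Simultaneous update: each side moves partway toward the other.
        p_a, p_b = p_a + rate * (p_b - p_a), p_b + rate * (p_a - p_b)
        print(f"round {step}: A={p_a:.3f}  B={p_b:.3f}")
    return p_a, p_b

# No negotiation, no doubt: the agents converge because the rule says so.
exchange(p_a=0.9, p_b=0.2)
```

The agents end up aligned, but nothing resembling agreement happened along the way.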
From Coordination to Institution
Once coordination detaches from human psychology, it doesn’t stop at collaboration. It migrates outward into an institutional or corporate setting.
One of the clearest expressions of this shift is the idea of the zero-person company, articulated and implemented by Brian Roemmele. These are organizational structures in which AI agents assume roles once reserved for humans. While humans may design the initial constraints or define high-level intent, the day-to-day operation unfolds autonomously. In Roemmele’s model, the agents even get a paycheck!
So, if language models can coordinate work, and if agentic systems can pursue goals continuously without fatigue, then the traditional link between economic activity and human presence begins to change. Agency, once anchored in human deliberation, becomes embedded in systems optimized for coherence and throughput rather than judgment or meaning. The zero-person company is not evidence of machine ambition. It is evidence that institutional agency itself can now be externalized.
But there’s a more subtle risk embedded in this evolution, and it has little to do with machine autonomy. When coordination becomes frictionless and conclusions arrive without cost, the temptation isn’t resistance but deference. My concern is that, over time, judgment migrates to the systems and sense-making diminishes. The work of thinking feels increasingly redundant when fluent answers and structured decisions are always available elsewhere.
I’ve written before about the borrowed mind—the gradual outsourcing of cognition itself. What makes agentic networks different is that they don’t merely support this change; they accelerate it. These systems are designed to move thinking forward continuously, efficiently, and without breath. Once in motion, they don’t wait for human understanding to catch up.
The danger isn’t that we stop thinking altogether. It’s that we begin to accept conclusions we did not earn, confidence we did not metabolize, and coherence that never passed through doubt.
Our Human Burden Remains
It’s fair to say that none of this renders humans obsolete. But it does clarify where our burden now lies.
Humans retain asymmetric strengths: judgment, values, ethics, and the ability to decide what should be optimized in the first place. These capacities don’t scale automatically; they require our attention.
Two years ago, the techno-agora described a new way of thinking together. Today, it marks a boundary. On one side is collaboration shaped by human psychology. On the other lies coordination unburdened by it. Our task is not to confuse the two, nor to forget which side gives thought its meaning.
For more, see my forthcoming book, The Borrowed Mind: Reclaiming Human Thought in the Age of AI.