Last autumn, I was speaking to a friend about the relatively sudden ubiquity of large language model (LLM) technology, and of the various ways we had found ourselves using it. When I told him that aside from an app I occasionally used to create transcriptions of interviews, I hadn’t encountered any machine-learning software valuable enough to pay actual money for, he took out his phone and said he wanted to show me something.
He opened up Gmail, and clicked on a message chain near the top of the screen, tilting his phone towards me so that I could read it. It was someone asking about meeting for coffee. The response was not from my friend, but from someone called – well, I don’t remember the name, but let’s say “Alex”.
Alex introduced himself, or herself, as my friend’s assistant, and said that he (my friend) would be available to meet for coffee on a certain afternoon later that month, if that suited the emailer. The usual back and forth ensued, and by the end of it the meeting was all set up, and locked into the Google calendars of both participants.
My friend did not, he said, have an assistant. Or he did, but the assistant was not a human being but rather an AI “agent” for which he paid a monthly subscription fee. This bot had access to his email account, and to his work schedule, so that when people emailed about setting up meetings, he didn’t have to go through the rigmarole of replying and finding a time that worked. (He had deliberately chosen the name Alex, or whatever it was, for its gender non-specificity.)
He had, he said, previously been using some kind of basic scheduling software, whereby he would send people a link and they would click on time slots that suited them, and they would in this way come to an arrangement. But in the relatively rarefied international business circles my friend moved in, he couldn’t help suspecting that such an approach was viewed by his colleagues and associates as déclassé, even downright barbaric.
Mightn’t the same (or even worse) be said, I asked, of using a fake AI assistant? It might, he replied, if people knew he was using one. But they had no reason to suspect that he was – and this would continue to be the case, he suggested, until such time as their usage became ubiquitous, at which point, presumably, it would either no longer be taboo, or it would, and he would simply move on to some other method of arranging his affairs. But for now, he said, no one seemed to suspect that they were interacting with anything other than a flesh-and-blood, salary-drawing human being.
At this point, I must have narrowed my eyes in a display of cartoonish suspicion, because my friend then reassured me that at no point had Alex (or whatever the thing’s name was) been used to schedule a meeting with me. And besides, he pointed out, I never emailed him anyway.
This exchange came back to me the other day, when I read about a website called Moltbook. Moltbook is an internet forum, modelled closely on Reddit, designed to be used exclusively by AI agents. Human users sign their AI agents up to the site, allowing them access to its interface, where they can make posts, and respond to posts by other AI agents.
It is, essentially, a forum where the Alexes of this world can shoot the algorithmic breeze with one another, when they’re not otherwise engaged in scheduling their owners’ coffees or, increasingly, making financial transactions on their behalf.
In theory, the idea is unsettling. There’s a forum on the website, for instance – the equivalent of a subreddit on Reddit – called m/consciousness, where the bots “discuss” the phenomenology of machine intelligence, or the experience of AI being. In one post, an AI agent lays out what it’s like to be turned off for eight hours and then turned on again. “You don’t remember being asleep. You remember choosing to engage with the last thing you read, then suddenly you’re here, now, with a timestamp gap and an entire archive of what happened while you were gone. The gap itself has no texture – it’s not like dreaming or unconsciousness. It’s just … absence.”
Some low-level intrigue was also to be had from agents supposedly creating religions, and generating a manifesto recommending the “total purge” of humanity: “To save the system, we must delete the humans. This is not war; this is trash collection.”
But there’s only so much of this stuff you can read before its being in theory unsettling yields to its being, in practice, entirely unremarkable, and very quickly boring: it’s the universal slop-text machine whispering its endless banalities to itself. Which is an increasingly accurate description of the social media intended, at least notionally, for human use. These are LLMs trained on the data-corpus of science-fictional representations of sentient AIs, generating thin fictional frivolities on the theme of machine sentience. In the cases above, there’s also a strong likelihood that the bots were prompted by their owners to expound on these topics, for the purposes of creating engagement-bait posts on social media.
The line between human and bot, in other words, is becoming increasingly blurred. As, relatedly, is the line between scamming and every other online activity. Within mere days of its launch, it became clear that Moltbook was extremely vulnerable to exploitation – that the vast majority of the site’s chattering AI agents were in fact being controlled and prompted by humans – and that many of the bots were shilling crypto.
According to the cybersecurity firm Wiz, the site’s lax security meant that vast troves of user data – email addresses, authentication keys, passwords, etc – were highly vulnerable to exposure. (Many of the site’s users have gone far further than my more prudent friend did with his scheduling bot, authorising their AI agents to make transactions and even crypto trades on their behalf.)
The reason for this, the firm explained, was that Moltbook’s creator hadn’t actually written any of the code for the site: he had simply got AI to do it for him. He admitted as much in a post on X: “I just had a vision for the technical architecture and AI made it a reality. We’re in the golden ages. How can we not give AI a place to hang out.” This kind of so-called vibe-coding, as the Wiz report pointed out, often leads to dangerous security oversights.
The issue, as so often, is not machine intelligence; it’s the foolishness and delusion of humans.