When Jared Hewitt’s co-worker claimed last winter that Hewitt used AI to write an incident report, she did it publicly. “And I work at a day care, so she was berating me in front of children,” he says. The co-worker read the document out loud, pointing to the words juxtaposition and circumstantial as evidence of a machine-generated influence. “I don’t write in a casual way but a much more serious, precise way,” he says. “And I’ve paid the price for living in a ChatGPT society.”
It wasn’t the first time Hewitt’s prose had been pegged as AI, and he thinks he knows why. He has a stutter, and when he’s typing, he can speak uninterrupted. It is a luxury he takes full advantage of: “Once I start writing, I can’t really stop.” Like a chatbot, he goes long. He adds paragraph breaks for posts on Reddit and peppers in research, even when the subject is mundane — say, the actress Willa Fitzgerald’s role in the low-budget 2024 thriller Strange Darling. (“Between Strange Darling and newer projects like A House of Dynamite and Regretting You, her career feels like it’s steadily expanding,” he wrote in a post that one commenter complained was AI-generated, “and I have no doubt in my mind that she’ll eventually land the role that finally pushes her fully into the awards circuit, whether in film or television.”) Hewitt is also neurodivergent. “Growing up, I had a strong obsession with writing,” he says. He was always given good grades in English, but now, with the massive uptick in AI-generated text, all the time he spent happily working to improve his prose strikes him as a liability.
There’s a new entity among us, and it’s getting better at disguising itself — but it is becoming “almost too human to be credible,” as one character says about a possible robot in the Isaac Asimov short story “Evidence.” The mood is paranoid: This presence is producing a gigantic amount of language, much of it filtered through people we know, whether they’re using it for Hinge messages or LinkedIn posts (or texts from your mother on the morning of your divorce). Last week, Hachette became the first major publisher to cancel the planned publication of a book, the horror novel Shy Girl, over suspected AI use, prompting authors online to spiral about whether their own work could carry some whiff of LLM. After all, because we humans are natural parrots, ChatGPT may be changing our vocabulary even if we’ve never used it at all. But people — real people — are still writing all kinds of things too: fantasy novels, short stories, fan fiction, Reddit comments, Wikipedia pages, Fragrantica perfume reviews. Emails and incident reports and legal briefs, many of which could have been done with the help of ChatGPT or some other AI writer, yet for whatever reason were not.
The effect is that everyone is trying to figure out who is LLM and who is human. Sometimes, we are getting it wrong. “People are going off vibes,” says the historical novelist Kerry Chaput, who was horrified when a reader thought a social-media post she wrote about her neurogenic cough was ChatGPT generated. (“Stifling my voice created real, physical damage,” it read in part. “It shows how deeply we all need to feel the power of speaking our truth.”) As a genre writer, she was especially unsettled by the accusation. Authors of romance and fantasy and historical fiction “are always getting attacked,” she says. “There are word-count conventions, there are sentence conventions. There are rules to writing that we all follow.” How can she prove that the formulas she follows predate the ones ChatGPT adheres to so rigorously?
Chaput is not alone. Ines, a writer in Morocco, learned English as a third language and sometimes wonders if her attention to the rules of grammar has put her work at risk of being mistaken for something spun up by AI. “When I became freelance, I responded to an ad for a ghostwriter,” she says. “They asked me to write 3,000 words, and they gave me five days to finish it. I took my sweet time and I wrote it and I loved it. When I sent it, not even two minutes later, the person I interviewed with responded and told me I was using AI.” Ines isn’t sure why her writing sounded like it was generated by token predictions, but she has theories. Like her, ChatGPT often uses em dashes, and there’s a certain “pattern” both she and AI follow for readability, alternating short sentences with longer ones.
AI detectors have in fact been shown to be biased against non-native English writers. “The irony is maddening: You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake,” the Kenyan writer Marcus Olang’ said in a Substack post. Trained on a corpus of formal writing, ChatGPT, he thinks, “accidentally replicated the linguistic ghost of the British Empire” — the same ghost haunting the schools where he was drilled in the Queen’s English. There is what you might call a cleanliness penalty. The writers punished are the ones who have a knack for pristine grammar, so different from the clumsy-thumbed way most of us type.
Jason Bennett Thatcher, a business professor at the University of Colorado Boulder who was raised mostly across Asia, has noticed the bias too. “I go back to the books I learned to write English from, and the word choices they gave me are literally the word choices” that people associate with AI, he says — terms like boast and testament and foster. “So you have all these people coming from the Global South. They’re former Commonwealth countries. They use the same vocabulary that whoever’s coming up with these AI detectors is flagging as AI.” A journal turned down a paper he and some collaborators wrote, he says, in part because the editor believed they used ChatGPT to generate text. Most of the collaborators had learned English as a second or third language; they used AI to copyedit the work, but the words, Thatcher says, were their own.
The “AI accent” — a tonally even lilt that doesn’t stray into ums and likes — also has some overlap with what you might call a neurodivergent affect. “I’ve put my own writing into AI detectors, and I usually get between 40 to 60 percent AI,” says Hewitt, the day-care worker. “It shocks me because I can speak for myself. I’ve never once relied on AI for writing.” Carlos, a 24-year-old from Brazil, believes AI models and autistic people might have a similar media appetite, voraciously digesting large amounts of text. “Our social isolation — by choice or by exclusion — leads us to find alternative methods to emotionally connect to others,” he says, including deep immersion in comics or literature. (For him, it was an obsession with the Brazilian novelist, poet, and playwright Machado de Assis.) When he was repeatedly confronted on a Discord server for his suspiciously formal writing, “it became quite obvious to me that no matter what I said or what I showed as evidence, it wouldn’t be enough to satisfy some,” he says. Another autistic person I spoke to, a high-fantasy writer named Kari who has been accused multiple times of using AI, says she has loyal readers watch her writing sessions on a video chat. The idea is they could testify on her behalf if she is accused of writing with AI again.
Judy, an autistic kindergarten ESL teacher in Massachusetts, was accused by her principal in a meeting for supposedly using ChatGPT; the principal later apologized. Judy describes her writing tone as formal, and she avoids using emotional language. As she sees it, AI language sounds the way it does because it borrowed from autistic people first. “On the internet, if you look for things that were written to explain something clearly, a lot of the people who are able to really precisely and clearly explain something are neurodivergent people,” she says. “If ChatGPT is trawling the internet and scraping whatever it can find, it is emulating that style.” A big chunk of the internet, she argues, could very well be written by autistic writers. “A lot of my friends who are Wikipedia editors are people who have a huge passion for Star Wars, say, and they’re going to write a page about every single movie and check it regularly. And that’s their autism, but it’s also just their writing style.” Now it is also ChatGPT’s.
The particular snarl we’re in is new. But some people have been accused of sounding robotic for most of their lives. “I turned 63 this March, and literally this has been going on even before there was AI,” a handwriting instructor and remediation consultant I’ll call Sarah, who’s also autistic, tells me. “I was often accused of being some kind of robot that was running a program.” Talking to her, I’m not totally surprised: She speaks in complete paragraphs, articulates every -ing, and draws from a bundle of references including Germanic Viking runes and the Kurt Vonnegut story “Harrison Bergeron.” She has the loquaciousness of a large-language model, for better and worse.
Sarah’s social-media accounts are frequently banned; she’s tried different tactics for writing under different handles, but she usually ends up getting flagged. “I don’t know what I’m doing wrong,” she says. What we might call everyday English, with its sentence fragments and misplaced commas, is just not how she writes or talks, and though she’d be willing to adapt, she hasn’t figured out how. In the end, to her, all these people making false AI accusations seem to be enacting a simple category error: They sense that something is different, possibly “a little bit off,” but when they can’t figure out what it is, they decide they must be in the presence of a nonhuman author. “Imagine if a dog thought that a cat was a robotic dog because it didn’t quite act or look like a dog,” she says. “That’s the situation we’re in. I’m not a robotic dog; I’m a cat.”
If you prefer to read in print, you can also find this article in the March 23, 2026, issue of New York Magazine.