After feeding Han’s writing into GPT-4o, Chakrabarty fine-tuned fresh versions of the model on the work of twenty-nine other authors, including a close college friend of mine, Tony Tulathimutte. Jia Tolentino once praised Tony’s short stories, saying that his “deviant instincts crackle in nearly every line.” I’d been reading him since the early two-thousands—and yet his A.I. clone could easily have fooled me. Here’s a sample A.I.-generated line: “He finally counted 18 breaths, and, to delay longer, opened up a new doc and composed the marriage proposal he’d send to the first man to make him cum without dildos or videos.”
Chakrabarty had started his project out of intellectual curiosity, but he was growing disturbed by its implications. Pangram, an A.I.-detection program, failed to flag almost all of the prose generated by his fine-tuned models. This suggested that anyone with some storytelling skills could feed a plot into a fine-tuned chatbot, slap their name on the resulting manuscript, and try to publish it. People often minimize A.I.-generated literature—after all, we read books to access someone else’s consciousness. But what if we can’t tell the difference? When Chakrabarty returned from Japan, he invited Jane Ginsburg, a Columbia professor who specializes in copyright law, to join him and Dhillon as a co-author of a paper about the research. Ginsburg agreed. “I don’t know whether what I’m scared about is the ability to produce this content,” she told me, “or the prospect that this content could be really commercially viable.”
Chakrabarty, now a computer-science professor at Stony Brook University, recently released a preprint of the research, which has not yet been peer-reviewed. The paper notes that graduate students ultimately compared thirty A.I.-generated passages—one imitating each author in the study—with passages written by their colleagues. They weren’t told what they were reading; they were simply asked which they liked best. They preferred the quality of the A.I. output in almost two-thirds of the cases.
Reading the authors’ original passages alongside the A.I. imitations, I was startled to find that I liked some of the imitations just as much. The A.I. version of Han’s scene, about the newborn’s death, struck me as trite in places. But, to me, the line about the mother’s chant was more surprising and exact than the original. I also spotted some good bits in an imitation of Junot Díaz. In “This Is How You Lose Her,” Díaz writes, “The one thing she warned you about, that she swore she would never forgive, was cheating. I’ll put a machete in you, she promised.” To my ear, the A.I. rendition was more rhythmic and economical: “She told you from the beginning that if you ever cheated on her she would chop your little pito off.” I’d been studying Spanish for a couple of years, but I had to look up pito—a word for “whistle” that I hadn’t heard before. Google’s A.I. overview told me that, in some places, it was also slang for “penis.” Díazian enough, I figured.
When I wrote to the authors whose work was used in the study, most declined to be interviewed or didn’t respond. But a few e-mailed their thoughts. Lydia Davis wrote, “I think the point is certainly made, that AI can create a decent paragraph that might deceive one into thinking it was written by a certain human being.” Orhan Pamuk said, “I am sure soon there will be much more exact imitations.”
Díaz and Sigrid Nunez agreed to be interviewed. Over Zoom, I asked Díaz about chopping someone’s pito off. “Pito, of course, just means ‘whistle,’ ” he said, apparently perplexed. I told him that, according to the internet, it could also be a double entendre. “My memory sucks, but, in all my years as a fucking Dominican in the diaspora, that is not a thing that I have ever heard,” he told me. He thought that his doppelgänger’s vernacular was geographically and historically incoherent. “I tend to write in a very specific time-stamped Jersey slang,” he said. Plus, he added, the A.I.’s rhythm and characterization were no good.
Nunez described her A.I. copycat as “completely banal.” “It isn’t my style, my story, my sensibility, my philosophy of life—it’s not me,” she told me. “It’s a machine that thinks that’s what I’m like.” When I pointed out that skilled graduate students had found the passage well written, she questioned whether they had paid close enough attention, suggesting that they’d made thoughtless judgments so that they could return to their own writing. (She didn’t like their imitations, either.) “If I thought this reflected anything that actually had to do with my work, I’d shoot myself,” Nunez said.