{"id":208220,"date":"2026-04-24T11:03:41","date_gmt":"2026-04-24T11:03:41","guid":{"rendered":"https:\/\/www.newsbeep.com\/us-ny\/208220\/"},"modified":"2026-04-24T11:03:41","modified_gmt":"2026-04-24T11:03:41","slug":"how-ai-is-creeping-into-the-new-york-times","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us-ny\/208220\/","title":{"rendered":"How AI Is Creeping Into The New York Times"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">On Sunday, a writer named Becky Tuch <a data-event-element=\"inline link\" href=\"https:\/\/x.com\/BeckyLTuch\/status\/2035700155953893673\" rel=\"nofollow\">posted an excerpt<\/a> on X from a months-old New York Times <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/2025\/11\/21\/style\/modern-love-unfit-to-be-a-mother.html\" rel=\"nofollow noopener\" target=\"_blank\">\u201cModern Love\u201d column<\/a> that had given her pause. \u201cI don\u2019t want to falsely accuse writers\u201d of using AI, she wrote. \u201cBut this reads EXACTLY like AI slop.\u201d The excerpt\u2014from an essay by a mother who had lost custody of her son\u2014described the son\u2019s feelings, at one point, toward his mother: \u201cNot hate. Not anger. Just the flat finality of a heart too tired to keep trying.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Among the 100-plus replies to Tuch\u2019s post was <a data-event-element=\"inline link\" href=\"https:\/\/x.com\/TuhinChakr\/status\/2035742514293129375\" rel=\"nofollow\">one by an AI researcher<\/a>, Tuhin Chakrabarty. He\u2019d run the snippet from \u201cModern Love\u201d through an AI-detection tool from the start-up Pangram Labs, which flagged it as likely having been AI-generated.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">I learned about the incident from Chakrabarty, a computer-science professor at Stony Brook University. 
I\u2019d previously written about his efforts to quantify the proliferation of AI in novels self-published on Amazon. After commenting on Tuch\u2019s post, he plugged the whole column into the Pangram AI detector. The program estimated that more than 60 percent of it was AI-generated. I ran the column through four other AI-detection tools: Two of them flagged 30 percent of the work as likely AI-generated, one found no AI, and one suspected AI but offered no percentage.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Kate Gilgan, the author of the column, told me that she hadn\u2019t copied and pasted language from an AI model into her work. \u201cHowever, I did utilize AI as a tool,\u201d she added, seeking \u201cinspiration and guidance and correction.\u201d She said she\u2019d prompted various products (including ChatGPT, Claude, Copilot, Gemini, and Perplexity) to help her stay on topic in a paragraph, for example, or stick to a theme. \u201cI used AI as a collaborative editor and not as a content generator,\u201d she said. In response to questions about the column, a New York Times spokesperson noted that the paper\u2019s contracts require freelancers to abide by its <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/editorial-standards\/ethical-journalism.html\" rel=\"nofollow noopener\" target=\"_blank\">ethical-journalism handbook<\/a>, which mandates that AI use \u201cadhere to established journalistic standards and editing processes\u201d and that \u201csubstantial use of generative A.I.\u201d be clearly disclosed to readers. Asked for comment on whether Gilgan\u2019s AI use rose to the level requiring disclosure, the spokesperson said in an email: \u201cJournalism at The Times is inherently a human endeavor. That will not change. 
As technology evolves, we are consistently assessing best practices for our newsroom.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Whatever the extent of Gilgan\u2019s dependence on AI\u2014detection tools are imperfect\u2014her acknowledgment is the latest evidence of a phenomenon that people have been whispering about online for a long time: Artificial intelligence has already infiltrated prestigious media outlets and publishing houses. Last week, Hachette made national headlines when it decided to cancel the publication of a novel, Shy Girl, that appeared to include AI-generated text, which readers had identified ahead of its American release. (The novel had previously been published in the United Kingdom and is now being discontinued there. The author <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/2026\/03\/19\/books\/shy-girl-book-ai.html\" rel=\"nofollow noopener\" target=\"_blank\">told the Times<\/a> that she had not used AI to write Shy Girl, but that an acquaintance who\u2019d edited an earlier version of the novel had done so.) Last spring, the Chicago Sun-Times and The Philadelphia Inquirer were caught <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/05\/ai-written-newspaper-chicago-sun-times\/682861\/\" rel=\"nofollow noopener\" target=\"_blank\">publishing a syndicated summer-reading guide<\/a> featuring nonexistent novels; a freelancer had made it using ChatGPT. 
Besides those high-profile incidents, people have been posting for months about suspicions of AI turning up, undisclosed, in major news publications\u2014far beyond personal essays or puffy summer features.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">A note of caution: One challenge with AI detection is that the tools involved, much like the models they analyze, are still evolving. Sometimes they flag false positives or fail to catch AI-generated material. Pangram\u2019s CEO, Max Spero, acknowledged that both happen. He also warned that the percentage of AI material in a text is difficult to estimate with certainty; an article riddled with AI tells could be flagged as fully AI-generated even if it also includes some human-written text. Different detection tools give varying results.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Jenna Russell, a doctoral candidate in computer science at the University of Maryland, has been following various social-media firestorms. Often, someone will paste a screenshot from a work that they suspect contains AI material, a commenter will run it through an AI detector and post the results, others will pile on to express outrage, and then everyone will just move on. Wondering how common AI use really was, Russell and six other researchers set Pangram on thousands of articles, and found that it flagged likely AI use across the U.S. 
press\u2014including in the opinion sections of The New York Times, The Wall Street Journal, and The Washington Post\u2014suggesting that writers are turning to AI more than their readers might believe. (Although the researchers focused on opinion articles in the big publications, they also studied a small number of their news stories; among those, far fewer were flagged for AI-like language.) In October, Russell and her colleagues published <a data-event-element=\"inline link\" href=\"https:\/\/arxiv.org\/abs\/2510.18774\" rel=\"nofollow noopener\" target=\"_blank\">a preprint<\/a> of their research, which is not yet peer-reviewed; several Pangram researchers, including Spero, are co-authors.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">All three of those national newspapers have posted information about their AI policies, noting that they permit some use but prioritize being transparent about it. A spokesperson for the Journal\u2019s parent company, Dow Jones, declined to comment for this article. (I\u2019m a former Journal reporter and have also written and edited for the Times on a freelance basis.) In response to questions about its stories, a spokesperson for the Post said, \u201cOur editing process includes working to establish the authenticity of everything we publish.\u201d (The Post also creates <a data-event-element=\"inline link\" href=\"https:\/\/helpcenter.washingtonpost.com\/hc\/en-us\/articles\/44243916498587-Your-Personal-Podcast\" rel=\"nofollow noopener\" target=\"_blank\">AI-generated podcasts<\/a>, so it isn\u2019t entirely clear what their definition of authenticity is.)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The Post had tested three articles I asked about and told me that it had found lower AI likelihood through Pangram than the researchers did; one ranked as \u201cfully human written.\u201d Other detection tools suspected even less AI use in most cases. 
Spero told me that the current iteration of Pangram, which the Post used, was designed to be more conservative than the previous version (used in their research) in flagging material as AI-generated, partly for fear of spreading false accusations. But he also said that when he and Russell reran their data set of opinion articles through the current version, the underlying assessments were similar to those in the earlier iteration, including with regard to the Post. (Chakrabarty checked the \u201cModern Love\u201d column with the current version of Pangram.)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Regardless of the exact numbers, the fact remains: Some of the most trusted publications in the United States have been publishing opinions\u2014under real people\u2019s names\u2014that appear to include text generated with AI models. As AI slop has become a fixture of all kinds of online spaces\u2014our internet searches, our social-media feeds, our online bookstores\u2014major newspapers have been seen by many as a protected space, in which AI-generated content would rarely (or never) appear undisclosed. The newspapers that have survived the onslaught of the internet have benefited from the shared assumption that they can be trusted. 
The stakes of a broken social contract could not be higher, and they go far beyond the risk of a <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/health\/archive\/2021\/03\/what-pandemic-doing-our-brains\/618221\/\" rel=\"nofollow noopener\" target=\"_blank\">smooth-brained<\/a> writing style.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">When opinion articles or personal essays are published in major papers\u2014sometimes with big names attached to them\u2014they can influence societal beliefs and, in turn, the policies of governments or corporations. It has seemed fair to assume, historically, that those opinions reflect the voices and beliefs of the individuals whose names are attached to them. But AI language is something else entirely. Research has found that AI output is <a data-event-element=\"inline link\" href=\"https:\/\/gizmodo.com\/researchers-say-ai-is-homogenizing-human-expression-and-thought-2000732610\" rel=\"nofollow noopener\" target=\"_blank\">much more homogenous<\/a> than human language. Major AI companies have also acknowledged that their models can be skewed\u2014for example, toward certain cultural and political beliefs. 
Analyses of the Grok chatbot <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/2025\/09\/02\/technology\/elon-musk-grok-conservative-chatbot.html\" rel=\"nofollow noopener\" target=\"_blank\">have found<\/a> that its language often mimics that of the man behind its development, Elon Musk.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Multiple studies, including those from AI companies themselves, also demonstrate that AI output is <a data-event-element=\"inline link\" href=\"https:\/\/www.technologyreview.com\/2025\/05\/19\/1116779\/ai-can-do-a-better-job-of-persuading-people-than-we-do\/\" rel=\"nofollow noopener\" target=\"_blank\">unusually<\/a> <a data-event-element=\"inline link\" href=\"https:\/\/www.ox.ac.uk\/news\/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs\" rel=\"nofollow noopener\" target=\"_blank\">persuasive<\/a>, to the point of getting people to change their minds about political issues or candidates. A world where some self-published romance novels include synthetic turns of phrase and plot points is upsetting. One where AI models\u2019 language and perspectives creep, undisclosed, into the pages of major newspapers\u2014and therefore into public life\u2014is terrifying.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The good news is that we can do something about this. Publications can design clear policies about AI use and disclosure and require that staffers and freelancers abide by them, including by explicitly listing the requirements in contracts. This isn\u2019t a stretch: Many contracts require, for example, that contributors promise not to plagiarize. 
(The Atlantic requires contributors to attest to being \u201cthe sole author\u201d of their article, and forbids AI-generated writing or imagery without approval and disclosure.)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In addition, editors could receive training in identifying AI tells by sight; they could also use detection products. Then they could follow up with writers whose work raises questions (while avoiding jumping to conclusions based only on an editor\u2019s suspicions or a software scan). Those who violate a publication\u2019s policies could face legal or other penalties; as with plagiarizing, using AI without disclosing it would incur significant social and professional costs. Governments, too, could enact policies to rein in failures of disclosure: Legislators could legally require it in certain contexts, for example, though enforcement would surely raise free-speech challenges.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Another remedy could be for major AI companies to take some responsibility for the problem by \u201cwatermarking\u201d their products\u2019 output, making it easier to spot. 
The Journal <a data-event-element=\"inline link\" href=\"https:\/\/www.wsj.com\/tech\/ai\/openai-tool-chatgpt-cheating-writing-135b755a\" rel=\"nofollow noopener\" target=\"_blank\">reported<\/a> in 2024 that OpenAI had built a tool that could detect AI text with up to 99.9 percent certainty, but hadn\u2019t released it; one apparent factor, according to the Journal, was a survey in which some users \u201csaid they would use ChatGPT less if it deployed watermarks and a rival didn\u2019t.\u201d Asked for comment, an OpenAI spokesperson shared a <a data-event-element=\"inline link\" href=\"https:\/\/openai.com\/index\/understanding-the-source-of-what-we-see-and-hear-online\/\" rel=\"nofollow noopener\" target=\"_blank\">blog post<\/a> pointing out other obstacles; \u201cbad actors\u201d could circumvent it, for example. When I asked Chakrabarty about watermarking, he noted the technical difficulties but also raised a more existential question: \u201cWhy would Anthropic or OpenAI do it, when the whole business model is based on convincing people AI language is humanlike?\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"On Sunday, a writer named Becky Tuch posted an excerpt on X from a months-old New York 
Times&hellip;\n","protected":false},"author":2,"featured_media":208221,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[9,11,10],"class_list":{"0":"post-208220","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-new-york","8":"tag-new-york","9":"tag-new-york-headlines","10":"tag-new-york-news"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/posts\/208220","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/comments?post=208220"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/posts\/208220\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/media\/208221"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/media?parent=208220"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/categories?post=208220"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/tags?post=208220"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}