{"id":589848,"date":"2026-04-17T13:53:09","date_gmt":"2026-04-17T13:53:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/589848\/"},"modified":"2026-04-17T13:53:09","modified_gmt":"2026-04-17T13:53:09","slug":"a-new-kind-of-scandal-is-growing-online-and-aimed-at-the-wrong-target","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/589848\/","title":{"rendered":"A new kind of scandal is growing online\u2014and aimed at the wrong target."},"content":{"rendered":"<p class=\"slate-paragraph slate-graf\" data-word-count=\"99\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zkrzi001wjgm5npryjpak@published\">Over the past month, A.I. detection has been at the center of a series of controversies: <a href=\"https:\/\/www.nytimes.com\/2026\/03\/19\/books\/shy-girl-book-ai.html\" rel=\"nofollow noopener\" target=\"_blank\">Hachette pulled<\/a> the horror novel Shy Girl by Mia Ballard after detectors flagged it as substantially A.I.-generated. The New York Times <a href=\"https:\/\/www.theguardian.com\/books\/2026\/mar\/31\/the-new-york-times-drops-freelance-journalist-who-used-ai-to-write-book-review\" rel=\"nofollow noopener\" target=\"_blank\">cut ties with<\/a> a freelance book critic who admitted that an A.I. editing tool had regurgitated passages from a Guardian article into his draft. 
The <a href=\"https:\/\/www.theatlantic.com\/culture\/2026\/03\/how-ai-creeping-new-york-times\/686528\/\" rel=\"nofollow noopener\" target=\"_blank\">Atlantic reported that<\/a> a \u201cModern Love\u201d column had been flagged as more than 60\u00a0percent A.I.-generated. In certain corners of social media, A.I.-detector screenshots are shared like mug shots, and pile-ons have the grim energy of public stonings.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"94\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8c9001v3b7cvz0o7uc0@published\">This may all seem understandable\u2014people want to know if what they\u2019re reading was generated by a bot, and some argue <a href=\"https:\/\/www.thedailybeast.com\/princeton-student-edward-tian-built-gptzero-to-detect-ai-written-essays\/\" rel=\"nofollow noopener\" target=\"_blank\">they deserve to know<\/a>. However, such controversy narrows the issue of A.I.\u2019s steady encroachment to one of process, rather than impact. Drawing a red line around using chatbots to generate prose may make it easier to ignore the way that the technology may be shaping writing before one even types a single word. And a culture of callouts, scandals, and fear may prevent media and publishing from wrestling with much thornier questions of authorship.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"114\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8ca001w3b7cjcyq6n48@published\">At the center of many of these controversies is a company called Pangram, whose CEO, Max Spero, has become the go-to authority when A.I. authorship disputes erupt. On Twitter\/X, where Spero calls himself a \u201cslop janitor,\u201d a user flagged a Guardian sports journalist\u2019s writing as A.I.-generated. The publication responded that this was \u201cthe same style he\u2019s used for 11 years writing for the Guardian, long before LLMs existed. 
The allegation is preposterous.\u201d Spero quote-tweeted the exchange with a Pangram time-series analysis of 871 articles by the journalist: \u201cIt\u2019s clear that he is increasingly relying on AI. In two weeks in February he churned out nine articles classified by Pangram as fully AI-generated. Receipts below.\u201d<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"74\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cb001x3b7ce6svhj0m@published\">Or take Pangram\u2019s appearance in the Shy Girl cancellation. Readers on Reddit and YouTube had been flagging the horror novel as suspiciously A.I. for months, but then Spero ran the full manuscript and posted the result (78\u00a0percent A.I.-generated). Hachette pulled the book the day the Times piece ran. A story in the Atlantic soon followed. Spero was on LinkedIn, <a href=\"https:\/\/www.linkedin.com\/feed\/update\/urn:li:activity:7442967431586709504\/\" rel=\"nofollow noopener\" target=\"_blank\">urging publishers<\/a> to \u201cstrictly moderat[e] AI generated content\u201d and \u201cdraft and enforce robust AI-use policy.\u201d<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"77\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cb001y3b7c61akc7o4@published\">A pattern emerges: The crowd suspects a problem, then Pangram validates the suspicion, stokes the mob, and sells the solution. The impulse to dismiss all this as a detector company drumming up business runs into an issue\u2014Pangram actually works way better than you might think. Brian Jabarian, a University of Chicago <a href=\"https:\/\/ssrn.com\/abstract=5407424\" rel=\"nofollow noopener\" target=\"_blank\">economist who conducted<\/a> a rigorous independent evaluation of A.I. detectors, told me flatly, \u201cThis narrative that we shouldn\u2019t use A.I. 
detection doesn\u2019t seem to hold anymore.\u201d<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"94\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cb001z3b7cl7gryvm2@published\">Jabarian\u2019s preprint, co-authored with Alex Imas and with no disclosed financial ties to the company, tested the tool across nearly 2,000 passages and found near-zero false-positive and false-negative rates on medium-to-long texts, the length of a typical op-ed or a verbose Amazon review. Independent benchmarks confirm that Pangram <a href=\"https:\/\/aclanthology.org\/2025.genaidetect-1.45\/\" rel=\"nofollow noopener\" target=\"_blank\">outperforms every other detector<\/a> tested and <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3720553.3746665\" rel=\"nofollow noopener\" target=\"_blank\">is robust<\/a> against \u201chumanizers,\u201d or software designed to smuggle A.I. text past detectors. So when Spero posts a time-series chart of hundreds of articles showing when a journalist\u2019s output started sounding fishily like ChatGPT, I am inclined to believe it.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"111\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cb00203b7czwvt7ruu@published\">That A.I. detection is finally catching up is, on balance, a Good Thing. A.I.-generated articles already <a href=\"https:\/\/graphite.io\/five-percent\/more-articles-are-now-created-by-ai-than-humans\" rel=\"nofollow noopener\" target=\"_blank\">far outnumber<\/a> human ones. Social media is flooded with low-effort slop. According to Pangram\u2019s own research, a <a href=\"https:\/\/www.pangram.com\/blog\/pangram-predicts-21-of-iclr-reviews-are-ai-generated\" rel=\"nofollow noopener\" target=\"_blank\">fifth of peer reviews<\/a> submitted to the A.I. 
research conference ICLR are fully A.I.-generated, and <a href=\"https:\/\/arxiv.org\/pdf\/2510.18774\" rel=\"nofollow noopener\" target=\"_blank\">9\u00a0percent of American newspapers<\/a> contain undisclosed bot use. In this A.I.-powered asphyxiation of the information ecosystem, Spero has positioned himself on social media as a folk hero hauling in the oxygen tanks. You can tag his company\u2019s bot on Twitter\/X, and it will tell you whether a post is A.I. On Spero\u2019s social media to-do list: a \u201cslop hunter of the week leaderboard.\u201d<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"60\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cc00213b7chuosghvd@published\">Pangram may be great for A.I. slop, but its performance probably varies in the wild. \u201cIf you copy-paste chunks of ChatGPT with minimal edits, then Pangram is fairly accurate,\u201d Tuhin Chakrabarty, an assistant professor of computer science at SUNY, told me. \u201cIf you significantly edit an A.I.-generated text, then it becomes human, and this is a harder problem in general.\u201d<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"106\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cc00223b7cpiqujljz@published\">That matters because in the real world, A.I. use comes in a spectrum. In Pangram\u2019s newspaper study, for example, 86.5\u00a0percent of the chatbot use detected in opinion pieces at the Times, the Wall Street Journal, and the Washington Post was classified as \u201cmixed,\u201d or some unknown entanglement of human and machine. Did the writer use Claude to help with transitions? Or generate an opinion piece fully formed from ChatGPT and slap a name on it? These distinctions matter. 
Pangram\u2019s latest version now outputs scores on a continuum and is making genuine progress on these gray-zone instances, but those cases remain far less validated than the extremes.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"72\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cd00233b7caysgf5gu@published\">To complicate matters further, not everyone may equally bear the burden of false accusations. Liam Dugan, a Ph.D. student at the University of Pennsylvania whose dissertation focuses on A.I. detection and <a href=\"https:\/\/aclanthology.org\/2025.genaidetect-1.45\/\" rel=\"nofollow noopener\" target=\"_blank\">has benchmarked<\/a> the major commercial detectors, told me: \u201cFor most people, they might never, ever get a false positive. And for other people, the false positives are sort of disproportionately allocated on them because they just happen to write like A.I.\u201d<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"64\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cd00243b7czsiymswl@published\">Some A.I. detectors are <a href=\"https:\/\/arxiv.org\/pdf\/2304.02819\" rel=\"nofollow noopener\" target=\"_blank\">more likely to flag<\/a> non-native speakers of English. (According to a Pangram blog post, the company has <a href=\"https:\/\/www.pangram.com\/blog\/how-accurate-is-pangram-ai-detection-on-esl\" rel=\"nofollow noopener\" target=\"_blank\">largely solved this problem<\/a>, but there is no independent audit of this assertion.) Apart from non-native speakers, there may be other subgroups of writers whose prose has the focus-grouped sheen of ChatGPT output. Opinion writing in major newspapers, in fact, comes to mind.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"113\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cd00253b7co3c2yb3e@published\">Not only does A.I. 
keep improving, but <a href=\"https:\/\/www.vice.com\/en\/article\/youre-not-imagining-it-people-actually-are-starting-to-talk-like-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">humans are also beginning to speak and write like A.I.<\/a>, narrowing the gap that detectors rely on to make their calls. This makes A.I. detection inherently an arms race, so the performance of any given detector will likely fluctuate over time. Academics I spoke with all emphasized that the state of A.I. detection is much better today than it was in 2023 but cautioned against letting the narrative pendulum swing too far in the other direction. Jabarian told me, \u201cMaybe we went from a world where people were not using detection because it was so bad, and now maybe people think it works all the time.\u201d<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"81\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8ce00263b7ccau8ujc9@published\">And when the technicalities of A.I. detection collide with cancel culture, it does not lead anywhere productive. Take a dustup Spero found himself in a few weeks ago with the Wall Street Journal. Pangram\u2019s newspaper study identified specific op-ed writers, including three at the Journal. James Taranto, who edits those pages, responded with <a href=\"https:\/\/www.wsj.com\/opinion\/the-ai-detector-as-defamation-machine-8ba298f0\" rel=\"nofollow noopener\" target=\"_blank\">a combative piece<\/a>. He ran the flagged articles through Pangram and got different scores, contacted the accused writers, and concluded that the accusations of \u201cA.I.-generated\u201d didn\u2019t hold up.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"134\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8ce00273b7ckelgrbg9@published\">The response is instructive for what it pursued and what it avoided. Taranto investigated Pangram\u2019s consistency and found enough variation to dismiss it. 
But quibbling over individual op-eds let him sidestep some uncomfortable introspection. Even if Pangram misfires on a given op-ed, the study\u2019s broader pattern\u2014that A.I. use is showing up across major newspaper opinion pages, his own included\u2014is impossible to argue with. He did not have to ask how his editorial oversight had failed to spot a discomfiting level of undisclosed A.I. use. Basically, he wrote a hit piece on the thermometer instead of asking why he had a fever. This incident reveals why chatbot callout culture leads nowhere. Spero called out Taranto; Taranto called out Spero. Nothing changed. \u201cI think it may have been a mistake to name names,\u201d Spero told me.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"115\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cf00283b7czablk1r1@published\">But the larger issue may be that when it comes to A.I.-assisted writing, red lines perhaps are being drawn around the wrong thing. 
On Substack recently, Nicholas Thompson, the CEO of the Atlantic, <a href=\"https:\/\/substack.com\/@nxthompson\/note\/c-231144710\" rel=\"nofollow noopener\" target=\"_blank\">shared a writer\u2019s account<\/a> of using Claude to build a custom editing rubric while instructing the A.I., \u201cYou are not a co-writer.\u201d Thompson called it \u201ca cool example of how you can use A.I. to help your writing\u2014without relying on it for any actual writing.\u201d Elsewhere, <a href=\"https:\/\/depthperception.longlead.com\/p\/nicholas-thompson-atlantic-book-running-father\" rel=\"nofollow noopener\" target=\"_blank\">he has deemed<\/a> the practice of using chatbots to generate sentences \u201cunethical and wrong.\u201d This is becoming the standard position: A.I. for everything upstream of prose is acceptable; A.I. for the prose itself is a betrayal.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"169\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8cf00293b7c34hxfd7l@published\">To understand why this is a problem, consider a simple exercise I ask reporters to perform when I run workshops on journalism and A.I. I hand each reporter one of two A.I.-generated research reports on collagen supplements\u2014same underlying studies, same data, different framing. Report\u00a0A opens with positive clinical findings and mentions industry funding as a limitation. Report\u00a0B opens with the funding-bias analysis and loudly labels which results are industry funded. Report\u00a0A primes a \u201cDoes collagen work?\u201d story. Report\u00a0B primes a \u201cWhy you can\u2019t trust collagen research\u201d story. To be clear, both are reasonable reads of the literature, but the reporters would write different stories because of how the A.I. decided to order the same information. In each case, their writing would sail through a detector. Meanwhile, a reporter who did her own research but asked an A.I. 
to clean up her prose might get flagged. She was arguably the least influenced of the three, yet the moral intuitions of writers seem to be that she most betrayed the craft.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"116\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8ch002a3b7c8qz8ocbu@published\">I understand that moral intuition and agree that passing off A.I.-generated prose as one\u2019s own breaks the writer-reader contract. (I also acknowledge the aesthetic and moral revulsion of generative A.I., full stop.) But what many in journalism seem most concerned about is what\u2019s at stake in newsmaking, that the perspectives of A.I. models shape writing and thus public opinion. <a href=\"https:\/\/www.theatlantic.com\/culture\/2026\/03\/how-ai-creeping-new-york-times\/686528\/\" rel=\"nofollow noopener\" target=\"_blank\">A recent piece<\/a> in Thompson\u2019s Atlantic called this prospect \u201cterrifying\u201d and proposed a suite of solutions: disclosure policies, editor training on A.I. tells, detection software, penalties for violators. Every recommendation targets prose, while the upstream-influence problem, something the author herself seemed most concerned with, received no actionable attention at all. It\u2019s all about process, not about impact.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"96\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8ci002b3b7czkkiccpx@published\">If the goal is writerly independence, then drawing the red line at writing protects the independence of the final product far less than one might hope. Rather, it protects a feeling of independence. And to be clear, I\u2019m not saying that relying on A.I. for research is even bad. Yes, chatbots <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.aea3884\" rel=\"nofollow noopener\" target=\"_blank\">are unusually persuasive<\/a>, and <a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.adw5578\" rel=\"nofollow noopener\" target=\"_blank\">writers pick up<\/a> model biases without even knowing it, but the baseline isn\u2019t some platonic ideal of a perfectly objective journalist. The question is, how does A.I.-based research either reinforce or counteract the biases in information-gathering processes that journalists already use?<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"86\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmo1zv8ci002c3b7cn59ibha0@published\">And Thompson\u2019s red line happens to align perfectly not only with what companies like Pangram can now measure and sell, but also with what vigilantes can police on social media. This troubles me because A.I. detection, even at its best, is going to be a moving target. 
Build a culture of accusation on that foundation, and you get something not only brittle but perhaps even falsely reassuring: a system that comforts readers and writers that the A.I. problem has been solved while harder questions go unasked.<\/p>\n","protected":false},"excerpt":{"rendered":"Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to&hellip;\n","protected":false},"author":2,"featured_media":589849,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,223,174,9667,108,74],"class_list":{"0":"post-589848","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-books","12":"tag-internet","13":"tag-journalism","14":"tag-social-media","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/589848","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=589848"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/589848\/revis
ions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/589849"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=589848"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=589848"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=589848"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}