{"id":539284,"date":"2026-03-22T21:47:08","date_gmt":"2026-03-22T21:47:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/539284\/"},"modified":"2026-03-22T21:47:08","modified_gmt":"2026-03-22T21:47:08","slug":"a-year-ago-a-mathematician-bet-ai-wouldnt-affect-him-now-he-thinks-he-lost-that-bet","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/539284\/","title":{"rendered":"A year ago, a mathematician bet AI wouldn&#8217;t affect him; now he thinks he lost that bet"},"content":{"rendered":"<p>\n\t\t\t\t\tHe bet his field was safe from AI; a year later, he is bracing to pay up. What rattled him wasn&#8217;t a single dazzling proof, but a swelling tide of almost-right mathematics that could swamp how we tell truth from trickery.\t\t\t\t<\/p>\n<p>At the University of Toronto, mathematician Daniel Litt watched AI lurch from fumbling prompts in AI Dungeon to producing a tidy proof of Fermat\u2019s little theorem in just two years. In early 2025 he staked a 3-to-1 bet with Tamay Besiroglu that machines would still trail top mathematicians by 2030. Now he expects to lose. Beyond the wager, he fears models like ChatGPT and Claude could flood the field with plausible, unvetted results, turning verification into a Borges-style search for truth in an endless library.<\/p>\n<p>A mathematician\u2019s gamble on AI<\/p>\n<p>At the start of 2025, Daniel Litt, a University of Toronto mathematician, drew a line in the sand. He bet that by 2030, artificial intelligence would still fall short of the best human minds in math. His confidence rested on years of underwhelming trials with systems like GPT-3. Yet the ground has shifted quickly, and his certainty has faded.<\/p>\n<p>A history of underestimation<\/p>\n<p>Litt\u2019s curiosity dates to 2020, when GPT-3\u2019s sudden fame pulled him in. Testing it through AI Dungeon, he asked for proofs and got fluff. Then, in 2022, a crack of light: GPT-3 produced a correct proof of Fermat\u2019s little theorem.
The change was striking. It didn\u2019t make AI a mathematician, but it began to unsettle an otherwise sturdy skepticism (as he later reflected on his blog).<\/p>\n<p>A bet he might lose<\/p>\n<p>By March 2025, Litt formalized his view with a bet against Tamay Besiroglu, co-founder of Mechanize. He offered 3-to-1 odds that AI wouldn\u2019t autonomously produce research-level math, comparable to top 2025 papers, at a human-like cost by 2030. With rapid advances in ChatGPT and Claude, he now says he expects to lose.<\/p>\n<p>AI\u2019s threat to mathematical verification<\/p>\n<p>Litt worries less about competition and more about verification. Models can generate torrents of plausible mathematics, creating a Borges-like \u201cLibrary of Babel\u201d for proofs. The risk isn\u2019t just noise; it\u2019s the blend of the profound and the wrong, written in the same voice. Academia, he argues, is unprepared for the labor required to separate signal from shimmer.<\/p>\n<p>Volume: an explosion of candidate results that outpaces human review cycles.<br \/>\nPlausibility: fluent arguments that look correct but conceal subtle errors.<br \/>\nVerification cost: expert time consumed checking AI work, not advancing ideas.<\/p>\n<p>What\u2019s at stake for human reasoning<\/p>\n<p>Beyond workflows, Litt sees a cultural risk. If we ask a model first, we may gradually forget how to wrestle with the unknown. Recent systems reason impressively over familiar ground, yet struggle to acquire fresh expertise the way humans do (by poking, failing, and refining). The danger is a slow outsourcing of thought, with verification replacing exploration as the human task.<\/p>\n<p>He doesn\u2019t call this irreversible, but the near term looks costly. The anxiety is not about whether AI can find answers\u2014it often can\u2014but about who keeps the craft of finding them. 
Mathematics, after all, is also the journey: the hours spent stuck, the spark of a new idea, and the stubborn clarity that follows. This is the case, Litt insists, even now.<\/p>\n","protected":false},"excerpt":{"rendered":"He bet his field was safe from AI; a year later, he is bracing to pay up. What&hellip;\n","protected":false},"author":2,"featured_media":539285,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-539284","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/539284","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=539284"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/539284\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/539285"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=539284"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=539284"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=539284"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}