{"id":309551,"date":"2026-03-02T15:35:21","date_gmt":"2026-03-02T15:35:21","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/309551\/"},"modified":"2026-03-02T15:35:21","modified_gmt":"2026-03-02T15:35:21","slug":"superintelligence-is-already-here-today","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/309551\/","title":{"rendered":"Superintelligence is already here, today"},"content":{"rendered":"<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!PaoU!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc67e2d8b-6e4c-40e7-b90a-525b74e2d823_960x540.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2026\/03\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/c67e2d8b-6e4c-40e7-b90a-525b74e2d823_960x.jpeg\" width=\"716\" height=\"402.75\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/c67e2d8b-6e4c-40e7-b90a-525b74e2d823_960x540.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:540,&quot;width&quot;:960,&quot;resizeWidth&quot;:716,&quot;bytes&quot;:157000,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https:\/\/www.noahpinion.blog\/i\/189385888?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc67e2d8b-6e4c-40e7-b90a-525b74e2d823_960x540.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   fetchpriority=\"high\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>People argue back and forth about when artificial superintelligence will arrive. 
The truth is that it\u2019s already here. <\/p>\n<p>Go back a hundred years, and the popular notion of \u201cintelligence\u201d would probably include things like calculating speed and memorization. Then we invented computers, which could memorize and recall infinitely more things than we could, and do calculations infinitely faster. But we didn\u2019t want to call those capabilities \u201cintelligence\u201d, because we recognized that although they were very powerful, they were very narrow. So we started to use the word \u201cintelligence\u201d to refer to the things machines still couldn\u2019t do \u2014 various forms of pattern-matching, logical reasoning, communicating through natural language, and so on. <\/p>\n<p>Even before the invention of AI, though, computers were already participating in frontier research. <a href=\"https:\/\/en.wikipedia.org\/wiki\/Four_color_theorem\" rel=\"nofollow noopener\" target=\"_blank\">The four-color theorem<\/a> is a famously hard math problem that stumped humans until the 1970s, when some mathematicians <a href=\"https:\/\/home.adelphi.edu\/~bradley\/HOMSIGMAA\/Walters.pdf\" rel=\"nofollow noopener\" target=\"_blank\">used a computer to prove it<\/a>. The humans figured out that the theorem could be proven by brute force, just by checking a very large number of cases. So the computer did a mental task that humans couldn\u2019t, and the result was a scientific breakthrough. <\/p>\n<p>In the 2020s, we invented computer systems that could do most of the kinds of cognitive tasks that previously only humans could do. They can read, understand, and speak in human language. 
They can <a href=\"https:\/\/www.theatlantic.com\/technology\/2026\/02\/ai-math-terrance-tao\/686107\/\" rel=\"nofollow noopener\" target=\"_blank\">do mathematics<\/a>, which is really just a language with very formal rules (this means they can also <a href=\"https:\/\/www.science.org\/content\/article\/chatgpt-spits-out-surprising-insight-particle-physics\" rel=\"nofollow noopener\" target=\"_blank\">do theoretical physics<\/a>). They can recognize complex patterns of knowledge embedded in written text, and apply those patterns to produce <a href=\"https:\/\/www.theatlantic.com\/technology\/2026\/02\/ai-prediction-human-forecasters\/685955\/\" rel=\"nofollow noopener\" target=\"_blank\">actionable insights<\/a>. They can write software, because software is also just a language with formal rules. It turns out that all computers really needed in order to do all of this stuff was A) statistical regressions to identify patterns probabilistically, and B) a very large amount of computing power. <\/p>\n<p>This doesn\u2019t mean that AI can now do everything a human being can do. <a href=\"https:\/\/www.nber.org\/papers\/w34712?utm_campaign=ntwh&amp;utm_medium=email&amp;utm_source=ntwg14\" rel=\"nofollow noopener\" target=\"_blank\">Its intelligence is \u201cjagged\u201d<\/a> \u2014 there are still some things humans are better at. But this is also true of human beings\u2019 advantages over animals. Did you know that chimps are better than humans at <a href=\"https:\/\/www.science.org\/content\/article\/chimps-best-humans-game-theory\" rel=\"nofollow noopener\" target=\"_blank\">game theory<\/a> and have <a href=\"https:\/\/bigthink.com\/life\/chimpanzees-beat-humans\/\" rel=\"nofollow noopener\" target=\"_blank\">better working memory<\/a>? My rabbit can distinguish sounds much more sensitively than I can. If we were capable of creating business contracts with chimps and rabbits, we might even pay them for these services. 
Similarly, AI might not take all of humans\u2019 jobs. But no one in the world thinks that chimps\u2019 and rabbits\u2019 superiority on a narrow set of cognitive tasks means that humans \u201caren\u2019t truly intelligent\u201d. We are jagged general intelligences as well. <\/p>\n<p>Most of the benchmarks that aim to measure whether we\u2019ve achieved \u201cAGI\u201d \u2014 things like ARC-AGI and Humanity\u2019s Last Exam \u2014 focus on the kinds of tasks that computers couldn\u2019t do in 2021 \u2014 the abilities that gave humans our irreplaceable cognitive edge before AI came along, and made us highly complementary to computers. And most of the discussion around \u201cAGI\u201d is about when AI will surpass humans at everything. For example, Metaculus forecasters still think AGI lies in the future.<\/p>\n<p>This may be the most important question from an economic standpoint \u2014 i.e., whether we expect AI to replace human jobs or augment them. But if what we\u2019re talking about is domination of the planet\u2019s resources, and <a href=\"https:\/\/www.noahpinion.blog\/p\/you-are-no-longer-the-smartest-type\" rel=\"nofollow noopener\" target=\"_blank\">control of the destiny of life on Earth<\/a>, we don\u2019t actually need AI to be better at every cognitive task. Humans conquered the planet from animals despite having worse short-term memories than chimps and being worse at differentiating sounds than rabbits. <\/p>\n<p>In fact, I bet that if AI had A) permanent autonomy and long-term memory, B) highly capable robots, and C) end-to-end automation of the AI production chain, it could defeat humans and take control of Earth today. I might be wrong about that, but if so, I doubt I\u2019ll be wrong three or four years from now. In any case, if we decide we don\u2019t want to hand over control of the planet to an alien intelligence, we should think about restricting A) full autonomy, B) robots, or C) full automation of the AI production chain. 
<\/p>\n<p>That\u2019s a sidetrack from my real point, though, which is that AI, as it exists today, is already superintelligent. The reason is that AI can already do language and concepts and pattern recognition well enough, while also being able to do all the superhuman, fantastic, incredibly powerful things that a computer could do in 2021. <\/p>\n<p>Right now, today, AI can do mental tasks that no human can do. In a few minutes, it can read an entire scientific literature, and extract many of the basic conclusions and insights from that literature. No human can do that. A single human can be an expert in one or two complex subjects; an AI can be an expert in all of them at once. A human needs to eat and sleep and take breaks; an AI agent can work tirelessly at proving a theorem or writing code. And AI can prove theorems and write code \u2014 or write paragraphs of text \u2014 much, much faster than any human. <\/p>\n<p>These are all superhuman cognitive capabilities. They go far, far beyond anything that even the smartest human being can do. They are the result of combining the roughly human-level language ability, pattern recognition, and conceptual analysis of an LLM with the superhuman memory, speed, and processing power that computers already had before 2022. <\/p>\n<p>I don\u2019t want to get sidetracked here, but I think there\u2019s a nonzero chance that AI never gets much better than humans at most of the things that humans were better than computers at in 2021. It seems possible that humans are simply incredibly specialized in a few types of cognitive tasks \u2014 extracting patterns from sparse data, synthesizing various patterns into \u201cintuition\u201d and \u201cjudgement\u201d, and communicating those patterns in language \u2014 and that we\u2019ve basically approached the theoretical maximum in those narrow areas. 
<\/p>\n<p>That would explain why AI has gotten much better at things like math and coding and forecasting over the last year, but why the basic chatbot interface doesn\u2019t seem much more \u201cintelligent\u201d. It would also explain why when you talk to Terence Tao about math, it\u2019s like talking to a superhuman, but when you talk to him about where to get lunch or which movies are the best, he\u2019ll just sound like a fairly smart normal dude. AI will eventually get better than Tao at math, because it\u2019s a computer, and computers are inherently good at math \u2014 but it may never get much better than the most thoughtful, eloquent humans at deciding where to get lunch or recommending movies. It may simply not be mathematically possible to get much better than we already are at that sort of thing.<\/p>\n<p>In fact, this is what AI is basically like in Star Trek: The Next Generation, my favorite science fiction show of all time \u2014 and the one that I think best predicted modern AI. The show has two types of AGI \u2014 the ship\u2019s computer, which eventually creates superhuman sentience via the Holodeck, and Data, an android built to simulate human intelligence. Both the ship\u2019s computer and Data are approximately human-equivalent when it comes to taste, judgement, intuition, and conversational ability. But they are far superior when it comes to math, scientific modeling, and so on. <\/p>\n<p>It makes sense that the big differentiator between humans and AI would not be superior taste, judgement, and intuition, but things like computation speed and memory. Those are things humans are especially weak at, because we have very limited room in our little organic brains. It makes sense that humans would evolve to specialize in the type of thing we could get maximum leverage out of \u2014 recognizing and communicating patterns embedded in sparse data. 
And it makes sense that when we started automating cognitive tasks, we started out by going for the things we were weakest at, because those had the greatest marginal benefit. <\/p>\n<p>In other words, the advent of LLMs, reasoning chains, and agents may simply be a \u201clast mile\u201d event in terms of creating superhuman intelligence \u2014 filling in an essential gap that humans were previously specialized to fill. The biggest marginal gains of AI over human brains may always come from the pieces we already had in place before 2022 \u2014 the ability to scan a whole corpus of literature in seconds, to perform computations at lightning speed, and to hold vast amounts of information in working memory. <\/p>\n<p>This means that despite still being \u201cjagged\u201d and still being only human-equivalent on certain benchmarks, AI is ready to start pushing the boundaries of scientific research in a big, big way. <\/p>\n<p>Let\u2019s start with math, which AI is especially good at. The famous mathematician Paul Erd\u0151s <a href=\"https:\/\/www.erdosproblems.com\/\" rel=\"nofollow noopener\" target=\"_blank\">made around 1,179 conjectures<\/a>, about 41% of which have been solved. These are known as the Erd\u0151s Problems. They\u2019re not the hardest problems in math, or the most interesting. But they\u2019re obscure enough that no one had ever bothered to go solve them, so they represent novel mathematics. 
And in recent months, AI has <a href=\"https:\/\/www.scientificamerican.com\/article\/ai-uncovers-solutions-to-erdos-problems-moving-closer-to-transforming-math\/\" rel=\"nofollow noopener\" target=\"_blank\">begun solving Erd\u0151s Problems<\/a> \u2014 sometimes in cooperation with human mathematicians, but sometimes in an automatic, push-button sort of way:<\/p>\n<p>According to a webpage started by the mathematician Terence Tao, <a href=\"https:\/\/github.com\/teorth\/erdosproblems\/wiki\/AI-contributions-to-Erd%C5%91s-problems\" rel=\"nofollow noopener\" target=\"_blank\">AI tools have helped transfer about 100 Erd\u0151s problems into the \u201csolved\u201d column<\/a> since October. The bulk of this assistance has been a kind of souped-up literature search, as it was with Sawhney\u2019s initial success. But in many cases, LLMs have pieced together extant theorems\u2014often in dialogue with their mathematician prompters\u2014to form new or improved solutions to these niche problems. In at least two cases, an LLM was even able to construct an original and valid proof to one that had never been solved, with little input from a human.<\/p>\n<p>Some people have been quick to pooh-pooh this accomplishment, declaring that Erd\u0151s Problems are no big deal. But Terence Tao, widely acknowledged as the world\u2019s best mathematician, sees the potential. Here are some excerpts from <a href=\"https:\/\/www.theatlantic.com\/technology\/2026\/02\/ai-math-terrance-tao\/686107\/\" rel=\"nofollow noopener\" target=\"_blank\">his interview<\/a> with The Atlantic\u2019s Matteo Wong:<\/p>\n<p>In these Erd\u0151s Problems in particular, there\u2019s a small core of high-profile problems that we really want to solve, and then there\u2019s this long tail of very obscure problems. What AI has been very good at is systematically exploring this long tail and knocking off the easiest of the problems. But it\u2019s very different from a human style. 
Humans would not systematically go through all 1,000 problems and pick the 12 easiest ones to work on, which is kind of what the AIs are doing.<\/p>\n<p>And here is what Tao said in <a href=\"https:\/\/x.com\/rohanpaul_ai\/status\/2023321243018199278\" rel=\"nofollow\">a recent talk<\/a> about AI and math:<\/p>\n<p>To me, these advances show there is a complementary way to do mathematics. Humans traditionally work in small groups on hard problems for months, and we will keep doing that\u2026But we can also now set AI to scale: sweep a thousand problems and pick up all the low-hanging fruit. Figure out all the ways to match problems to methods. If there are 20 different techniques, apply them all to 1,000 problems and see which ones can be solved by these methods. This is the capability that is present today.<\/p>\n<p>Tao understands that automated research could help solve the herding problem in science. There are a limited number of human scientists, and they have a limited amount of time. They\u2019re highly motivated to work on things that interest them, and\/or on things that will get them fame if they succeed. This leads to an interesting version of the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Streetlight_effect\" rel=\"nofollow noopener\" target=\"_blank\">streetlight problem<\/a>; when the key scarce resource is the attention and effort of smart humans, lots of boring or seemingly incremental advances get overlooked. <\/p>\n<p>In mathematics, AI is just going to blaze through those boring or tedious or seemingly uninteresting problems. It\u2019s a computer \u2014 it\u2019s tireless, its memory and processing speed are essentially infinite, and it doesn\u2019t get bored. Here is <a href=\"https:\/\/x.com\/SebastienBubeck\/status\/1958198661139009862\" rel=\"nofollow\">another example<\/a> of a fully automated mathematics breakthrough that doesn\u2019t involve Erd\u0151s Problems. 
And <a href=\"https:\/\/openai.com\/index\/new-result-theoretical-physics\/?fbclid=PAdGRleAQFm_pleHRuA2FlbQIxMQBzcnRjBmFwcF9pZA8xMjQwMjQ1NzQyODc0MTQAAadtH6Qiqe7LSZsdSlFRyW7gnpYT_mj80J9Id9rulq6QSyAFNqELU5PcVk_NwQ_aem_169jhosLc-ZiK66k23yyOQ\" rel=\"nofollow noopener\" target=\"_blank\">here is an example from theoretical physics<\/a>, where AI showed that there can be a kind of particle interaction that physicists had assumed couldn\u2019t happen. <\/p>\n<p>Solving a huge number of minor problems might sound like small potatoes, but it\u2019s not. <a href=\"https:\/\/www.noahpinion.blog\/p\/china-has-invented-a-whole-new-way\" rel=\"nofollow noopener\" target=\"_blank\">China\u2019s innovation system<\/a> has already shown how a huge number of incremental results can add up to a big difference in a society\u2019s overall technology level. And occasionally one of those incremental results \u2014 some obscure theorem or method \u2014 will turn out to be useful for a big breakthrough or a more important problem. In fact, sometimes great discoveries happen entirely by accident \u2014 <a href=\"https:\/\/hsm.stackexchange.com\/questions\/18434\/where-did-this-quote-on-vectors-by-lord-kelvin-originate\" rel=\"nofollow noopener\" target=\"_blank\">no one knew what vectors were good for<\/a> when they were first invented, but linear algebra ended up being arguably the most useful form of math ever invented. <a href=\"https:\/\/www.xprize.org\/news\/ten-major-breakthroughs-that-were-happy-accidents\" rel=\"nofollow noopener\" target=\"_blank\">This happens in natural science<\/a> too \u2014 witness the discovery of penicillin, x-rays, insulin, or radioactivity. <\/p>\n<p>But that\u2019s only the beginning of how AI \u2014 not the AI of the future, but the technology that exists today \u2014 is going to accelerate science. Because AI is a computer, it can act as a tireless, incredibly fast, all-knowing research assistant. 
Here\u2019s Tao again:<\/p>\n<p>[O]ver the next few months, I think we\u2019re going to have all kinds of hybrid, human-AI contributions\u2026Today there are a lot of very tedious types of mathematics that we don\u2019t like doing, so we look for clever ways to get around them. But AIs will just happily blast through those tedious computations. When we integrate AI with human workflows, we can just glide over these obstacles\u2026We are basically seeing AIs used on par with the contribution that I would expect a junior human co-author to make, especially one who\u2019s very happy to do grunt work and work out a lot of tedious cases.<\/p>\n<p>This \u201cautomated research assistant\u201d is <a href=\"https:\/\/x.com\/kimmonismus\/status\/2021685565696225323\" rel=\"nofollow\">getting more incredible every day<\/a>:<\/p>\n<p>Google DeepMind has unveiled Gemini Deep Think\u2019s leap from Olympiad-level math to real-world scientific breakthroughs with their internal model &#8220;Aletheia&#8221;\u2026&#8220;Aletheia&#8221; autonomously solved open math problems (including four from the Erd\u0151s database), contributed to publishable papers, and helped crack challenges in algorithms, economics, ML optimization, and even cosmic string physics\u20262.5 years ago, chatbots weren\u2019t even able to solve simple math problems. <\/p>\n<p>&#8220;We are witnessing a fundamental shift in the scientific workflow. As Gemini evolves, it acts as a &#8220;force multiplier&#8221; for human intellect, handling knowledge retrieval and rigorous verification so scientists can focus on conceptual depth and creative direction. 
Whether refining proofs, hunting for counterexamples, or linking disconnected fields, AI is becoming a valuable collaborator in the next chapter of scientific progress.&#8221;<\/p>\n<p><a href=\"https:\/\/www.daniellitt.com\/blog\/2026\/2\/20\/mathematics-in-the-library-of-babel\" rel=\"nofollow noopener\" target=\"_blank\">Here\u2019s a long and very good post<\/a> by mathematician Daniel Litt on how AI is going to boost productivity in his field. Notably, he doesn\u2019t see full push-button automation of research coming soon, but instead sees AI as a massive productivity-booster.<\/p>\n<p>Math (and math-like fields like theoretical physics and theoretical economics) represents only one area of research, though; every field has different requirements. And in other fields, researchers are using AI to boost their capabilities in various ways. This is from <a href=\"https:\/\/x.com\/RaziaAliani\/status\/2020128285187776884\" rel=\"nofollow\">Razia Aliani\u2019s summary<\/a> of a Google paper describing some of these methods:<\/p>\n<p>In one case, the AI was used as an adversarial reviewer and caught a serious flaw in a cryptography proof that had passed human review. That\u2019s a very different use than \u201csummarise this PDF.\u201d\u2026<\/p>\n<p>The model links tools from very different fields (for example, using theorems from geometry\/measure theory to make progress on algorithms questions). This is where its wide reading really matters\u2026<\/p>\n<p>Humans still choose the problems, check every proof, and decide what\u2019s actually new. The model is there to suggest ideas, spot gaps, and do the heavy algebra\u2026In some projects, they plug Gemini into a loop where it\u2026proposes a mathematical expression\u2026writes code to test it\u2026reads the error messages, and\u2026fixes itself. 
(humans only step in when something promising appears)[.]<\/p>\n<p>Again, we see that AI\u2019s pure scientific reasoning ability is only up to that of a fairly smart human, but its computer-like abilities \u2014 speed, meticulousness, memory, and so on \u2014 make it superintelligent. <\/p>\n<p>And here\u2019s OpenAI <a href=\"https:\/\/x.com\/OpenAI\/status\/2019488071134347605\" rel=\"nofollow\">doing something similar<\/a> in biology:<\/p>\n<p>We worked with Ginkgo to connect GPT-5 to an autonomous lab, so it could propose experiments, run them at scale, learn from the results, and decide what to try next. That closed loop brought protein production cost down by 40%.<\/p>\n<p>Ole Lehmann <a href=\"https:\/\/x.com\/itsolelehmann\/status\/2019721685751288103\" rel=\"nofollow\">points out how incredible<\/a> and game-changing this is:<\/p>\n<p>The 40% cost reduction is amazing but still kind of undersells it\u2026The real number is the time compression\u2026A human researcher might test 20-30 combinations in a good month. This system tested 6,000 per iteration\u2026(Which is roughly 150 years of traditional lab work compressed into a few weeks, if you want to feel something about that)\u2026Drug discovery, materials science, synthetic biology, basically any field where the bottleneck is &#8220;we need to try thousands of things to find what works&#8221; just got its timeline crushed\u2026The second-order effects of this will be insane[.]<\/p>\n<p>Here\u2019s a post by Andy Hall, describing how he\u2019s using agentic AI to get a lot more done:<\/p>\n<p><a native=\"true\" href=\"https:\/\/freesystems.substack.com\/p\/the-100x-research-institution?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web\" rel=\"nofollow noopener\" class=\"embedded-post\" target=\"_blank\"><\/p>\n<p>The 100x Research Institution<\/p>\n<p>For the past few months, I\u2019ve been running an experiment that felt both thrilling and vaguely unsettling: could I automate myself? 
And what would that mean for the future of academic research like mine\u2026<\/p>\n<p>2 months ago \u00b7 56 likes \u00b7 8 comments \u00b7 Andy Hall<\/p>\n<p><\/a><\/p>\n<p>Even when AI can\u2019t be trusted to do much of the research process on its own, it can automate much of the grunt work of doing literature searches, checking results, writing papers, creating data presentations, and so on. Here is climate scientist Zeke Hausfather, describing a bunch of ways that AI has accelerated his own workflow:<\/p>\n<p><a native=\"true\" href=\"https:\/\/www.theclimatebrink.com\/p\/the-ai-augmented-scientist?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web\" rel=\"nofollow noopener\" class=\"embedded-post\" target=\"_blank\"><\/p>\n<p>The AI-Augmented Scientist<\/p>\n<p>I was reminded of Arthur C. Clarke\u2019s famous third law the other day, that \u201cany sufficiently advanced technology is indistinguishable from magic.\u201d I\u2019d recently gotten Claude Code set up on my computer, and was using it to help write the code for some reduced-complexity climate model\u2026<\/p>\n<p>6 days ago \u00b7 100 likes \u00b7 52 comments \u00b7 Zeke Hausfather<\/p>\n<p><\/a><\/p>\n<p>And here is economist John Cochrane, talking about how AI now checks his papers and makes helpful suggestions and finds errors:<\/p>\n<p><a native=\"true\" href=\"https:\/\/www.grumpy-economist.com\/p\/refine?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web\" rel=\"nofollow noopener\" class=\"embedded-post\" target=\"_blank\"><\/p>\n<p>I recently tried refine, an AI tool for refining academic articles, developed by Yann Calv\u00f3 L\u00f3pez and Ben Golub. I sent it the current draft of my booklet on inflation, to see what it can offer. I just used it once so far, with the free trial mode. I will be a regular user forever\u2026<\/p>\n<p>6 days ago \u00b7 124 likes \u00b7 28 comments \u00b7 John H. 
Cochrane<\/p>\n<p><\/a><\/p>\n<p>Even Terence Tao <a href=\"https:\/\/x.com\/slow_developer\/status\/2026009163860673005\" rel=\"nofollow\">found an error in one of his papers<\/a> using AI! <\/p>\n<p><a href=\"https:\/\/x.com\/oliviscusAI\/status\/2022614202062340384\" rel=\"nofollow\">Here\u2019s a Google tool<\/a> that will generate publication-ready scientific illustrations at the touch of a button. <a href=\"https:\/\/x.com\/nberpubs\/status\/2024574909662290197\" rel=\"nofollow\">Here\u2019s a software package<\/a> that will quantify the attributes of large qualitative datasets \u2014 something very useful for social science research. <a href=\"https:\/\/www.nature.com\/articles\/s42256-026-01188-x\" rel=\"nofollow noopener\" target=\"_blank\">Here\u2019s a paper<\/a> about how AI can enhance the quality of peer review. <a href=\"https:\/\/x.com\/GabeLenz\/status\/2026160377718022522\" rel=\"nofollow\">Here\u2019s Gabriel Lenz<\/a> describing how AI makes it much quicker and easier to write a data-heavy book. <\/p>\n<p>And remember, these are only the AI tools that exist today. Superintelligence is already here, thanks to AI\u2019s ability to combine human-level reasoning with the mental superpowers of a computer. But AI is improving by leaps and bounds every day. It may achieve superhuman reasoning ability soon. In math, I will be surprised if it doesn\u2019t. But even if not, agents\u2019 ability to handle long tasks, synthesize results, process vast and varied data, and extract insights from vast scientific literatures will likely be far better a couple of years from now than it is today. <\/p>\n<p>Is AI already supercharging science? That\u2019s not clear yet. <a href=\"https:\/\/x.com\/SolomonMg\/status\/2026047946890822104\" rel=\"nofollow\">Publications are way up<\/a>, and scientists who use AI have experienced <a href=\"https:\/\/x.com\/krisgulati\/status\/2005069573255823631\" rel=\"nofollow\">a huge bump in productivity<\/a>. 
A lot of this content <a href=\"https:\/\/x.com\/littmath\/status\/2005651730319781933\" rel=\"nofollow\">seems to be low-quality slop<\/a> so far, so there\u2019s an open question of whether AI-generated content will overwhelm the existing review process. Unscrupulous scientists can also jailbreak AI models and <a href=\"https:\/\/x.com\/ahall_research\/status\/2024544040784720365\" rel=\"nofollow\">have them p-hack their way<\/a> to spurious results. But in a few months, and certainly in a few years, I think it\u2019ll be clear that AI has been a game-changer.<\/p>\n<p>A lot of people who think about the risks of superintelligence \u2014 and <a href=\"https:\/\/www.noahpinion.blog\/p\/updated-thoughts-on-ai-risk\" rel=\"nofollow noopener\" target=\"_blank\">those risks are very real<\/a> \u2014 ask what the upside is. Why would we invent a technology that has the capability to end human civilization? What might we get that could possibly justify that risk?<\/p>\n<p>I don\u2019t know where the cost\/benefit calculation lies. But I\u2019m pretty sure that the #1 answer to this question is better science. Before AI showed up, scientific discovery was hitting a wall \u2014 with much of the Universe\u2019s low-hanging fruit already picked, ideas were <a href=\"https:\/\/web.stanford.edu\/~chadj\/IdeaPF.pdf\" rel=\"nofollow noopener\" target=\"_blank\">getting more expensive to find<\/a>, and required research manpower that the human race simply <a href=\"https:\/\/web.stanford.edu\/~chadj\/emptyplanet.pdf\" rel=\"nofollow noopener\" target=\"_blank\">was not producing at sufficient scale<\/a>. <\/p>\n<p>Now, thanks to the invention of superintelligence and the supercharging of scientific productivity, we will be able to break through that wall. Fantastic sci-fi materials, robots that can do anything we want, and therapies that can cure any disease are just the beginning. 
There is a whole lot left to discover about this Universe, and thanks to superintelligence, a lot more of it is going to get discovered.<\/p>\n<p>I just hope humans will still be around to see that future. <\/p>\n","protected":false},"excerpt":{"rendered":"People argue back and forth about when artificial superintelligence will arrive. The truth is that it\u2019s already here.&hellip;\n","protected":false},"author":2,"featured_media":309552,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[365,363,364,111,139,69,145],"class_list":{"0":"post-309551","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-new-zealand","12":"tag-newzealand","13":"tag-nz","14":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/309551","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v
2\/comments?post=309551"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/309551\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/309552"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=309551"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=309551"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=309551"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}