{"id":315341,"date":"2025-11-29T00:03:14","date_gmt":"2025-11-29T00:03:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/315341\/"},"modified":"2025-11-29T00:03:14","modified_gmt":"2025-11-29T00:03:14","slug":"two-gen-zers-turned-down-millions-from-elon-musk-to-build-an-ai-based-on-the-human-brain-and-its-outperformed-models-from-openai-and-anthropic","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/315341\/","title":{"rendered":"Two Gen Zers turned down millions from Elon Musk to build an AI based on the human brain\u2014and it&#8217;s outperformed models from OpenAI and Anthropic"},"content":{"rendered":"<p>Two years ago, a pair of 22-year-old friends who met in high school in Michigan found themselves sitting inside Tsinghua University\u2019s brain lab in Beijing, staring down a multimillion-dollar offer from Elon Musk.<\/p>\n<p>The two had just done something unusual for the moment: they built a small large-language model (LLM) trained not on massive internet data dumps, but on a tiny, carefully chosen set of high-quality conversations. And they taught it to improve itself using reinforcement learning (RL), a technique where a model learns the way a person or animal does: by making decisions, receiving feedback, and then refining behavior through rewards and penalties.<\/p>\n<p>At the time, almost no one was doing this with language models. 
The only other group exploring RL for LLMs was DeepSeek, the Chinese OpenAI competitor that would later <a aria-label=\"Go to https:\/\/fortune.com\/2025\/01\/28\/donald-trump-says-deepseek-should-be-a-wake-up-call-for-u-s-tech-giants-and-sees-bombshell-china-ai-advances-as-a-good-thing\/\" class=\"\" href=\"https:\/\/fortune.com\/2025\/01\/28\/donald-trump-says-deepseek-should-be-a-wake-up-call-for-u-s-tech-giants-and-sees-bombshell-china-ai-advances-as-a-good-thing\/\" rel=\"nofollow noopener\" target=\"_blank\">terrify Silicon Valley.<\/a><\/p>\n<p>The two students, William Chen and Guan Wang, called their model OpenChat, and they open-sourced it on a whim.<\/p>\n<p>To their shock, OpenChat blew up.<\/p>\n<p>\u201cIt got very famous,\u201d Chen told Fortune. Researchers at Berkeley and Stanford pulled the code, built on top of it, and began citing the work. In academic circles, it became one of the earliest examples of how a small model trained on good data, as opposed to more data, could punch above its weight.<\/p>\n<p>Then it landed somewhere Chen never expected: Elon Musk\u2019s inbox.<\/p>\n<p>Musk sent an email through xAI, then his new company, seeking to recruit the students with a multimillion-dollar pay package, Chen says. It was the kind of offer young founders dreamed of.\u00a0<\/p>\n<p>They hesitated. Then, they turned it down.<\/p>\n<p>\u201cWe decided that large-language models have their limitations,\u201d Chen said. 
\u201cWe want a new architecture that will overcome the structural limitation of [large-scale machine learning].\u201d<\/p>\n<p>Instead of taking the deal, they left the comfortable momentum of OpenChat behind and pursued something far more ambitious: a \u201cbrain-inspired\u201d reasoning system they believed could outperform current AI models.<\/p>\n<p>That decision would lead, two years later, to Sapient Intelligence \u2014 and to a model that outperformed some of the world\u2019s biggest AI systems on tests of abstract reasoning. They are confident their model is going to be the first to achieve \u201cAGI,\u201d or artificial general intelligence, the so-called holy grail in AI research where a machine\u2019s intelligence can match or surpass that of a human in any cognitive task.<\/p>\n<p>Between the two worlds of the arms race<\/p>\n<p>Chen\u2019s path to turning down Musk didn\u2019t begin in Beijing, but in Bloomfield Hills, Michigan, and with a childhood obsession that drove his parents crazy.<\/p>\n<p>\u201cWhen I was young, I would break things apart and never put them back together,\u201d he said. \u201cThat\u2019s what got me started.\u201d<\/p>\n<p>Chen was born in China, raised partly in San Diego and Shenzhen, and eventually sent to attend Cranbrook Schools \u2014 a prestigious private boarding school in Michigan \u2014 around the time he met Wang, a boy his age who attended a different school but had an equally unusual obsession.<\/p>\n<p>On the first day they met, the two fell into a long conversation about what Chen calls their \u201cmetagoals,\u201d the ultimate purpose of their lives.<\/p>\n<p>For Wang, that metagoal was AGI, long before the term became popular. He described it in high school as an \u201calgorithm that solves any problem,\u201d since the terminology didn\u2019t exist yet. 
Chen\u2019s metagoal was different but complementary: optimizing everything, from engineering problems to real-world systems.<\/p>\n<p>\u201cIt was an instant alignment,\u201d Chen said.\u00a0<\/p>\n<p>Today, the two still ask every single person they hire what their metagoals are.\u00a0<\/p>\n<p>Chen founded the school\u2019s drone club, petitioned administrators to let students fly quadcopters on campus, and spent hours tinkering in robotics labs. The two were the kids who stayed late, broke hardware, and kept experimenting.<\/p>\n<p>\u201cIt was a great time,\u201d Chen said.\u00a0<\/p>\n<p>When college admissions rolled around, Chen was accepted to Carnegie Mellon and Georgia Tech \u2014 the obvious, prestigious paths for a gifted robotics student. Wang, meanwhile, had been admitted to <a aria-label=\"Go to https:\/\/fortune.com\/2025\/11\/19\/us-china-ai-race-higher-education-tsinghua-university-outpacing-ivy-league-mit-harvard-stanford\/\" class=\"\" href=\"https:\/\/fortune.com\/2025\/11\/19\/us-china-ai-race-higher-education-tsinghua-university-outpacing-ivy-league-mit-harvard-stanford\/\" rel=\"nofollow noopener\" target=\"_blank\">Tsinghua University, China\u2019s elite engineering powerhouse<\/a>, often described as \u201cChina\u2019s MIT.\u201d<\/p>\n<p>Chen visited the Beijing campus, toured the labs, and made a decision few American high schoolers would: He followed Wang to Tsinghua.\u00a0<\/p>\n<p>The transition wasn\u2019t easy. The coursework was intense, and the two struggled, even flunking some classes.<\/p>\n<p>\u201cMost of the Chinese kids are really \u2014 I hate to be stereotypical \u2014 but they\u2019re really good at studying,\u201d Chen laughed. \u201cThey\u2019re really sharp.\u201d<\/p>\n<p>Still, he was surprised by how supportive his professors were once they learned what he and Wang were building.<\/p>\n<p>\u201cThey were like, \u2018Hey, I know this thing you\u2019re trying to make \u2014 it\u2019s a very good thing. 
I actually believe in the concept of AGI,\u2019\u201d he said.<\/p>\n<p>By then, nearly everyone in Tsinghua\u2019s Brain Cognition and Brain-Inspired Intelligence Lab knew what the two undergraduates were attempting: a new approach to machine intelligence that challenged the dominant assumptions of the field.<\/p>\n<p>A 3 a.m. breakthrough<\/p>\n<p>It was at Tsinghua\u2019s brain lab that they developed the Hierarchical Reasoning Model (HRM), the architecture they believe can surpass transformers entirely.<\/p>\n<p>If OpenChat was their proof of concept, HRM was the moonshot they had been building towards. And the moment it proved itself came, appropriately, in the dead of night.<\/p>\n<p>At 3 a.m. one morning in June this year, Chen and Wang stared at the benchmark <a aria-label=\"Go to https:\/\/arxiv.org\/pdf\/2506.21734\" class=\"\" href=\"https:\/\/arxiv.org\/pdf\/2506.21734\" rel=\"nofollow noopener\" target=\"_blank\">results<\/a> returned by their small experimental model. 
Their tiny HRM prototype \u2014 just 27 million parameters, microscopic compared to GPT-4 or Claude \u2014 was <a aria-label=\"Go to https:\/\/finance.yahoo.com\/news\/sapient-intelligence-open-sources-hierarchical-165502989.html\" class=\"\" href=\"https:\/\/finance.yahoo.com\/news\/sapient-intelligence-open-sources-hierarchical-165502989.html\" rel=\"nofollow noopener\" target=\"_blank\">outperforming<\/a> systems from OpenAI, Anthropic, and DeepSeek on tasks designed specifically to measure reasoning.<\/p>\n<p>It solved Sudoku-Extreme, found optimal paths through 30\u00d730 mazes, and achieved startlingly high performance on the <a aria-label=\"Go to https:\/\/arcprize.org\/arc-agi\" class=\"\" href=\"https:\/\/arcprize.org\/arc-agi\" rel=\"nofollow noopener\" target=\"_blank\">ARC-AGI benchmark<\/a> \u2014 all without chain-of-thought prompting or brute-force scaling.<\/p>\n<p>\u201cIt was crazy,\u201d Chen said. \u201cJust with a change in the architecture, it gave the model a lot of what we call reasoning depth.\u201d\u00a0<\/p>\n<p>Unlike a transformer, which predicts the next word based on statistical patterns, HRM uses a two-part recurrent structure modeled loosely on how the human brain mixes slow, deliberate thought with fast reflexive reactions. The system can plan, dissect problems, and reason using internal logic rather than imitation. \u201cIt\u2019s not guessing,\u201d Chen said. \u201cIt\u2019s thinking.\u201d<\/p>\n<p>Chen says their models hallucinate far less than traditional LLMs and already match state-of-the-art performance in time-series forecasting tasks like weather prediction, quantitative trading, and medical monitoring.<\/p>\n<p>They are now working on scaling HRM into a general-purpose reasoning engine, with a simple but radical thesis: that AGI won\u2019t come from bigger transformers, but from smaller, more efficient architectures. 
Today\u2019s frontier models are massive \u2014 in some cases, hundreds of billions of parameters \u2014 but even their creators admit they struggle with reasoning, planning, and multi-step problem decomposition, Chen said.\u00a0<\/p>\n<p>He believes that limitation is structural, not temporary.<\/p>\n<p>\u201cYou can stack more layers,\u201d he says. \u201cBut you\u2019re still hitting the limits of a probability model.\u201d<\/p>\n<p>Sapient is now preparing to open a U.S. office within the next month, raise additional funding, and possibly change its name as it begins deploying the second version of its model. The founders believe continuous learning \u2014 the ability for a model to absorb new experiences safely, without retraining from scratch \u2014 is the next major frontier.\u00a0<\/p>\n<p>\u201cAGI is the holy grail of AI,\u201d Chen says. And he expects it to emerge in the next decade.\u00a0<\/p>\n<p>\u201cOne day, we\u2019re going to have an AI that\u2019s smarter than humans,\u201d Chen said. \u201cGuan and I always say it\u2019s like Pandora\u2019s box: if we\u2019re not going to make it, someone else will. 
So we hope that we\u2019re going to be the first one to make that happen.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Two years ago, a pair of 22-year-old friends who met in high school in Michigan found themselves sitting&hellip;\n","protected":false},"author":2,"featured_media":315342,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,3686,512,13255,3120,2565,14916,26024,105,13324],"class_list":{"0":"post-315341","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-beijing","14":"tag-china","15":"tag-colleges-and-universities","16":"tag-elon-musk","17":"tag-machine-learning","18":"tag-silicon-valley","19":"tag-start-up","20":"tag-technology","21":"tag-x"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/315341","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=315341"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/315341\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/315342"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=315341"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=315341"},{"taxonomy":"post_tag","embeddable":true,"href":"https:
\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=315341"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}