{"id":441672,"date":"2026-01-27T23:18:17","date_gmt":"2026-01-27T23:18:17","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/441672\/"},"modified":"2026-01-27T23:18:17","modified_gmt":"2026-01-27T23:18:17","slug":"yann-lecun-on-artificial-general-intelligence-and-the-digital-commons","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/441672\/","title":{"rendered":"Yann LeCun On Artificial General Intelligence And The Digital Commons"},"content":{"rendered":"<p><img decoding=\"async\" class=\" top-image\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2026\/01\/1769555897_893_0x0.jpg\" alt=\"yann1\" data-height=\"816\" data-width=\"1224\" fetchpriority=\"high\" style=\"position:absolute;top:0\"\/><\/p>\n<p>Yann LeCun being interviewed by John Werner at Imagination in Action, Davos Switzerland<\/p>\n<p>Patrick Tighe<\/p>\n<p>As January comes to an end, many of us who attended the annual summit at Davos are pondering next steps, considering the context of AI today, and still trying to parse the interactions between us humans and the ever-evolving AI agents that will accommodate us, inspire us, rival us, and generally make us re-evaluate our place in the world. I interviewed Yann LeCun at our annual Imagination in Action event (I put this event together, it\u2019s free to attend, and it\u2019s designed to foster discussions of timely, important topics). The result was an eye-opening series of revelations about how artificial intelligence research is changing, and what it might lead to relatively soon. <\/p>\n<p>Getting Realistic About AGI<\/p>\n<p>First, do we now \u201chave AGI?\u201d<\/p>\n<p>Speaking on the prospect of artificial general intelligence, LeCun suggested it\u2019s a misnomer, because human intelligence, in his view, is not general. He prefers the term \u201chuman-level intelligence,\u201d and while he acknowledged that we are approaching this type of AI, he said we\u2019re not likely to see it this year, or next year. 
<\/p>\n<p>\u201cWe need a few conceptual breakthroughs,\u201d LeCun said, explaining the deficits of today\u2019s LLMs in more detail. The gist of his argument was this: although there are absolutely reasons to hype today\u2019s LLMs as intelligent, we have to remember that humans still have the edge in knowing how to navigate the physical world. LeCun spoke rather pointedly about this, explaining that although LLMs can do a lot of intellectual work, they don\u2019t have the world knowledge to rival humans at many aspects of life. In other words, they\u2019re book-smart, but not street-smart. <\/p>\n<p>LeCun put it this way:<\/p>\n<p>\u201cIf you want intelligent behavior, you need a system to be able to anticipate what&#8217;s going to happen in the world, and also predict the consequences of its actions. If you can do this, then it can plan a sequence of actions to arrive at a particular objective. And that&#8217;s what&#8217;s missing. That&#8217;s the concept of a world model. You\u2019re not going to get intelligent behavior without that.\u201d<\/p>\n<p>He pointed to the example of autonomous vehicles, which I thought was a good move.<\/p>\n<p>\u201cWe have millions of hours of training data to train autonomous cars, and we still don&#8217;t have level five autonomous driving (capability),\u201d he noted. \u201cSo this tells you (that) the basic architecture is not there.\u201d<\/p>\n<p>In response to this fundamental lack of real-world knowledge, LeCun suggested a \u201cphysical AI revolution\u201d is coming. But challenges remain. <\/p>\n<p>\u201cUnfortunately, the real world is messy,\u201d he said. \u201cSensory data is high-dimensional, continuous, noisy, and generative architectures do not work with this kind of data. So the type of architecture that we use for LLMs and generative AI does not apply to the real world. The next revolution of AI, which is coming fast, is going to be AI systems that understand the real world. 
Systems that understand high-dimensional, continuous noisy data like video, like sensor data. Systems that can build predictive models of how their environment is going to evolve, and what their effect on the environment is. Systems that can plan, they can reason at the core level. Systems that are controllable and safe, so that you give them a task, and they accomplish it.\u201d<\/p>\n<p>Yann LeCun being interviewed at Imagination in Action, Davos Switzerland<\/p>\n<p>John Werner<\/p>\n<p>But What About AI Agents?<\/p>\n<p>LeCun also addressed the boom in \u201cagentic AI,\u201d still contending that we will not reach human-level intelligence by building agents on LLMs, an approach that he called \u201ca disaster.\u201d<\/p>\n<p>\u201cHow can a system possibly plan a sequence of actions if it can&#8217;t predict the consequences of its actions?\u201d he asked rhetorically. \u201cSo if you want intelligent behavior, you need a system to be able to anticipate what&#8217;s going to happen in the world, and also predict the consequences of its actions. If you can do this, and it can plan a sequence of actions to arrive at a particular objective \u2026 that&#8217;s what&#8217;s missing.\u201d<\/p>\n<p>Advanced Machine Intelligence<\/p>\n<p>Recently, LeCun made headlines by announcing his own business, Advanced Machine Intelligence, so I asked him what the company wants to do, and how long it may take. <\/p>\n<p>The goal, he explained, is building systems that can work intelligently off of these world models. <\/p>\n<p>\u201cIf you have such a world model (and resulting system) you can plan a sequence of actions to accomplish a task,\u201d he said, citing a vision paper he wrote on this subject, and talks from 2022 that are online. 
\u201cWe have systems now that we can train, completely self-supervised on unlabeled videos, and those systems understand video, represent it really well, can predict missing parts in a video\u2026 they also have acquired a certain sense of common sense.\u201d<\/p>\n<p>For example:<\/p>\n<p>\u201cIf you show (these types of models) a video where something impossible happens, they tell you \u2018this is impossible,\u2019\u201d he continued. \u201cYou throw a ball in the air, and the ball stops, or it disappears; the system says \u2018no, this is completely incompatible with what I&#8217;ve observed during my training.\u2019\u201d<\/p>\n<p>You can imagine how impressive this type of thing would be, and how it represents a radical departure from, say, your garden-variety chatbot, which, in comparison, just seems like a digital parrot. <\/p>\n<p>The foundation for this is something LeCun pioneered called JEPA (Joint Embedding Predictive Architecture), and it\u2019s a work in progress. <\/p>\n<p>\u201cWe already have prototypes that work, but we want to generalize the methodology so that it applies to any modality, any data, any sensor data,\u201d he said. \u201cSo then we can build, from data, phenomenological models of complex systems \u2026 an industrial process of any kind, manufacturing process, chemical plant, a turbo jet engine, a whole airplane, perhaps, you know, chemical reactions, a living cell. Everything in the world is complicated, because it&#8217;s an emerging collective phenomenon of really complex systems, and we can only build (limited) models of those things.\u201d<\/p>\n<p>Digital Twinning, and the Laplace Demon<\/p>\n<p>I had already been thinking that the above approach sounds a lot like extraordinarily complex digital twinning, but LeCun suggested there\u2019s a level of abstraction that we have to factor in. 
The next part of the interview became fairly profound, as he compared the idea of simulating everything in an extremely complex system to coming at things with more of a diagnostic and targeted view. <\/p>\n<p>\u201cThe way we can understand what&#8217;s taking place right now in this room is through psychology, maybe a little bit of science, you know, things like that,\u201d LeCun explained. \u201cNot at the level of quantum field theory, or particle physics, or atomic physics, or molecules, or proteins or \u2026 cells or organisms.\u201d<\/p>\n<p>Is Alignment the Right Frame? <\/p>\n<p>Another thing I asked LeCun about was AI alignment, the frantic effort by companies and researchers to direct AI in appropriate ways. <\/p>\n<p>The bottom line, he suggested, is that tomorrow\u2019s systems will be different, and our impression that we\u2019ll be working with dressed-up LLMs as human-like entities is misguided. LeCun noted:<\/p>\n<p>\u201cIf you imagine that future AI systems that have humanlike intelligence will be LLMs, which of course is not going to happen, you say, \u2018Oh my god, that&#8217;s going to be dangerous.\u2019&#8221; <\/p>\n<p>If, on the other hand, you think of these future systems as world-responders that are objective-driven and smarter in specific ways, you can see that the problem will, largely, be solved. <\/p>\n<p>The Digital Commons<\/p>\n<p>Another point that LeCun spoke about at length is the need for open systems and open research.<\/p>\n<p>In building \u201cpredictive architecture,\u201d and developing context capabilities for AI agents, he suggested, we need to apply the open source philosophy, a \u201cconsortium\u201d approach, and not a set of walled towers. 
LeCun noted how the best open source models are often Chinese, and how stakeholders use these open models to innovate.<\/p>\n<p>He was also pretty assertive about using a \u201cbottom up, not top down\u201d approach to AI, which cuts against the grain of the industrial views of the twentieth century (and certainly, of the feudal centuries before it) \u2013 LeCun argues we need a new framework for the new millennium, one where knowledge is not siloed, and humanity works together for the common good.<\/p>\n<p>Some Challenges, and Solutions<\/p>\n<p>Later, LeCun went over some of the potential pitfalls of this important change, like the concentration of power, and the potential for human misuse of AI.<\/p>\n<p>\u201cThe most important risk of AI is that in the near future, where our entire digital diet will be mediated by AI systems, if those AI systems come from a handful of proprietary companies on the west coast of the U.S. or China, we&#8217;re in big trouble for the health of democracy, cultural diversity, linguistic diversity, value systems,\u201d he said. 
\u201cSo we need a highly diverse population of AI assistants, for the same reason we need diversity in the press, and that can only happen with open source.\u201d<\/p>\n<p>The open communities that LeCun is asking for are in play, for example, in academia: check out this <a href=\"https:\/\/drexel.edu\/provost\/ai\/digital-commons\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" data-ga-track=\"ExternalLink:https:\/\/drexel.edu\/provost\/ai\/digital-commons\" aria-label=\"example from Drexel\">example from Drexel<\/a>, where planners are trying to build this type of open forum.<\/p>\n<p>\u201cThe Digital Commons is designed to foster an open, supportive, and collaborative environment where faculty and professional staff at Drexel can explore and share the creative ways they are using AI,\u201d spokespersons write.<\/p>\n<p>We have this kind of mentality happening at MIT, too, where the Media Lab and other offices are building the connective tissue through which AI work can benefit from broad collaboration.<\/p>\n<p>You can watch the rest of LeCun\u2019s interview <a href=\"https:\/\/www.youtube.com\/watch?v=x-ifTBivhLE\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" data-ga-track=\"ExternalLink:https:\/\/www.youtube.com\/watch?v=x-ifTBivhLE\" aria-label=\"here\">here<\/a>, or navigate over to YouTube. The central idea, the idea of open source tech and open research, is driving a generation of innovators who understand that this is the key to egalitarian outcomes in the twenty-first century. Let\u2019s keep that in mind as we move forward, and also stay open to ideas about objective-driven models that can understand the world around them. 
<\/p>\n<p>Yann LeCun being interviewed at Imagination in Action, Davos Switzerland<\/p>\n<p>Patrick Tighe<\/p>\n","protected":false},"excerpt":{"rendered":"Yann LeCun being interviewed by John Werner at Imagination in Action, Davos Switzerland Patrick Tighe As January comes&hellip;\n","protected":false},"author":2,"featured_media":441673,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,87119,10839,105],"class_list":{"0":"post-441672","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-enterprise-tech","14":"tag-policy","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/441672","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=441672"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/441672\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/441673"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=441672"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=441672"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=441672"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","temp
lated":true}]}}