{"id":79065,"date":"2025-10-16T19:28:08","date_gmt":"2025-10-16T19:28:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/79065\/"},"modified":"2025-10-16T19:28:08","modified_gmt":"2025-10-16T19:28:08","slug":"mit-pushes-ai-toward-self-learning-with-seal-framework","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/79065\/","title":{"rendered":"MIT Pushes AI Toward Self-Learning With SEAL Framework"},"content":{"rendered":"<p>Until now, large language models have relied on human-led retraining to adjust their reasoning and update the parameters that shape their understanding. Once deployed, their weights remain static, they can process new information but cannot internalize it, a limitation that could keep these models reactive when business decisions demand real-time adaptation.<\/p>\n<p>Researchers at the Massachusetts Institute of Technology have developed the <a class=\"editor-rtfLink\" href=\"https:\/\/arxiv.org\/pdf\/2506.10943\" target=\"_blank\" rel=\"noopener nofollow\">Self-Adapting Language Models (SEAL) framework<\/a> to address that limitation. SEAL helps artificial intelligence (AI) systems get updated and adjust automatically, reducing reliance on manual retraining and improving how models learn from new information.<\/p>\n<p>The Problem With Fixed Knowledge<\/p>\n<p>Large language models have transformed how quickly organizations can find and interpret information. Systems such as <a class=\"editor-rtfLink\" href=\"https:\/\/openai.com\/index\/introducing-gpt-5-for-developers\/\" target=\"_blank\" rel=\"noopener nofollow\">GPT-5<\/a>, Claude 3.5, and Gemini 2.0 can retrieve the Federal Reserve\u2019s latest policy statement or a company\u2019s earnings report in seconds and summarize the key points with impressive accuracy.<\/p>\n<p>That capability depends on retrieval, a process that allows a model to look up relevant data without changing how it reasons. 
Retrieval tells a system where to find information but not how to update its understanding based on what it learns. Once the task ends, the model\u2019s internal logic, stored in billions of parameters or weights, remains unchanged.<\/p>\n<p>In contrast, updating weights is more like being told, \u201cHere is new information or a new way to think; update your understanding so you can answer questions that are slightly different, or even completely different, yet structurally similar.\u201d Updating weights lets a model connect new information to what it already knows, helping it understand implications rather than just isolated facts.<\/p>\n<p>Imagine a model used for loan approvals. Retrieval lets it pull the latest credit reports or policy updates before making a decision. But if new guidelines redefine what counts as a high-risk borrower, the model can read the update yet still evaluate applications using outdated thresholds. A model that updates its weights continuously is likely to infer such a change and adjust its reasoning automatically for future applications.<\/p>\n<p>Retrieval keeps a model informed, while weight updates make it adaptive. That is the gap the SEAL framework aims to close by exploring whether models can refine their understanding automatically when they encounter new information.<\/p>\n<p>How SEAL Works<\/p>\n<p>SEAL introduces a training loop that allows a model to generate its own learning instructions. The model writes what MIT calls self-edits, or short written explanations of what new material it wants to learn and how it should adjust its reasoning. 
It then generates example data to test those changes and keeps only the updates that improve its performance.<\/p>\n<p>MIT tested SEAL on <a class=\"editor-rtfLink\" href=\"https:\/\/www.llama.com\/\" target=\"_blank\" rel=\"noopener nofollow\">Meta\u2019s Llama model<\/a>, an open-weight system that lets researchers observe how parameter updates affect results. Open models like Llama make it possible to measure how self-directed learning works, something that cannot yet be done with closed commercial systems such as GPT-5 or Gemini.<\/p>\n<p>In experiments, SEAL helped Llama adapt to new tasks using only a handful of examples, achieving about 72% accuracy compared with 20% for standard fine-tuning. It also incorporated factual updates more efficiently than models trained on data generated by GPT-4. The findings suggest that future AI systems could continuously update their reasoning without requiring full retraining cycles.<\/p>\n<p>Implications for Financial Institutions<\/p>\n<p>For financial institutions, SEAL represents an early look at how AI systems might evolve from reactive to adaptive. In today\u2019s environment, models that power credit underwriting, portfolio analysis, or compliance monitoring are retrained periodically when regulations or market data change. A self-adapting framework could shorten that cycle by allowing systems to learn from new information as it appears, reducing the lag between discovery and response.<\/p>\n<p>This evolution arrives as regulators and central banks are paying closer attention to AI\u2019s growing role in financial infrastructure. 
Recent <a class=\"editor-rtfLink\" href=\"https:\/\/www.pymnts.com\/artificial-intelligence-2\/2025\/fsb-and-bis-warn-financial-authorities-about-potential-risks-of-ai\/\" target=\"_blank\" rel=\"noopener nofollow\">PYMNTS coverage<\/a> of the Financial Stability Board and Bank for International Settlements warned that financial authorities should monitor how generative AI alters risk models and governance frameworks. Policymakers are also weighing these dynamics at the national level. A <a class=\"editor-rtfLink\" href=\"https:\/\/www.pymnts.com\/news\/banking\/2025\/hill-hearing-spotlights-ais-promise-and-pitfalls-in-transforming-banking\/\" target=\"_blank\" rel=\"noopener nofollow\">House hearing on AI in banking<\/a> earlier this year highlighted both the promise of automation and the risk of bias and opacity, with lawmakers urging stronger oversight as financial institutions expand their AI budgets.<\/p>\n<p>\u201cIt can be very difficult to gain a customer\u2019s trust, but then, once they\u2019ve given you the privilege of holding their money or lending credit to them, you have to keep that trust,\u201d <a class=\"editor-rtfLink\" href=\"https:\/\/www.linkedin.com\/in\/melissadouros\/\" target=\"_blank\" rel=\"noopener nofollow\">Melissa Douros<\/a>, chief product officer at Green Dot, <a class=\"editor-rtfLink\" href=\"https:\/\/www.pymnts.com\/artificial-intelligence-2\/2025\/banks-find-trust-is-still-the-hardest-algorithm-to-crack\/\" target=\"_blank\" rel=\"noopener nofollow\">told PYMNTS<\/a>. She emphasized that financial services firms cannot afford to treat AI as a \u201cblack box\u201d or obscure how models operate. \u201cWe should be able to expose how we\u2019re using [AI], what\u2019s the data that\u2019s being ingested and what\u2019s being spit out at any time anyone asks, especially a regulator,\u201d she added. 
<\/p>\n","protected":false},"excerpt":{"rendered":"Until now, large language models have relied on human-led retraining to adjust their reasoning and update the parameters&hellip;\n","protected":false},"author":2,"featured_media":79066,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,85,46,43,262,44686,55466,125],"class_list":{"0":"post-79065","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-il","12":"tag-israel","13":"tag-news","14":"tag-pymnts-news","15":"tag-seal","16":"tag-self-learning-ai","17":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/79065","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=79065"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/79065\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/79066"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=79065"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=79065"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=79065"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}