{"id":173908,"date":"2025-09-27T23:44:06","date_gmt":"2025-09-27T23:44:06","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/173908\/"},"modified":"2025-09-27T23:44:06","modified_gmt":"2025-09-27T23:44:06","slug":"ucla-scientists-use-light-to-create-energy-efficient-generative-ai-models","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/173908\/","title":{"rendered":"UCLA scientists use light to create energy-efficient generative AI models"},"content":{"rendered":"<p>Artificial intelligence has dazzled the world with its ability to create pictures, words, and even music from scratch. But behind the magic lies a hidden cost. Training and running today\u2019s most advanced generative AI systems consumes massive amounts of electricity, creates significant carbon emissions, and uses up vast amounts of water to cool sprawling data centers. The question is whether this technology, for all its wonders, can remain sustainable as demand grows.<\/p>\n<p>A team of researchers at the <a href=\"https:\/\/www.ucla.edu\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">UCLA Samueli School of Engineering<\/a> believes they may have found an answer\u2014one that trades the energy-hungry churn of supercomputers for the elegance and speed of light. Their new optical generative model uses photonics to create images in ways that could dramatically reduce the environmental footprint of AI while keeping performance high.<\/p>\n<p>Instead of relying on billions of digital calculations to piece together a picture, this approach lets light itself handle most of the work. \u201cOur work shows that optics can be harnessed to perform generative AI tasks at scale,\u201d said Aydogan Ozcan, senior author of the study. 
\u201cBy eliminating the need for heavy, iterative digital computation during image inference, optical generative models like ours open the door to snapshot, energy-efficient AI systems that could transform everyday technologies.\u201d<\/p>\n<p>This figure contains an AI-generated schematic of a multi-color optical generative model. (CREDIT: Ozcan Lab\/UCLA)<\/p>\n<p>How It Works<\/p>\n<p>At the heart of the setup is a simple yet ingenious partnership between a small digital encoder and an optical decoder. The digital part transforms random noise into a \u201cphase map,\u201d which is then displayed on a spatial light modulator. That map tells the light how to bend, scatter, or shift as it passes through the system. Once the light moves through a specially designed optical decoder, an image appears on a sensor\u2014whether it\u2019s a handwritten number, a butterfly, or a portrait in the style of <a href=\"https:\/\/www.thebrighterside.news\/post\/x-rays-ai-and-3d-printing-bring-lost-van-gogh-artwork-to-life\/\" rel=\"nofollow noopener\" target=\"_blank\">Vincent van Gogh<\/a>.<\/p>\n<p>Because the heavy lifting is done by the physics of light rather than by electronic circuits, this process happens astonishingly fast. The optical stage itself takes less than a nanosecond, with the only real bottleneck being how quickly the light modulator can refresh its pattern. The researchers call this \u201csnapshot generation,\u201d because a complete image is created in a single burst of light.<\/p>\n<p>The team also built an iterative version of their system that mimics the way popular digital diffusion models refine images step by step. This approach avoids problems like \u201cmode collapse,\u201d where models get stuck generating the same few patterns over and over. The optical iterative models produced more diverse results without giving up efficiency.<\/p>\n<p>Putting the Model to the Test<\/p>\n<p>The researchers didn\u2019t stop at theory. 
They built a working optical system and put it through a series of experiments across well-known datasets. The model generated black-and-white images of handwritten digits from the MNIST dataset, clothing items from Fashion-MNIST, and more complex pictures like butterflies and human faces.<\/p>\n<p>They measured performance using two key metrics: Inception Score, which tracks diversity and quality, and Fr\u00e9chet Inception Distance, which measures how close generated images are to real ones. For simpler datasets, the optical models performed competitively against digital ones. In one experiment, a classifier trained only on optically generated digits still reached 99.18% accuracy\u2014just 0.4 percentage points below training on the real thing.<\/p>\n<p>In color experiments, the team used three different wavelengths of light for red, green, and blue. This allowed them to generate full-color images of butterflies and faces. Failures, where <a href=\"https:\/\/www.thebrighterside.news\/post\/new-study-reveals-that-noise-can-strengthen-quantum-entanglement\/\" rel=\"nofollow noopener\" target=\"_blank\">noise overwhelmed the signal<\/a>, were rare\u2014only about 3% for butterflies and 7% for faces.<\/p>\n<p>Another important measurement was diffraction efficiency, or how much of the input light contributes to the final image. Using a single-layer optical decoder, they reached about 42% efficiency. Adding more decoding layers raised that number to about 50% while maintaining solid image quality. In other words, roughly half the incoming light was put to work in creating the picture.<\/p>\n<p>Experimental demonstration of snapshot optical generative models. (CREDIT: Nature)<\/p>\n<p>Challenges Along the Way<\/p>\n<p>As with any new technology, the <a href=\"https:\/\/www.thebrighterside.news\/post\/holograms-and-ai-create-an-uncrackable-optical-encryption-system\/\" rel=\"nofollow noopener\" target=\"_blank\">optical model<\/a> faces real-world hurdles. 
Precision matters: small misalignments, imperfections in the optics, and limits in how finely the light phase can be controlled all affect results. To get around these issues, the team trained their models with hardware constraints in mind, making sure that what worked in theory would also succeed in practice.<\/p>\n<p>They also suggested future designs that could replace bulky spatial light modulators with thin, passive optical surfaces created using nanofabrication. These could make the system cheaper, more compact, and easier to integrate into everyday devices.<\/p>\n<p>Another intriguing possibility is parallel image generation, where multiple patterns are created at once using different wavelengths or spatial channels. The researchers also see potential in generating 3D images, a feature that could bring new life to augmented and virtual reality.<\/p>\n<p>Toward Sustainable AI<\/p>\n<p>What makes this development especially exciting is its promise to reduce the <a href=\"https:\/\/www.thebrighterside.news\/post\/ultra-thin-optical-chip-moves-data-energy-efficiently-at-record-speeds\/\" rel=\"nofollow noopener\" target=\"_blank\">environmental burden of AI<\/a>. Traditional generative systems demand supercomputers running for hours, if not days, to create high-quality results. These machines not only consume enormous amounts of power but also require water-intensive cooling systems.<\/p>\n<p>Numerical and experimental results of a higher-resolution snapshot optical generative model for monochrome Van Gogh-style artwork generation compared against the teacher digital diffusion model with 1,000 steps. (CREDIT: Nature) <\/p>\n<p>By shifting the generative process into the optical domain, the UCLA team\u2019s method sidesteps much of that demand. In one demonstration, their optical system recreated Van Gogh-style artwork in a single step per color channel, compared with 1,000 steps needed by a digital diffusion model. 
The images were visually comparable, yet the energy cost was only a fraction of the traditional method.<\/p>\n<p>The team also points out that their model can add layers of security. Different light wavelengths can encode different patterns that only a matched decoder can reconstruct. This physical \u201ckey-lock\u201d mechanism could <a href=\"https:\/\/www.thebrighterside.news\/post\/groundbreaking-silicon-chip-unlocks-the-potential-of-6g-communications\/\" rel=\"nofollow noopener\" target=\"_blank\">secure communications<\/a>, protect against counterfeiting, and personalize digital content in ways that are difficult to hack.<\/p>\n<p>Practical Implications of the Research<\/p>\n<p>The future possibilities of light-based AI extend beyond efficiency. Compact, low-power optical models could be embedded in smart glasses, augmented reality headsets, or mobile devices. They could enable real-time AI without draining batteries or requiring constant cloud connections.<\/p>\n<p>Beyond consumer gadgets, the approach has clear potential in biomedical imaging, diagnostics, and secure data transmission. Optical models could help hospitals analyze data faster with less energy, or allow researchers to run large experiments without the environmental costs of <a href=\"https:\/\/www.thebrighterside.news\/post\/breakthrough-dna-based-supercomputer-runs-100-billion-tasks-at-once\/\" rel=\"nofollow noopener\" target=\"_blank\">massive computer clusters<\/a>.<\/p>\n<p>Most importantly, this technology shows a path toward scaling AI in a way that doesn\u2019t come at the expense of the planet. 
By letting light take over some of the thinking, the research points toward a future where powerful AI and sustainability go hand in hand.<\/p>\n<p>Research findings are available online in the journal <a href=\"https:\/\/www.nature.com\/articles\/s41586-025-09446-5\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Nature<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"Artificial intelligence has dazzled the world with its ability to create pictures, words, and even music from scratch.&hellip;\n","protected":false},"author":2,"featured_media":173909,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[256,254,64,63,257,113065,20764,51320,337,128,6538,105],"class_list":{"0":"post-173908","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-au","11":"tag-australia","12":"tag-computing","13":"tag-green-good-news","14":"tag-light","15":"tag-photonics","16":"tag-research","17":"tag-science","18":"tag-supercomputer","19":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/173908","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=173908"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/173908\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/173909"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?
parent=173908"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=173908"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=173908"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}