This week, Disneyland Paris unveiled its next-generation Olaf robot, one of Disney Imagineering’s most ambitious technological breakthroughs: the animated snowman now exists as a fully expressive physical character, UNN reports.
Details
The presentation was given by Bruce Vaughn, President and Chief Creative Officer of Walt Disney Imagineering (WDI), and Natacha Rafalski, President of Disneyland Paris.
This moment marks a new chapter where advancements in robotics, artificial intelligence, and simulation merge with Disney’s storytelling traditions, bringing iconic characters to life in the real world.
Olaf’s appearance came in a new episode of WDI’s R&D series, We Call It Imagineering, which details the technologies shaping Disney’s future.
The film also reflects years of behind-the-scenes collaboration between engineers, animators, and AI researchers working to create characters that feel as alive as their animated counterparts.
At its core is a simple idea: make the technology disappear and let the emotions shine.
True Motion Animation
Kyle Laughlin, Senior Vice President of Walt Disney Imagineering Research & Development, described the approach: “Like everything at Disney, we always start with the story. We think about what feelings we want to evoke in the guest.”
This philosophy underpinned Olaf’s transformation from a digital creation into a real character capable of eye contact, stylized movements, and dialogue.
Every one of his gestures, and even his snowy shimmer, was created to match what audiences know from the films. Iridescent fibers catch the light like real snow, and a deforming “snow” suit allows Olaf to move in ways that robotic shells typically cannot.
But unlike the BDX droids from Star Wars, which already roam Disney parks, Olaf required a different level of movement realism.
As Laughlin noted: “The key technology in our platform is deep reinforcement learning, which allows robotic characters to learn to mimic artist-defined movements through simulation.”
This combination of art and AI allows engineers to iterate quickly, refining the gait, style, and personality until Olaf moves exactly as the animators intended.
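The motion-imitation approach Laughlin describes typically rewards the robot for tracking an animator-authored reference clip. A minimal sketch of that reward shaping in Python, assuming a hypothetical two-joint character and made-up reference angles (this is an illustration of the general technique, not Disney's or Newton's actual code):

```python
import math

# Hypothetical artist-defined reference motion: target joint angles
# (radians) per frame of a short clip. Names and values are illustrative.
REFERENCE_MOTION = [
    {"hip": 0.10, "knee": -0.20},
    {"hip": 0.25, "knee": -0.35},
    {"hip": 0.15, "knee": -0.25},
]

def imitation_reward(robot_pose, reference_pose):
    """High when the robot's joints track the animator's pose.

    A common shaping in motion-imitation RL: exponentiated negative
    squared tracking error, yielding a value in (0, 1].
    """
    err = sum((robot_pose[j] - reference_pose[j]) ** 2 for j in reference_pose)
    return math.exp(-5.0 * err)

def episode_return(robot_trajectory):
    """Sum per-frame imitation rewards over the reference clip."""
    return sum(
        imitation_reward(pose, ref)
        for pose, ref in zip(robot_trajectory, REFERENCE_MOTION)
    )

# A perfect imitation earns one reward point per frame: 3.0 here.
print(round(episode_return(REFERENCE_MOTION), 6))
```

During training, a policy is updated to maximize this return in simulation, so the learned gait converges toward the style the animators authored rather than a generic "efficient" walk.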
AI – the driving force of magic
To scale this process, WDI is developing Newton — an open-source framework created in collaboration with NVIDIA and Google DeepMind.
Laughlin defines it as a system where “building blocks allow for rapid development of GPU-accelerated simulators.”
One of the key components, a simulator called Kamino, increases the speed of robot learning. With it, characters like Olaf can master complex movements — walking, gesturing, interacting — much faster.
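Much of the speedup from GPU-accelerated simulators comes from evaluating thousands of simulation instances as one batched computation instead of one at a time. A toy NumPy illustration of that idea with a hypothetical one-dimensional "robot" (not Newton's or Kamino's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "robot": its position advances by one action per step;
# the goal is to end near position 1.0. All names are illustrative.
N_ENVS = 1024    # candidate behaviours simulated simultaneously
N_STEPS = 50
TARGET = 1.0

# One random action sequence per environment: shape (N_ENVS, N_STEPS).
actions = rng.normal(loc=0.02, scale=0.05, size=(N_ENVS, N_STEPS))

# Batched rollout: a single vectorised cumulative sum stands in for
# 1024 sequential simulations - the pattern GPUs exploit at far
# larger scale with full rigid-body physics.
positions = actions.cumsum(axis=1)            # every trajectory at once
final_error = np.abs(positions[:, -1] - TARGET)

best = int(final_error.argmin())
print(f"best of {N_ENVS} candidates ends {final_error[best]:.4f} from target")
```

The same principle lets a learning algorithm try many policy variations per second, which is why characters can master walking and gesturing far faster than with a single real-time simulator.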
These breakthroughs help transform animated, often physically impossible movements into compelling real-world performances.
Olaf’s fully articulated mouth, expressive eyes, removable carrot nose, and conversational abilities are supported by these layers of AI-learned movements.
And the process continues to accelerate. “What’s so exciting is that we’re just getting started,” Laughlin said. The rapid evolution from BDX droids to self-balancing H.E.R.B.I.E.s, and now to Olaf, shows how quickly Disney can now prototype and release new characters.
Olaf will soon meet guests at the upcoming Arendelle Bay show in Disneyland Paris’s Frozen-themed land and, for a limited time, at Hong Kong Disneyland’s World of Frozen.