Meta distilled its full-body Codec Avatars tech to render three avatars at once on a standalone Quest 3, with some notable tradeoffs.

For around a decade now, Meta has been researching and developing the technology it calls Codec Avatars, photorealistic digital representations of humans driven in real time by the face and eye tracking of VR headsets. The highest-quality prototype achieves the remarkable feat of crossing the uncanny valley, in our experience.

The goal of Codec Avatars is to deliver social presence, the subconscious feeling that you’re truly with another person, despite them not physically being there. No flatscreen technology can do this. Video calls don’t even come close.

To eventually ship Codec Avatars, Meta has been working on increasing the system’s realism and adaptability, reducing the real-time rendering requirements, and making it possible to generate them with a smartphone scan.

For example, last week we reported on Meta’s latest progress on highly realistic head-only Codec Avatars that can be generated from a selfie video of you rotating your head, plus around an hour of processing on a server GPU. This has become possible thanks to Gaussian splatting, which in recent years has done for realistic volumetric rendering what large language models (LLMs) did for chatbots.


But that system was still designed to run on a powerful PC graphics card. Now, Meta researchers have figured out how to get their full-body Codec Avatars running in real-time on Quest 3.

In a paper called “SqueezeMe: Mobile-Ready Distillation of Gaussian Full-Body Avatars”, the researchers describe how they distilled their full-body photorealistic avatars to run on a mobile chipset, leveraging both the NPU and GPU.

You may have heard the term distillation in the context of LLMs, or AI in general. It refers to using the output of a large, computationally expensive model to train a much smaller one. The idea is that the small model learns to replicate the larger model's output at a fraction of the computational cost, with minimal quality loss.
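
To make the idea concrete, here's a minimal, purely illustrative PyTorch sketch of distillation, not anything from the SqueezeMe paper itself: a frozen "teacher" network provides the targets a much smaller "student" network is trained to reproduce. The model sizes, inputs, and loss are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a large "teacher" and a small, mobile-friendly "student".
teacher = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 3))
student = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 3))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

teacher.eval()  # the teacher is frozen; only the student is trained
for step in range(1000):
    x = torch.randn(32, 256)                # placeholder inputs (a real system would feed pose/expression data)
    with torch.no_grad():
        target = teacher(x)                 # the teacher's output becomes the training target
    loss = loss_fn(student(x), target)      # train the student to imitate the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In SqueezeMe's case the models and losses are far more specialized, but the teacher-provides-the-targets pattern is the core of distillation.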

The researchers say SqueezeMe can render 3 full-body avatars at 72 FPS on a Quest 3, with almost no quality loss compared to the versions rendered on a PC.

However, there are a couple of key tradeoffs to note.

These avatars are generated using the traditional massive custom capture array of more than 100 cameras and hundreds of lights, not the new ‘universal model’ smartphone-scan approach of Meta’s other recent Codec Avatars research.

They also have fixed, flat lighting, with no support for dynamic relighting. Relighting is a flagship feature of Meta's latest PC-based Codec Avatars, and would be crucial for making the avatars visually fit into VR environments and mixed reality.

Still, this research is a promising step towards Meta eventually shipping Codec Avatars as an actual feature of its Horizon OS headsets.

Public pressure for Meta to ship what it has been researching for a decade has built up significantly this year, as Apple ships its new Personas in visionOS 26, effectively delivering on Meta's promise.

However, neither Quest 3 nor Quest 3S has eye tracking or face tracking, and there's no indication that Meta plans to launch another headset with these capabilities anytime soon. Quest Pro had both, but was discontinued at the start of this year.


One possibility is that Meta first launches a rudimentary flatscreen version of Codec Avatars with AI-simulated face tracking, letting you join WhatsApp and Messenger video calls with a more realistic likeness than your Meta Avatar.

Meta Connect 2025 takes place on September 17 and 18, and the company might share more about its progress on Codec Avatars then.