
Meta Ray-Ban AI glasses are on display in San Francisco, California, July 12, 2025. (Photo by Smith Collection/Gado/Getty Images)
Real-time visual context from outward-facing cameras, combined with location awareness, is pushing AR glasses toward a tipping point. Meta’s “look and ask” feature shows what happens when a wearable understands where you are and what you are looking at. Previously, users had to take a photo and then query it; now the computer vision is live, listening and watching. The glasses can turn any street, storefront or sign into a searchable interface: look at a restaurant and ask for reviews or a menu, or ask for the cheapest parking lot downtown. Google is pursuing the same idea in its Gemini-powered Maps and Live View layers, where the phone camera acts as a query field for the physical world. Meta’s glasses and Google’s mobile system are converging on the same outcome: ambient intelligence that attaches context to place and makes visual search an everyday experience.
Valve revealed several new products this week, including the Steam Cube mini-PC and the new lightweight, wireless Steam Frame headset.
Valve also stepped back into hardware territory this week. Reports from Skarredghost and Road to VR confirm that the company is preparing a headset called the Steam Frame along with a compact PC called the Steam Cube. The headset runs SteamOS on a Snapdragon-based platform with dual LCD panels at 2160 by 2160 per eye. Valve says it will support wireless streaming of a full Steam library, meaning PC VR experiences without a tether. The Cube is a small-form-factor SteamOS machine that can serve as a companion for the desktop and the living room. Pricing is not public, but Valve says the headset will cost less than the Index, with release planned for early 2026. The move gives PC VR developers a clearer hardware target and signals that Valve still sees a future for high-performance VR even as the rest of the industry tilts toward mixed reality.
KARLOVY VARY, CZECH REPUBLIC – AUGUST 20: Sir Michael Caine is awarded with the Crystal Globe for Outstanding Contribution to World Cinema at the 55th Karlovy Vary International Film Festival opening on August 20, 2021 in Karlovy Vary (Karlsbad), Czech Republic. This Film Festival is the largest in the Czech Republic and runs from August 20th to 28th. (Photo by Gisela Schober/Getty Images)
ElevenLabs added another piece to the week with the launch of its Iconic Voice Marketplace. The company is formalizing the licensing of AI-generated voices from well-known actors and public figures; current signed talent includes Michael Caine. The system provides a performer-first rights structure, with approvals and commercial terms for use in ads, narration and interactive content. This places synthetic voice work on firmer legal ground and begins to move the industry away from the wild cloning era that caused so much concern among performers. It also gives creators in XR, gaming and immersive media a controlled way to add recognizable voices to projects without long production cycles, provided the rights holders have opted in.
Who owns the dreams of an AI machine? Sora 2 raises the question, and the courts are already answering it. I felt the split myself when I made two fifteen-second shorts. One was a scripted sci-fi western built from detailed prompts. The other was a loose request to revisit my childhood, which Sora turned into a personal homage to Back to the Future, complete with a glowing portal and parents from another timeline. One felt authored; one felt delivered. Judges in the Anthropic and Getty cases are drawing the same line, treating control as the measure of ownership. That standard is starting to shape the creative economy.
World Labs and Escape.ai teamed up to transform traditional films into immersive 3-D spaces. Using Escape.ai’s video-intelligence engine to extract key frames and Marble, World Labs’ world-generation API, to build Gaussian-splat geometry, the pipeline automatically creates an explorable 3-D environment and embeds the film inside it for a hybrid viewing-and-walking experience. For filmmakers, the collaboration lowers the barrier to immersive companion spaces; for viewers, it turns a film into a place. The effort suggests that spatial cinema (film plus a 3-D world in real time) is operational and ready to scale.
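The pipeline described above has three conceptual stages: pick key frames from the film, lift them into a Gaussian-splat world, then place the film inside that world. A minimal sketch of that flow follows. Every name in it (`extract_key_frames`, `build_splat_world`, `embed_video`, `SplatWorld`) is a hypothetical placeholder; the actual Escape.ai and Marble APIs are not public in the source, so these are stand-in stubs, not real calls.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the film-to-3-D-space pipeline described above.
# None of these names correspond to real Escape.ai or World Labs APIs.

@dataclass
class SplatWorld:
    """Stand-in for an explorable scene built from Gaussian splats."""
    source_frames: List[str]
    embedded_video: str = ""

def extract_key_frames(film_path: str) -> List[str]:
    """Stand-in for a video-intelligence step that picks representative frames."""
    # A real system would score frames for scene changes; here we fake three.
    return [f"{film_path}#frame{i}" for i in range(3)]

def build_splat_world(frames: List[str]) -> SplatWorld:
    """Stand-in for a world-generation call that lifts frames into 3-D geometry."""
    return SplatWorld(source_frames=frames)

def embed_video(world: SplatWorld, film_path: str) -> SplatWorld:
    """Place the original film inside the generated environment."""
    world.embedded_video = film_path
    return world

def film_to_space(film_path: str) -> SplatWorld:
    frames = extract_key_frames(film_path)   # 1. key-frame extraction
    world = build_splat_world(frames)        # 2. Gaussian-splat world generation
    return embed_video(world, film_path)     # 3. hybrid watch-and-walk scene

world = film_to_space("example_film.mp4")
```

The point of the sketch is the ordering, not the internals: the world is derived from the film first, and the film then becomes an object inside its own derived environment.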
This column has a companion, The AI/XR Podcast, hosted by its author, Charlie Fink, and Ted Schilowitz, former studio executive and futurist for Paramount and Fox, and Rony Abovitz, founder of Magic Leap. This week’s guests are Lamina 1 co-founders Rebecca Barkin and author Neal Stephenson. We can be found on Spotify, iTunes, and YouTube.
Corrected, Nov. 13, 2:42 pm. An earlier version of this article inaccurately stated that Matthew McConaughey was part of ElevenLabs’ Iconic Voice Marketplace. References to the actor have therefore been removed.