At Meta’s recent Connect developer conference, CEO Mark Zuckerberg unveiled the long-awaited Meta Ray-Ban Display glasses. Developed in partnership with eyewear giant EssilorLuxottica, the device resembles a classic pair of Ray-Ban Wayfarers — but behind the lenses sits a compact computing stack. Cameras and microphones capture photos, video and audio, while an augmented reality (AR) display projects information directly into the wearer’s field of view. An accompanying neural-sensing wristband detects subtle electrical signals from the user’s muscles, translating finger twitches and hand gestures into commands.

Meta pitches the device as a leap beyond its earlier Ray-Ban Stories — which amounted to little more than a camera for social media — towards something closer to a smartphone worn on your face. The promise, as Meta would have it, is frictionless interaction with the interface and the company’s family of apps: just flick your fingers in mid-air to send a message, answer a call, or scroll Instagram.

But the launch fell flat. Zuckerberg’s demo was marred by dropped video calls and unresponsive AI assistants. Meta was quick to explain the technical reasons behind some of the issues seen at the launch event and to promise that the final product will be free of similar bugs. But regaining trust and rebuilding hype after such a stumble is a difficult task, one that reflects the company’s rocky history with AR technologies.

A costly history

Over the last decade, Meta has positioned AR and virtual reality (VR) as central to its future, predicting that these technologies will have an impact comparable to the mobile computing revolution that preceded them. This vision was recently crystallised in the way the company framed its social software and hardware under the banner of the “metaverse”, an imaginary in which the real and virtual worlds blend seamlessly.

The glasses arrive after more than a decade of investment in Meta’s Reality Labs division — the company’s research and development arm for virtual and augmented reality products. Since acquiring Oculus in 2014, Meta has poured an estimated US$80 billion into virtual and augmented reality, including acquisitions like neural computing company CTRL-Labs. The results: just two million Ray-Ban Stories sold, tens of millions of Quest headsets, and billions in ongoing quarterly losses.

Ray-Ban smart glasses are displayed during a media preview at the Meta Store in Burlingame, California, on 4 May 2022. (Photo by Justin Sullivan / Getty Images)

Where that money has gone remains somewhat opaque, though prototypes offer hints. Alongside Oculus headsets and Ray-Ban collaborations, Meta has unveiled experimental AR technologies such as Orion, Ego4D and Project Aria. While never commercially released, these prototypes reveal Meta’s ambition: glasses that combine AR overlays with advanced AI systems. Ego4D, for example, was pitched to researchers as a tool for teaching AI to “see” from a human perspective. (AI is, if anything, an even larger sinkhole for Meta’s capital expenditure. Meta has projected that it will spend up to US$72 billion on AI infrastructure in 2025 alone.)

But smart glasses have always been a tricky sell. Google Glass, launched in 2013, was an abject commercial failure, dogged by privacy concerns so severe that some San Francisco bars banned anyone wearing them and residents at times reacted with violence.

Today, Meta’s Ray-Ban glasses occupy a similar position of marginality: financially, they are a rounding error next to the company’s vast advertising empire, which still generates nearly all of its revenue; and for EssilorLuxottica, they are barely a blip within a multibillion-dollar eyewear portfolio ranging from luxury labels like Prada and Armani to mass-market outlets like Sunglass Hut.

Yet, as with Google Glass, their symbolic significance far exceeds their immediate sales, pointing to the larger stakes of who controls the future of vision.

The uses — and the risks

On the surface, the glasses’ uses seem benign: hands-free calls, short videos, real-time notifications. Yet glasses that discreetly record raise surveillance risks, not only for wearers but for everyone around them.

These risks are not hypothetical. Since the Ray-Ban Stories launched in 2021, numerous reports have surfaced of women being recorded without consent by influencers and “lifestyle coaches” in places like Sydney’s Kings Cross, Bondi Beach and even the University of Sydney campus. As we found in our current research on immersive technology, the gendered nature of these incidents has earned the devices nicknames like “creeper glasses” and “stalker glasses”.

Meta points to a blinking white light as an indicator when recording is active, but online forums are full of instructions on how to disable it. Australia’s fragmented privacy laws exacerbate the problem. With no overarching federal right to privacy, victims often have little recourse against covert recording.

Concerns extend beyond interpersonal harms. In the United States, an ICE agent was spotted using the glasses during an immigration raid, raising alarms about state surveillance. This is part of a broader trajectory in which AR (and VR) technologies extend the disciplinary arm of the state, whether through military deployments or the integration of facial recognition-enabled AR into policing. Elsewhere, in 2024, two Harvard students demonstrated how pairing the glasses with facial recognition software could instantly “dox” strangers — a project that later became the foundation of their startup, Halo, now a direct competitor to Meta in the AI-glasses market.

As critics warned when Ray-Ban Stories originally launched, the danger is not just that users put Facebook (or Meta) on their own faces. It’s that everyone else must live with the consequences, often without consent.

A fantasy of enclosure

If the state wields AR to discipline, corporations like Meta deploy it to capture and enclose. The company’s history — from Cambridge Analytica to teen-targeted advertising — raises serious doubts about whether it can be trusted with such an intimate, data-intensive device. Now, with Meta’s aggressive pivot to AI and its integration of new models into the advertising engine, a key question emerges: to what extent are the inputs captured through these glasses (whether they be bodily gestures or patterns of everyday engagement with the world) being repurposed to train and refine those very AI systems?

Almost a decade ago, filmmaker Keiichi Matsuda’s Hyper-Reality imagined a cityscape saturated with overlays of ads, gamified points and algorithmic nudges. What once seemed dystopian, and something of an on-the-nose social commentary on digital capitalism, now feels uncannily prescient. Meta’s smart glasses are not just another consumer gadget; they are a step toward enclosing perception itself within the infrastructure of a private platform.

This is the fantasy of enclosure that has come to characterise the private internet, and which is literalised through technologies like VR and AR — an internet no longer open and distributed, but owned and mediated by a handful of corporations, with every layer (hardware, operating system, apps and data) folded into one ecosystem. When the interface is your glasses, there is no “outside” to Meta’s platform.

Ryan Stanton is a research associate at the University of Sydney.

Ben Egliston is a Senior Lecturer in Digital Cultures at the University of Sydney and an Australian Research Council DECRA Fellow.

Marcus Carter is Professor in Human–Computer Interaction at the University of Sydney and an Australian Research Council Future Fellow.

Joanne Gray is a Senior Lecturer in Digital Cultures at the University of Sydney.

Wenqi Tan is a PhD candidate in media and communication at the University of Sydney.

Posted Fri 3 Oct 2025 at 4:44am, updated Fri 3 Oct 2025 at 4:52am.