You are sitting across from a colleague at a table. Though you’re both wearing virtual reality headsets, you can see the room around you—it’s just that on the table in front of you floats a cluster of digital objects, either red spheres or blue cubes. This is what mixed reality does: It blends physical environments and digital objects, allowing the digital and the physical to coexist and interact.

Now, your colleague sitting across the table points to a sphere on the left. But you don’t see a sphere there; you see a cube. The conversation falters, and you wonder: Are they mistaken? Are you? Or is the technology showing two different realities at once?

Mixed reality lets people collaborate in shared digital spaces layered onto the physical world, but the very flexibility that makes these systems powerful also raises a question: What happens when people in the same physical environment are not seeing the same reality?

Recent research enacting this scenario offers an answer, one that sheds light on how humans build a shared understanding of the world.

The Invisible Contract of Shared Reality

Human communication depends on something we may take for granted: common ground.

When we talk, gesture, or point, we assume others perceive the same environment we do. This shared perception lets us coordinate actions, solve problems, and tell stories together. In mixed reality, that assumption gets more complicated.

Unlike virtual reality, which replaces the entire environment, mixed reality overlays digital content onto the physical world. Two people may stand in the same room and look at the same table, yet the virtual objects on that table may differ depending on personalization features or technical glitches.

The result is a fracture: The physical world is shared, but the virtual one may not be. Researchers call these mismatches perceptual conflicts: situations in which users believe they are referring to the same object but are actually seeing different things. It’s a small disturbance, yet it can ripple through an interaction in many ways.

To explore how these conflicts affect interaction, researchers invited pairs of participants to interact in mixed reality. Sitting across from one another, they wore headsets that displayed nine virtual cubes and spheres arranged on a table. At first, both participants saw the same arrangement, but over time, the system began swapping the positions of some objects for only one participant.

Each person first memorized the objects individually. Then the pair discussed what they had seen and tried to agree on a final answer. To the participants it may have seemed like a simple memory task, but the experiment was designed to reveal how humans negotiate conflicting realities.

Researchers observed that as the differences between participants’ views increased, several changes emerged in their interaction.

First, the partners became less synchronized. Their head movements and gestures, which are normally aligned during conversation, drifted apart when their perceptions diverged.

Second, participants were more likely to change their answers during discussion when perceptual conflicts were larger.

Third, the disagreements came with a mental cost, with participants reporting higher cognitive load (i.e., the sense that the task required more mental effort).

While these perceptual conflicts did not reduce trust in either the partner or the technology itself, the damage showed elsewhere: Participants grew less confident in the shared outcome they reached together.

In other words, the conflict made them doubt the process of thinking together.

This is important because communication is a continuous negotiation of shared meaning, a process built on assumptions about what others perceive. Mixed reality reveals how fragile that shared cognitive space can be.

A Crack in the Collective Mind

Today, mixed reality is moving rapidly from research labs into everyday life. Companies are developing software designed for meetings, training, collaborative problem-solving, and more.

These systems are often personalized, adjusting visuals based on user preferences. In one scenario, a system might hide distracting objects from a user’s view. In another, it might highlight different pieces of information depending on the person’s role or expertise.

This comes with a paradox: The more tailored each user’s experience becomes, the more likely it is that collaborators will inhabit slightly different worlds.

Personalized settings may seem helpful, but they may also undermine the invisible infrastructure that makes collaboration possible.

Mixed reality was designed to blend digital worlds with the physical one. Ironically, it may teach us just as much about the social architecture of human cognition.

When two people stand in the same room but see slightly different realities, the conversation bends, adapts, and strains under the effort of rebuilding shared meaning. And in that strain, we have a chance to observe that reality is rarely a solitary experience.