Thus the wave function can’t tell us what the quantum system is like before we measure it. By contrast, in macroscale, classical, Newtonian physics, things have well-defined properties and positions, even when no one is looking. The classical and quantum worlds seem divided by what Heisenberg in the late 1920s called a “cut.” For him and Niels Bohr in Copenhagen, reality had to be described by classical physics, while quantum mechanics was the theory that we, as classical entities ourselves, needed in order to describe what we observed about the microscopic world. Nothing more, nothing less.

But why should there be two distinct types of physics — classical and quantum — for big and small things? And where and how does one take over from the other? To Bohr and his colleagues, the scale of atoms and that of people seemed so profoundly disparate that the question didn’t seem to matter much. In any case, they said, we have some choice over where we place the cut, depending on what we decide to include in our quantum equations. But today we can probe the world over many length scales, including the in-between mesoscale of, say, a few nanometers, where it’s not clear whether quantum or classical rules should apply. And in fact we can still — if the experiments are controlled and sensitive enough — find quantum behavior in objects big enough to be seen with an ordinary optical microscope. So there’s no avoiding the problem of how to explain the quantum-to-classical transition — the “becoming real” that seems to happen when we zoom out or make a measurement.

Quantum mechanics itself didn’t seem to explain this measurement process, in which all the quantum probabilities represented in the wave function “collapse” into a single observed value. For Bohr and his colleagues in Copenhagen, the collapse was just figurative: a reflection of the classical world we experience. Others have tried to explain the collapse as a real, spontaneous, randomly timed physical event that picks out a unique outcome from among the many possibilities — although just what would cause such a physical collapse remains unclear. Others invoke the description postulated by Louis de Broglie and later developed by David Bohm, in which a particle does have well-defined properties, but it is steered by a mysterious “pilot” wave that produces the strange wavelike behavior of quantum objects, such as interference. And others have adopted Hugh Everett’s 1957 interpretation, now commonly called “many worlds,” which supposes that there is no collapse, but that all measurement outcomes are realized in parallel universes, so that reality is constantly branching into multiple, mutually inaccessible versions of itself.

All this has always struck me as fanciful. Why not just see how far we can get with conventional quantum mechanics? If we can explain how a unique classical world arises out of quantum mechanics using just the formal, mathematical framework of the theory, we can dispense with both the unsatisfactory and artificial cut of Bohr’s “Copenhagen interpretation” and the arcane paraphernalia of the others.

This is where Zurek’s work comes in. Starting in the 1970s, he and the physicist H. Dieter Zeh looked closely at what quantum theory itself tells us about measurements. (This might have happened much sooner if researchers had not been discouraged for decades from asking questions about these foundational but unresolved issues in the theory, on the grounds that it was all just pointless philosophy.)

The central element of Zurek’s approach is the phenomenon called quantum entanglement, another of the nonintuitive things that happen at quantum scales. Schrödinger named this phenomenon in 1935, arguing that it is in fact the key feature of quantum mechanics. He came up with the name after Albert Einstein and colleagues pointed out that, after two quantum particles come into contact via physical forces, they appear to be weirdly interconnected; if you measure one of them, it looks like you instantaneously influence the properties of the other, even if they’re no longer close together. “Looks like” is the essential term here: Actually, quantum mechanics says that the interaction and the resulting entanglement render the particles no longer separate entities. They are described by a single wave function that defines the possible states of both particles. For instance, the joint wave function might say that whichever direction one of them is magnetically oriented, the other must be oriented in the opposite direction.
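To make the idea concrete, here is a minimal sketch in standard quantum notation (an illustration of the kind of state described above, not an equation quoted from anyone in this story). Two spins can share the joint state

$$|\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\big(|{\uparrow}\rangle_1|{\downarrow}\rangle_2 \;-\; |{\downarrow}\rangle_1|{\uparrow}\rangle_2\big),$$

in which a measurement of either spin always finds the other pointing the opposite way, yet the state cannot be factored into a separate wave function for particle 1 and another for particle 2.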

Ultimately, the arguments over quantum mechanics have much bigger stakes: what reality is.

When particles interact, entanglement is inevitable. This means something for the measurement process: The quantum objects under observation become entangled with the atoms of the measuring instrument. “Measurement” here doesn’t have to imply probing the object with some fancy bit of scientific kit; it applies to any quantum object interacting with its environment. The molecules in an apple are described by quantum mechanics, and photons of light bouncing off the surface molecules get entangled with them. Those photons carry information about the molecules to your eyes — say, about the redness of the apple’s skin, which stems from the quantum energy states of the molecules that constitute it.

In other words, Zurek and Zeh realized, entanglement is ubiquitous, and it is the information conduit between quantum and classical. As a quantum object interacts with its environment, it becomes entangled with it. Using nothing but regular quantum math, Zeh and Zurek showed that this entanglement “dilutes” the quantumness of the object, because that quantumness becomes a property shared with the entangled environment, so that quantum effects quickly become unobservable in the object itself. They call this process decoherence. For example, a superposition of the quantum object becomes spread out among all its environmental entanglements, so that to deduce the superposition we’d need to examine all the (rapidly multiplying) entangled entities. There’s no more hope of doing that than there is of reconstructing a blob of ink once it has dispersed in the ocean.
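In the standard textbook notation of decoherence theory (a schematic sketch, not a formula taken from Zeh’s or Zurek’s papers), the dilution looks like this: an object prepared in a superposition becomes entangled with its environment,

$$\big(\alpha|0\rangle + \beta|1\rangle\big)\,|E\rangle \;\longrightarrow\; \alpha|0\rangle|E_0\rangle + \beta|1\rangle|E_1\rangle,$$

and the object on its own retains its interference terms only in proportion to the overlap $\langle E_0|E_1\rangle$. As the environmental records $|E_0\rangle$ and $|E_1\rangle$ become distinguishable, that overlap plunges toward zero, and recovering the superposition would require tracking every entangled environmental degree of freedom.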

Wojciech Zurek (top) has worked for decades to close the quantum-classical divide, with collaborators Jess Riedel (bottom left) and the late H. Dieter Zeh (bottom right).

Decoherence happens incredibly fast. For a dust grain floating in the air, collisions with photons and surrounding gas molecules will produce decoherence in about 10⁻³¹ seconds — about a millionth of the time it takes for light to traverse a single proton. In effect, decoherence destroys delicate quantum phenomena almost instantly once they encounter an environment.

But measurement is not just about decoherence. It is entanglement with the environment that imprints information about the object on that environment — for example in a measuring device. For the past two decades or so, Zurek has been working out how that happens. It turns out that some quantum states have mathematical features that allow them to generate multiple imprints on the environment without being blurred into invisibility by decoherence. These states thus correspond to properties that “survive” into the observable, decohered classical world.

This is possible because the interactions that generate each imprint retain the quantum system in the state it had before the interaction, rather than knocking it into a different state or mixing it up with others. Photons, for example, can bounce off an atom and carry off positional information about it without changing the quantum state of the system.

Zurek calls these robust states “pointer states,” because they are the ones that can cause the needle in a measuring device to point to a particular outcome. Pointer states correspond to properties that are classically observable, such as position or charge. Quantum superpositions, meanwhile, don’t have this property; they can’t generate copies robustly, and so we can’t observe them directly. In other words, they aren’t pointer states.
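A schematic way to state the distinction, in the usual decoherence-theory notation (offered as an illustration rather than as Zurek’s own equations): a copying interaction leaves a pointer state $|s_i\rangle$ untouched while stamping a record of it onto fragment after fragment of the environment,

$$|s_i\rangle\,|E^{(1)}\rangle|E^{(2)}\rangle\cdots \;\longrightarrow\; |s_i\rangle\,|E^{(1)}_i\rangle|E^{(2)}_i\rangle\cdots,$$

whereas a superposition $\alpha|s_1\rangle + \beta|s_2\rangle$ subjected to the same interaction simply becomes entangled with those fragments, leaving behind no single redundant record that many observers could read.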

Zurek shows that pointer states can be efficiently and robustly imprinted again and again in the environment. Such states are the “fittest,” he told me. “They can survive the process of copying, and so the information about them can multiply.” They are, by analogy with Darwinian evolution, “selected” for translation to the classical world because they are good at becoming amplified — replicated, you could say — in this way. This is the “quantum Darwinism” of Zurek’s book title.

These imprints multiply extremely quickly. In 2010, Zurek and his collaborator Jess Riedel calculated that within a microsecond, photons from the sun will imprint the location of a grain of dust about 10 million times.

Zurek’s theory of quantum Darwinism — which, again, uses nothing more than the standard equations of quantum mechanics applied to the interaction of the quantum system and its environment — makes predictions that are now being tested experimentally. For example, it predicts that most of the information about the quantum system can be gleaned from just a very few imprints in the environment; the information content “saturates” quickly. Preliminary experiments confirm this, but there’s more to be done.
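In the research literature this saturation is usually phrased in terms of mutual information (the following is the standard formulation, included here as a hedged illustration rather than a claim about any particular experiment): the information $I(S{:}F)$ that a fragment $F$ of the environment carries about the system $S$ climbs rapidly to nearly the system’s entropy $H(S)$ and then plateaus, and the redundancy is defined as

$$R_\delta \;=\; \frac{1}{f_\delta},$$

where $f_\delta$ is the smallest fraction of the environment that already supplies $(1-\delta)H(S)$ of that information. A large $R_\delta$ means many independent observers, each intercepting a different small piece of the environment, will read off the same pointer-state information.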

Each imprint, as we’ve seen, corresponds to a classical observation: something we can consider an element of our reality. The electron is magnetically oriented upward, say, in this imprint. But isn’t it conceivable, because the original quantum state contains probabilities of different outcomes, that one imprint might correspond to “up” and another to “down,” so that different observers see different realities — not a superposition exactly, but a clear consequence of it in the form of multiple versions of classical reality?

This leads us to another revelation of decoherence theory, the one that persuades me that Zurek’s theory now tells a complete story. It predicts that all the imprints must be identical. Thus, quantum Darwinism insists that a unique classical world can and must emerge from quantum probabilities. This imposition of consensus obviates the rather mysterious and ad hoc process of collapse, in favor of something more rigorous. The object being observed, surrounded by a cloud of identical, observable imprints of it in its macroscopic environment, forms an element of “relatively objective existence,” as Zurek puts it. It becomes a part of our concrete classical reality, which he calls an extanton.

This is where the theory promises to dissolve disputes about interpretation. Zurek says that it achieves what might have seemed impossible: a reconciliation of the Copenhagen and many-worlds interpretations. In the former, the wave function is considered epistemic: It describes what we can know about the quantum world. In the latter, the wave function is ontic: It is the ultimate reality — a description of all branches of reality at once — even though we can only ever experience one branch of this quantum multiverse. Zurek says the wave function is actually both. “The two conflicting views of quantum states, [epistemic and ontic], and the insistence that states must be one or the other is wrong,” he told me when I quizzed him about the story his book tells. Instead, states are “epiontic.” That is, before decoherence takes place, all the quantum possibilities are in some sense present. But decoherence and quantum Darwinism select only one of them as an element of our observable reality, without any need to assign all the others a classical reality in some other world. The other states exist in an abstract space of possibilities, but they stay there, never getting the chance to grow via entanglement into observable realities.

I wouldn’t want to claim that Zurek’s picture clears up quantum mechanics at last. Why, for example, does this outcome get selected in a given measurement and not that one? Must we (as Bohr and Heisenberg insisted) just accept that it happens randomly, without any cause? And at what point does the quantum world commit itself irrevocably to a particular measurement outcome, such that we can no longer “gather up” a superposition from the entangled web of interactions between object and environment? And most importantly: How can we test the theory more rigorously?

Some experts I’ve spoken to about Zurek’s picture express guarded enthusiasm. Sally Shrapnel of the University of Queensland in Australia, for instance, told me that Zurek’s program “represents an elegant approach to explaining the emergence of classicality from the basic postulates of quantum theory,” but that it still doesn’t address “the thorny question of what the underlying ‘quantum substrate’ actually is.” How, for example, are we supposed to think about the domain in which all possibilities still exist before decoherence? How “real” is it?

Renato Renner of the Swiss Federal Institute of Technology Zurich is not persuaded that resolving the conflict between the Copenhagen and many-worlds interpretations solves all the problems. He points out that it’s possible to construct weird yet experimentally feasible scenarios in which different observers can’t agree on the outcome. Even if such exceptions seem contrived, he thinks they show that we’ve yet to find a quantum interpretation that really works.

Still, the philosophy of Zurek’s approach seems right to me. Instead of trying to concoct elaborate stories to resolve the measurement problem of quantum mechanics, why not patiently and carefully work through what standard quantum mechanics can say about how information regarding a quantum object gets out into the observable world? Here the quantum pioneers left a lot of work unfinished in the revolution they started a century ago, prematurely foreclosing the issue (usually by insisting on the Copenhagen interpretation or just accepting it without question). Now we can at least hope to complete that task.