The computational gap between quantum and classical processors

The second consequence of many-body interference is classical complexity. A central task for quantum computing is to identify the computational cost gap between quantum and classical computers on specific tasks. We approached this in two ways: (1) through a combination of theoretical analysis and experiments, we identified the fundamental obstacles that prevent known classical algorithms from reproducing the outcome of our OTOC calculations on Willow, and (2) we tested the performance of nine relevant classical simulation algorithms by direct implementation and cost estimation.

In the first approach, we identified quantum interference as an obstacle for classical computation. A distinctive characteristic of quantum mechanics is that predicting the outcome of an experiment requires analyzing probability amplitudes rather than the probabilities used in classical mechanics. Well-known examples are the entanglement of light, which manifests in quantum correlations between photons (the elementary particles of light) that persist over long distances (recognized by the 2022 Physics Nobel Prize), and macroscopic quantum tunneling in superconducting circuits (recognized by the 2025 Physics Nobel Prize).

The interference in our second-order OTOC data (i.e., an OTOC in which the circuit traverses the backward and forward loop twice) reveals a similar distinction between probabilities and probability amplitudes. Crucially, probabilities are non-negative numbers, whereas probability amplitudes are complex numbers that can take arbitrary signs. Taken together, these features mean that amplitudes encode a far richer collection of information. Instead of a pair of photons or a single superconducting junction, our experiment is described by probability amplitudes across an exponentially large space of 65 qubits. An exact description of such a quantum mechanical system requires storing and processing 2^65 complex numbers in memory, which is beyond the capacity of supercomputers. Moreover, quantum chaos in our circuits ensures that every amplitude is equally important, so algorithms that use a compressed description of the system also require memory and processing time beyond the capacity of supercomputers.
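To make the scale concrete, here is a back-of-the-envelope sketch (a purely illustrative Python snippet, not part of our analysis) of the memory needed to hold the full 65-qubit state vector as double-precision complex numbers:

```python
# Illustrative estimate: memory to store the full 65-qubit state vector
# as double-precision complex numbers (complex128).
n_qubits = 65
n_amplitudes = 2 ** n_qubits           # one complex amplitude per basis state
bytes_per_amplitude = 16               # complex128: two 8-byte floats
total_bytes = n_amplitudes * bytes_per_amplitude

print(f"{n_amplitudes:.2e} amplitudes")                # ~3.69e+19
print(f"~{total_bytes / 1e18:.0f} exabytes of memory")  # ~590 exabytes
```

The result, on the order of hundreds of exabytes, far exceeds the memory of any existing supercomputer.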

Our further theoretical and experimental analysis revealed that carefully accounting for the signs of the probability amplitudes is necessary for any numerical calculation to predict our experimental data. This presents a significant barrier for a class of efficient classical algorithms, quantum Monte Carlo, that have been successful at describing quantum phenomena in large quantum mechanical spaces (e.g., the superfluidity of liquid helium-4). These algorithms rely on a description in terms of probabilities, yet our analysis demonstrates that such approaches would result in an uncontrollable error in the computed output.
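As a rough illustration of why signs matter (a hypothetical toy model in Python, not one of the algorithms we implemented): a Monte Carlo average over non-negative contributions converges quickly, but when the contributions carry interfering signs they largely cancel, and the statistical error overwhelms the tiny net signal.

```python
# Toy illustration of the "sign problem" in Monte Carlo sampling (hypothetical example).
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

# Non-negative contributions: the estimated mean converges well.
positive = rng.random(n_samples)
# Signed contributions: large cancellations leave almost no net signal.
signed = rng.random(n_samples) * rng.choice([-1.0, 1.0], size=n_samples)

for name, w in [("non-negative", positive), ("signed", signed)]:
    mean = w.mean()
    stat_error = w.std() / np.sqrt(n_samples)   # statistical error of the mean
    print(f"{name:>12}: mean = {mean:+.4f}, error/|mean| = {stat_error / abs(mean):.1%}")
```

This toy model only hints at the effect; in chaotic circuits like ours, the cancellations are far more severe.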

Our direct implementation of algorithms relying on both compressed representations and efficient quantum Monte Carlo confirmed that neither approach can predict the second-order OTOC data. Our experiment on Willow took approximately 2 hours, a task estimated to require 13,000 times longer on a classical supercomputer. This conclusion was reached after an estimated 10 person-years of classical red-teaming of our quantum result, during which we implemented a total of nine classical simulation algorithms.
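For a rough sense of what that factor means (simple arithmetic based on the 2-hour and 13,000× figures quoted above):

```python
# Simple arithmetic behind the quoted estimate: 13,000x longer than ~2 hours on Willow.
quantum_hours = 2
slowdown_factor = 13_000
classical_hours = quantum_hours * slowdown_factor           # 26,000 hours
print(f"~{classical_hours / (24 * 365):.1f} years on a classical supercomputer")  # ~3.0 years
```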