Recent claims of quantum speedup face intense scrutiny as researchers re-evaluate whether reported advantages hold up under rigorous testing. A new analysis by J. Tuziemski, J. Pawłowski, P. Tarasiuk, Ł. Pawela, and B. Gardas challenges the validity of current benchmarks by asking whether reported speedups survive once the substantial overhead often omitted from analyses, such as readout, transpilation, and thermalization, is taken into account, since leaving it out can significantly distort results. Their work demonstrates that previously claimed runtime advantages for algorithms including approximate QUBO solvers, restricted implementations of Simon’s problem, and a BF-DCQO hybrid algorithm do not withstand careful benchmarking against optimized classical baselines. This research highlights the critical need for comprehensive time accounting and appropriate reference selection when assessing quantum supremacy on near-term, noisy intermediate-scale quantum (NISQ) hardware, and suggests that credible claims of runtime advantage remain elusive.
Conventional analyses often omit substantial overhead, such as readout, transpilation, and thermalization, yielding biased assessments. While excluding seemingly unimportant parts of the computation may appear reasonable, on most current quantum hardware a clean separation between “pure compute” and “overhead” cannot be experimentally justified, potentially distorting “supremacy” results. In contrast, on most classical hardware the total time is well approximated by compute time plus a weakly varying constant, which supports robust comparisons.
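To make the distinction concrete, here is a minimal sketch of end-to-end time accounting on the quantum side. The component names and durations are illustrative assumptions, not measurements from the paper.

```python
from dataclasses import dataclass

@dataclass
class QuantumRuntime:
    """Illustrative breakdown of one quantum job's wall-clock time (seconds)."""
    transpilation: float    # compiling / embedding the problem for the device
    programming: float      # loading the program onto the QPU
    thermalization: float   # waiting for the device to settle between runs
    execution: float        # the "pure compute" slice usually reported
    readout: float          # measurement and transfer of results to the host

    def pure_compute(self) -> float:
        return self.execution

    def end_to_end(self) -> float:
        return (self.transpilation + self.programming + self.thermalization
                + self.execution + self.readout)

# Hypothetical numbers: overhead dominates the reported "compute" slice.
job = QuantumRuntime(transpilation=1.2, programming=0.4,
                     thermalization=0.8, execution=0.05, readout=0.3)
print(f"pure compute: {job.pure_compute():.2f} s, end to end: {job.end_to_end():.2f} s")
```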
Third-Order Simulated Annealing for Ising Models
This work details algorithms and experimental setups for solving optimization problems, particularly those involving Ising models and their potential application in quantum computing. The primary algorithm explored is Simulated Annealing, a method for finding approximate solutions to hard optimization problems. The team developed two variations: a Quadratic Simulated Annealing algorithm for problems with pairwise (quadratic) interactions, and a Third-Order HUBO Simulated Annealing algorithm that also handles higher-order (cubic) interactions. Detailing the algorithms and experimental setups supports reproducibility, so other researchers can build upon the work and verify the findings. The team also emphasizes efficient data handling and parallel processing to maximize computational speed.
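The paper's implementations are not reproduced here; the following is a minimal illustrative sketch, under the assumption of spin variables s_i in {-1, +1}, of simulated annealing for a cost function with linear, quadratic, and third-order terms, using single-spin-flip Metropolis updates and a geometric cooling schedule.

```python
import math
import random

def sa_third_order(n, pairs, triples, fields, sweeps=1000, t0=5.0, t1=0.01, seed=0):
    """Simulated annealing for
    H(s) = sum_i h_i s_i + sum_(ij) J_ij s_i s_j + sum_(ijk) K_ijk s_i s_j s_k
    with s_i in {-1, +1}.

    pairs:   dict {(i, j): J_ij}
    triples: dict {(i, j, k): K_ijk}
    fields:  dict {i: h_i}
    """
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(state):
        e = sum(h * state[i] for i, h in fields.items())
        e += sum(J * state[i] * state[j] for (i, j), J in pairs.items())
        e += sum(K * state[i] * state[j] * state[k] for (i, j, k), K in triples.items())
        return e

    def delta(state, i):
        # Energy change from flipping spin i: every term containing s_i flips sign.
        local = fields.get(i, 0.0) * state[i]
        local += sum(J * state[a] * state[b]
                     for (a, b), J in pairs.items() if i in (a, b))
        local += sum(K * state[a] * state[b] * state[c]
                     for (a, b, c), K in triples.items() if i in (a, b, c))
        return -2.0 * local

    best, best_e = list(s), energy(s)
    for sweep in range(sweeps):
        t = t0 * (t1 / t0) ** (sweep / max(sweeps - 1, 1))  # geometric cooling
        for i in range(n):
            dE = delta(s, i)
            if dE <= 0 or rng.random() < math.exp(-dE / t):
                s[i] = -s[i]
        e = energy(s)
        if e < best_e:
            best, best_e = list(s), e
    return best, best_e

# Tiny illustrative instance with one cubic term.
state, e = sa_third_order(3, pairs={(0, 1): 1.0, (1, 2): 1.0},
                          triples={(0, 1, 2): -0.5}, fields={0: 0.2})
print(state, e)
```

A vectorized implementation with precomputed adjacency lists and parallel annealing runs would be far faster; the version above favors readability over speed.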
Quantum Speedup Claims Fail Rigorous Testing
This research rigorously re-evaluates recent claims of runtime advantage in both annealing and gate-based quantum algorithms, demonstrating that reported speedups often disappear when subjected to comprehensive end-to-end runtime analysis and compared against optimized classical baselines. It highlights a critical issue: conventional analyses frequently omit substantial overheads, including readout, transpilation, and thermalization, leading to biased assessments of quantum performance. While excluding seemingly insignificant components of the computation may appear reasonable, the team demonstrates that a clear separation between “pure compute” and overhead is often experimentally unjustified on current quantum hardware. In contrast, classical computations exhibit a predictable runtime, consisting of compute time plus a weakly varying constant, allowing for robust and reliable comparisons.
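In contrast to the quantum-side breakdown sketched earlier, the classical side can usually be accounted for with a single wall-clock measurement around the whole run. The baseline below is a deliberately trivial stand-in (greedy single-spin-flip descent on a pairwise Ising instance), not one of the optimized baselines used in the paper.

```python
import time
import random

def greedy_ising_descent(n, pairs, sweeps=50, seed=0):
    """Trivial stand-in for a classical baseline: greedy single-spin-flip
    descent on H(s) = sum_(ij) J_ij s_i s_j with s_i in {-1, +1}."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            # Flip spin i only if it lowers the energy.
            local = sum(J * s[a] * s[b] for (a, b), J in pairs.items() if i in (a, b))
            if -2.0 * local < 0:
                s[i] = -s[i]
    return s

# Wall-clock time around the entire run: compute plus a small, weakly varying constant.
pairs = {(i, i + 1): 1.0 for i in range(127)}
start = time.perf_counter()
state = greedy_ising_descent(128, pairs)
elapsed = time.perf_counter() - start
print(f"finished in {elapsed:.3f} s end to end")
```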
The team scrutinized analyses of quantum annealing and of a restricted Simon’s problem, finding that the algorithms did not outperform optimized classical approaches when all time components were considered. Further investigation revealed that the recently claimed runtime advantage for a hybrid quantum-classical algorithm (BF-DCQO) was also unfounded once the time required for classical hyperparameter tuning was included. The findings show that achieving a demonstrable runtime advantage on current quantum hardware remains a significant challenge.
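The accounting point about hyperparameter tuning can be illustrated with a hypothetical example: if a hybrid algorithm's reported runtime excludes the classical search over its settings, the quoted figure understates the true cost. The names and numbers below are invented for illustration.

```python
def total_hybrid_runtime(tuning_runs, production_run):
    """Total wall-clock cost of a hybrid algorithm, including the classical
    hyperparameter search that preceded the reported run (all times in seconds)."""
    tuning_time = sum(tuning_runs)   # often omitted from reported runtimes
    return tuning_time + production_run

# Hypothetical numbers: the tuned run alone looks fast,
# but the search that found those settings dominates the total.
tuning_runs = [12.0, 11.5, 13.2, 12.8, 12.1]   # five tuning evaluations
production_run = 1.4                            # the run usually quoted
print(total_hybrid_runtime(tuning_runs, production_run))   # 63.0, not 1.4
```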
Runtime Overhead Masks Quantum Speedups
This research rigorously re-examines recent claims of computational speedups achieved by quantum algorithms, focusing on the critical importance of accurately defining and measuring runtime. The team demonstrates that reported advantages often disappear once the assessment accounts for all time components, including overhead from tasks such as data transfer, device programming, and classical optimization. This work highlights that separating “pure compute” time from these essential overheads is often unjustified on current hardware, potentially distorting claims of quantum supremacy. Specifically, the researchers scrutinized analyses of approximate QUBO solving and a restricted Simon’s problem, finding that the algorithms did not outperform optimized classical baselines when all time components were considered. The team emphasizes that a consistent and comprehensive approach to runtime measurement, including all necessary overheads and a strong classical reference, is essential for credible comparisons. The findings indicate that demonstrating a clear runtime advantage for quantum algorithms remains an elusive goal, requiring meticulous attention to methodological detail.
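To make the methodological point concrete, the sketch below computes a speedup only from complete time budgets on both sides; the component names and values are assumptions for illustration, not the paper's protocol.

```python
def end_to_end_speedup(quantum_components, classical_components):
    """Speedup defined over complete wall-clock budgets on both sides.

    quantum_components / classical_components: dicts mapping a time-component
    name (e.g. 'transpilation', 'execution', 'readout', 'tuning') to seconds.
    """
    q_total = sum(quantum_components.values())
    c_total = sum(classical_components.values())
    return c_total / q_total

# Illustrative budgets: the quantum 'execution' slice alone would suggest a
# large speedup, but the end-to-end ratio tells a different story.
quantum = {"transpilation": 1.2, "programming": 0.4, "execution": 0.05,
           "readout": 0.3, "tuning": 61.6}
classical = {"compute": 2.0, "startup": 0.1}
print(f"end-to-end speedup: {end_to_end_speedup(quantum, classical):.2f}x")
```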