Scientists at the Institute of Science Tokyo have announced a breakthrough in quantum error correction that could bring a large-scale quantum computer closer to reality.

The team has developed a new class of quantum low-density parity-check (LDPC) error correction codes that perform close to the hashing bound, the theoretical efficiency limit for quantum error correction.

“Our quantum error-correcting code has a greater than 1/2 code rate, targeting hundreds of thousands of logical qubits,” explained Kenta Kasai, Associate Professor and lead researcher on the project.

“Moreover, its decoding complexity is proportional to the number of physical qubits, which is a significant achievement for quantum scalability,” he added.

Quantum computers have long promised to revolutionize fields like quantum chemistry, cryptography, and large-scale optimization. However, their progress has been hindered by the fragile nature of qubits.

The problem it solves

Qubits lose their state quickly, which means they have short coherence times. Furthermore, operations such as gates and measurements introduce errors at relatively high rates. Current quantum error correction methods can require thousands of physical qubits to create just one logical qubit.

The new LDPC codes are designed to handle hundreds of thousands of qubits. They have a high coding rate, meaning far fewer physical qubits are needed to encode each logical qubit.
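To see what a code rate above 1/2 means in practice, here is a rough back-of-the-envelope calculation in Python. The qubit counts are hypothetical, chosen only to contrast a high-rate code with the roughly thousand-to-one overhead mentioned above.

```python
# Illustrative only: compares physical-qubit overhead at different code rates.
# The specific counts are hypothetical, not figures from the paper.

def physical_qubits_needed(logical_qubits: int, code_rate: float) -> int:
    """Code rate R = k/n, so encoding k logical qubits needs about k / R physical qubits."""
    return round(logical_qubits / code_rate)

k = 100_000  # target number of logical qubits (hypothetical)

# A code rate above 1/2: roughly 2 physical qubits per logical qubit.
print(physical_qubits_needed(k, 0.5))    # 200,000

# A scheme needing ~1,000 physical qubits per logical qubit,
# the order of magnitude quoted above for current methods.
print(physical_qubits_needed(k, 0.001))  # 100,000,000
```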

This combination of efficiency and scalability could make millions of logical qubits possible, marking a key step towards solving real-world problems.

How did they do it?

The scientists used protograph LDPC codes, a structured design that works well for error correction. They also applied affine permutations, which diversify the code structure and avoid repetitive patterns that hamper decoding.
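The following Python snippet is a minimal sketch of the protograph idea under the assumption of a standard lifting construction: each nonzero entry of a small base matrix is replaced by an affine permutation matrix. The base matrix, lift size, and permutation parameters below are hypothetical, not the authors' actual design.

```python
# Sketch of protograph lifting with affine permutations (illustrative values only).
import numpy as np

def affine_permutation_matrix(L: int, a: int, b: int) -> np.ndarray:
    """L x L permutation matrix for the map i -> (a*i + b) mod L.
    'a' must be coprime with L so the map is a bijection."""
    P = np.zeros((L, L), dtype=int)
    for i in range(L):
        P[(a * i + b) % L, i] = 1
    return P

def lift_protograph(base: np.ndarray, L: int, params: dict) -> np.ndarray:
    """Replace each 1 in the base (proto) matrix with an affine permutation
    block and each 0 with the L x L zero block."""
    rows, cols = base.shape
    H = np.zeros((rows * L, cols * L), dtype=int)
    for r in range(rows):
        for c in range(cols):
            if base[r, c]:
                a, b = params[(r, c)]
                H[r*L:(r+1)*L, c*L:(c+1)*L] = affine_permutation_matrix(L, a, b)
    return H

# Tiny example: a 2 x 4 base matrix lifted by L = 5.
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])
params = {(0, 0): (1, 0), (0, 1): (2, 3), (0, 2): (3, 1),
          (1, 1): (1, 4), (1, 2): (4, 2), (1, 3): (2, 0)}
H = lift_protograph(base, 5, params)
print(H.shape)  # (10, 20): a nominally rate-1/2 classical LDPC parity-check matrix
```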

Instead of working only with binary symbols, they used non-binary arithmetic, which lets each symbol carry more information and improves accuracy. These codes were then converted into Calderbank-Shor-Steane (CSS) codes, a well-known class of quantum error-correcting codes.
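The CSS construction builds a quantum code from two classical parity-check matrices whose stabilizers must commute. Below is a minimal Python check of that condition. The paper maps non-binary codes into CSS form; this example instead uses the binary Hamming matrix behind Steane's 7-qubit code purely for illustration.

```python
# Minimal check of the CSS commutation condition (illustrative binary example).
import numpy as np

def is_valid_css_pair(H_X: np.ndarray, H_Z: np.ndarray) -> bool:
    """A CSS code built from H_X and H_Z requires H_X @ H_Z.T = 0 over GF(2),
    so that the X-type and Z-type stabilizers commute."""
    return not np.any((H_X @ H_Z.T) % 2)

# Steane's 7-qubit code uses the [7,4] Hamming parity-check matrix for both checks.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
print(is_valid_css_pair(H, H))  # True: the Hamming code contains its dual, so the checks commute
```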

To decode errors, they created an efficient method based on the sum-product algorithm, which can correct both types of quantum errors, bit-flips (X) and phase-flips (Z), at the same time. Earlier schemes typically handled only one type at a time.
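For intuition, the sketch below implements plain binary sum-product (belief-propagation) syndrome decoding, the classical building block this approach rests on. The authors' decoder is non-binary and treats X and Z errors jointly, which this simplified version does not attempt; the parity-check matrix and error below are toy values.

```python
# Binary sum-product syndrome decoding (simplified sketch, not the paper's decoder).
import numpy as np

def sum_product_syndrome_decode(H, syndrome, p, max_iters=50):
    """Estimate an error vector e_hat such that H @ e_hat = syndrome (mod 2)."""
    prior = np.log((1 - p) / p)                 # LLR that a given bit has no error
    msg_v2c = np.where(H == 1, prior, 0.0)      # variable-to-check messages
    sign = (-1.0) ** syndrome                   # a syndrome bit of 1 flips the check sign
    e_hat = np.zeros(H.shape[1], dtype=int)

    for _ in range(max_iters):
        # Check-node update (tanh rule); extrinsic product via divide-by-own-term.
        t = np.where(H == 1, np.clip(np.tanh(msg_v2c / 2), -0.999999, 0.999999), 1.0)
        t = np.where((H == 1) & (np.abs(t) < 1e-9), 1e-9, t)  # keep the division finite
        prod = np.prod(t, axis=1, keepdims=True)
        msg_c2v = np.where(H == 1, sign[:, None] * 2 * np.arctanh(prod / t), 0.0)

        # Variable-node update and tentative hard decision.
        total = prior + msg_c2v.sum(axis=0)
        e_hat = (total < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, syndrome):
            break                               # syndrome satisfied: stop early
        msg_v2c = np.where(H == 1, total[None, :] - msg_c2v, 0.0)
    return e_hat

# Toy example: a single error on bit 2 of a 6-bit code (illustrative values only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
true_error = np.array([0, 0, 1, 0, 0, 0])
print(sum_product_syndrome_decode(H, H @ true_error % 2, p=0.05))  # -> [0 0 1 0 0 0]
```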

Analyzing the result

Even with hundreds of thousands of qubits, they kept the frame error rate as low as 10⁻⁴. This result is quite close to the hashing bound, the best performance theoretically possible.
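For reference, the hashing bound for the depolarizing channel has a standard closed form, evaluated by the short Python snippet below. The example error probability p = 0.05 is illustrative, not the operating point reported in the paper.

```python
# Hashing-bound rate for a depolarizing channel (standard formula; p value is illustrative).
import numpy as np

def hashing_bound(p: float) -> float:
    """R = 1 - h2(p) - p*log2(3), where h2 is the binary entropy and
    X, Y, Z errors each occur with probability p/3."""
    h2 = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1 - h2 - p * np.log2(3)

print(hashing_bound(0.05))  # ~0.634: at this error rate, a code rate above 1/2 is theoretically achievable
```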

“Our quantum LDPC error correction codes can potentially enable quantum computers to scale up to millions of logical qubits,” remarked Kasai.

“This will significantly improve the reliability and scalability of quantum computers for practical applications while also paving the way for future research,” he added.

The study represents a major step towards practical fault-tolerant quantum computing, which could benefit many fields.

This study was recently published in the journal npj Quantum Information.