Architecture and Hardware News

Error Control Begins to Shape Quantum Architectures

The overhead of error correction presents a serious challenge to scaling up quantum computing and may produce unexpected winners.
Figure. Sound waves on both sides of a quantum computer (illustration).

Quantum computing has a crucial weakness that may severely delay, if not kill outright, its chances of becoming a way of running algorithms that classical computers cannot handle: its susceptibility to noise.

Conventional electronic circuits face their own problem of random changes to values in memory or logic caused by cosmic rays and other interference. Codes that exploit just a few redundant data bits allow those random errors to be corrected on the fly.

The same core principle works for quantum computers, but with one key difference: the error correction must take account of the subtle intermediate states that quantum circuits pass through before the final, collapsed state is read out from each quantum bit (qubit). Reading a qubit directly to check for an error collapses that state, destroying the very information the correction is meant to protect.

While attempts to show practical quantum error correction (QEC) working on actual hardware have come relatively recently, the concept is almost as old as the first algorithm. Less than a year after he presented his seminal algorithm for efficiently factoring large integers, Peter Shor, now professor of applied mathematics at the Massachusetts Institute of Technology, developed a code in 1995 to catch and correct errors in qubits.

Figure. Researchers at QuTech integrated high-fidelity operations on encoded quantum data with a scalable scheme for repeated data stabilization.

Shor showed it is possible to spread the information across multiple data qubits plus additional stabilizer qubits that are analogous to the parity bits used in digital error correction. By reading only the stabilizer qubits, circuitry can check the symmetry properties of the whole code word without disturbing the entanglement of the data qubits.
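
The flavor of the scheme can be seen in the simplest case, the three-qubit bit-flip code, sketched below with purely classical bookkeeping. This is an illustrative sketch only; in a real device the two parity checks would be measured onto ancilla (stabilizer) qubits using entangling gates rather than computed from the data values directly.

```python
# Minimal sketch of the stabilizer idea using the three-qubit bit-flip code,
# tracked with classical bit values for illustration.  In hardware, the two
# parities below would be measured onto stabilizer qubits without ever reading
# the data qubits themselves.
def encode(bit):
    return [bit, bit, bit]                     # logical 0 -> 000, logical 1 -> 111

def syndrome(q):
    return (q[0] ^ q[1], q[1] ^ q[2])          # the two parity (stabilizer) checks

def correct(q):
    where = {(1, 0): 0, (1, 1): 1, (0, 1): 2}  # syndrome -> index of flipped qubit
    s = syndrome(q)
    if s in where:
        q[where[s]] ^= 1                       # repair without learning the logical value
    return q

word = encode(1)
word[2] ^= 1                                   # a single bit-flip error on the third qubit
print(syndrome(word))                          # (0, 1): locates the error
print(correct(word))                           # [1, 1, 1]: data restored
```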

Because they often rely on fragile properties such as electron orbitals or spin states, qubits are far more susceptible to unwanted changes than conventional electronic bits, which leads to poor performance in practical circuits. Production machines need the effective error rate per qubit to be less than one in a quadrillion; today, the error rate for physical qubits is one in 1,000 at best. Such high error rates call for as many as 30 additional qubits to protect just one qubit of data.
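
A back-of-envelope calculation shows why the gap between those two numbers matters. Assuming, purely for illustration, an error probability of one in 1,000 per operation and independent errors, the chance of finishing a deep circuit without a single fault collapses rapidly with circuit size.

```python
# Back-of-envelope illustration only: assumes a per-operation error probability
# of 1e-3 and independent errors.  Useful algorithms involve millions of
# operations or more, so the raw hardware error rate is nowhere near enough.
p = 1e-3
for ops in (1_000, 10_000, 100_000):
    print(f"{ops:>7} operations: P(no error) = {(1 - p) ** ops:.2e}")
```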

Even with high overheads, QEC as it exists today has limitations that increase the difficulty of making quantum computers reliable. Stabilizer codes cannot, on their own, deal with errors generated by the quantum-gate manipulations themselves. That calls for additional flag qubits that alert the control electronics to errors as they occur.

Stabilizer codes also do not cover all the operations needed for universal quantum computing. They work only for gates in the Clifford group, which apply a restricted set of phase and magnitude operations. Gates outside that group, such as the T-gate, need to be protected by other means.
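
The distinction can be checked numerically. Clifford gates map Pauli operators to Pauli operators under conjugation, while the T-gate maps the Pauli X operator to a superposition of X and Y, which is what puts it outside the stabilizer framework. The short check below is an illustration, not taken from the article.

```python
# Numerical illustration: conjugating Pauli X by the T gate gives (X + Y)/sqrt(2),
# which is not a Pauli operator -- so T lies outside the Clifford group.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
T = np.diag([1, np.exp(1j * np.pi / 4)])

conjugated = T @ X @ T.conj().T
print(np.allclose(conjugated, (X + Y) / np.sqrt(2)))   # True
```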

There are proposals to encode qubits in a way that might make it possible to detect the subtle phase errors that afflict non-Clifford gates. Unfortunately, some recent experiments have questioned whether this encoding will support useful error detection in those gates.

Instead, in today’s experimental machines, the additional gates needed for full quantum computing are handled by altering the topology of the quantum circuits. The non-Clifford gates are emulated by preparing special resource states, known as magic states, and injecting them into the main circuit through gates that are members of the Clifford group. To protect against errors that might appear in these prepared states, the control electronics use trial and error to construct them until they pass tests based on the outputs from flag qubits, a process called magic-state distillation.

The disadvantage of distillation is that, in many quantum-computing architectures, it imposes yet another large hardware overhead. Production systems may need to deploy many magic-state factories in parallel to avoid the risk of the system losing coherence before the magic states are ready.
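
A rough sense of that overhead comes from the widely used 15-to-1 distillation protocol, in which each round consumes 15 noisy magic states to produce one better copy. The sketch below is illustrative only; the input error rate and target are assumptions rather than figures from the article.

```python
# Illustrative estimate of magic-state factory overhead, assuming the standard
# 15-to-1 protocol where one round turns 15 states of error rate p into one
# state of error rate roughly 35 * p**3.
def distillation_cost(p_in, target):
    p, rounds = p_in, 0
    while p > target:
        p = 35 * p ** 3
        rounds += 1
    return p, rounds, 15 ** rounds            # raw states consumed per output state

p_out, rounds, raw = distillation_cost(p_in=1e-2, target=1e-10)
print(f"{rounds} rounds, ~{raw} raw magic states per output, final error ~{p_out:.1e}")
```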

Differences in the overhead of QEC and magic-state distillation across the many proposals for quantum-computing hardware may allow architectures that support only 20 or so qubits today to overtake technologies that have so far assembled several times more physical qubits.

The connectivity between qubits, for example, can have an impact on effective code overhead. The nearest-neighbor architecture of superconducting machines, such as those developed by IBM Research, calls for codes that need more than 20 qubits to protect a single qubit of data.

In the late spring of 2022, a team at Austria’s University of Innsbruck working on a trapped-ion machine with just 16 physical qubits showed they could obtain a lower QEC overhead.

“A higher connectivity gives you more flexibility in the choice of the error correction code. The color code we used requires high connectivity and therefore suits trapped-ion quantum computers quite well,” says Lukas Postler, a Ph.D. student at the University of Innsbruck.

High connectivity also makes it possible to trade code size against execution time. “For repeated error correction, measuring stabilizers sequentially or in parallel are equivalent,” Postler says, adding that computer designers could reduce qubit requirements, at the cost of longer execution time, by reusing qubits as stabilizers in series for the magnitude and phase corrections.

Some machines may benefit if quantum computers continue to need magic-state distillation. Photonic machines tend to need less area for distillation than competing technologies. In work on quantum methods for studying the behavior of lithium-ion battery electrolytes, a joint project of Mercedes-Benz Research and Development North America and Palo Alto, CA-based PsiQuantum found that, because the estimated footprint of magic-state distillation would likely be around 2% of the overall footprint, the algorithm could more easily be parallelized on a photonic machine, reducing runtimes.

Other architectures may deal with the problem by switching between different forms of error correction on the fly, although it is uncertain whether alternative codes will handle gates outside the Clifford group any better.

The nature of the errors themselves is important and may give less-mature architectures an advantage when it comes to scaling up usable qubit capacity. A group led by Jeff Thompson, associate professor of electrical and computer engineering in the Princeton Institute of Materials at Princeton University, is working on computers built around neutral (un-ionized) ytterbium atoms. A recent collaboration with QEC specialists at Yale University took advantage of the way some errors can be detected directly, without having to interpret stabilizer qubits.

The Princeton machine’s design pushes electrons in the ytterbium atoms into very high-energy states that are used as qubit states. If such a state decays unexpectedly, it is highly likely to move into an orbital that can easily be detected when probed by a laser operating at a different frequency from the one used to manipulate the qubit states. These “erasure escapes,” as Thompson calls them, improve the practical performance of stabilizer-based QEC because the code is left to detect only the relatively small subset of errors that are not flagged this way. By carefully designing how gate operations are performed, it is possible to increase the proportion of errors converted into erasure escapes. “We are now working on a project to work out how many errors we can convert,” Thompson says.
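
The advantage of knowing where an error struck can be illustrated with a generic coding-theory fact rather than any model of the Princeton or Yale hardware: a distance-d code can correct up to d-1 errors whose locations are flagged, but only (d-1)/2 errors at unknown locations. The toy Monte Carlo below uses assumed parameters purely for illustration.

```python
# Toy Monte Carlo: compare logical failure rates when error locations are
# flagged ("erased") versus unknown.  A distance-d code corrects up to d-1
# flagged errors per block but only (d-1)//2 unflagged ones.  Generic
# illustration only; not a model of any specific machine.
import random

def logical_failure_rate(d, p, flagged, trials=100_000):
    limit = d - 1 if flagged else (d - 1) // 2
    fails = sum(
        sum(random.random() < p for _ in range(d)) > limit
        for _ in range(trials)
    )
    return fails / trials

d, p = 5, 0.05
print("unknown locations:", logical_failure_rate(d, p, flagged=False))
print("flagged locations:", logical_failure_rate(d, p, flagged=True))
```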

Photonic machines such as PsiQuantum’s also can take advantage of erasure escapes. The qubit state in this kind of computer relies on the presence of a photon in one of a pair of waveguides. If no photon is detected in either waveguide, it is easy to determine that the photon, and the qubit state it carried, has been lost completely.

“Photonic qubits are a system in which erasure errors have been part of the discussion for a long time,” says Thompson. “To my knowledge, our proposal is the first to consider the implementation and consequences of an erasure-dominated noise model in matter-based qubits, and it was pretty natural to do it with the particular atomic platform we considered. However, I think the idea can be generalized and it is catching on. I know of theoretical work trying to extend the idea to trapped ions and superconducting qubits.”

Other approaches try to prevent some errors from occurring. A detailed analysis of interactions between qubits and the controlling electronics on IBM’s machines carried out by researchers at a group of Australian universities has provided the basis for commercial spinout Q-Ctrl to develop ways to reduce the overhead of QEC on superconducting machines. In practice, problems such as drift in electronic amplifiers often lead to phase errors correlated across multiple qubits that reduce the efficacy of existing QEC methods. If the characteristics of the drift are predictable, control electronics can compensate for the problem and, in turn, make it possible to use leaner codes and free up valuable qubits for actual data.

Thompson sees more-holistic approaches to quantum computing, ones that consider the interactions between QEC and the physical implementation, becoming increasingly important. “I guess the way I see it is that if you are serious about fault-tolerant computing, then all you should care about is logical-level error rate, connectivity, and clock speed.

“Until you have a whole architecture fleshed out, you can’t really evaluate those metrics. Therefore, when we are working on components, we are kind of groping around in the dark. I think an exciting direction in the field is trying to lay out the possibilities for large-scale fault-tolerant architectures, and then turning the crank to see how physical implementation choices carry through to the final system metrics,” Thompson concludes.

Further Reading

Postler, L. et al.
Demonstration of fault-tolerant universal quantum gate operations, Nature 605, 675–680 (2022)

Kim, I.H., Lee, E., Liu, Y.-H., Pallister, S., Pol, W., and Roberts, S.
Fault-tolerant resource estimate for quantum chemical simulations: case study on Li-ion battery electrolyte molecules, arXiv preprint: 2104.10653 (2021). https://arxiv.org/abs/2104.10653

Wu, Y., Kolkowitz, S., Puri, S., and Thompson, J.D.
Erasure conversion for fault-tolerant quantum computing in alkaline earth Rydberg atom arrays, Nature Communications 13, 4657 (2022)

Edmunds, C.L., Hempel, C., Harris, R., Frey, V., Stace, T.M., and Biercuk, M.J.
Dynamically corrected gates suppressing spatiotemporal error correlations as measured by randomized benchmarking, Physical Review Research 2, 013156 (2020)
