After decades of research, quantum computers are approaching the scale at which they could outperform their "classical" counterparts on some problems. They will be truly practical, however, only when they implement quantum error correction, which combines many physical quantum bits, or qubits, into a logical qubit that preserves its quantum information even when its constituents are disrupted. Although this task once seemed impossible, theorists have developed multiple techniques for doing so, including "surface codes" that could be implemented in an integrated-circuit-like planar geometry.
For ordinary binary data, errors can be corrected, for example, using the majority rule: A desired bit, whether 1 or 0, is first triplicated as 111 or 000. Later, even if one of the three bits has been corrupted, the other two "outvote" it and allow recovery of the original data. Unfortunately, the "no-cloning" theorem of quantum mechanics forbids the duplication (or triplication) of a qubit.
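The classical majority rule is simple enough to sketch in a few lines (a minimal illustration; the `encode`/`decode` names are ours):

```python
def encode(bit):
    """Triplicate a classical bit: 1 -> [1, 1, 1], 0 -> [0, 0, 0]."""
    return [bit] * 3

def decode(bits):
    """Majority vote: a single corrupted copy is outvoted by the other two."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)          # [1, 1, 1]
codeword[0] ^= 1              # one copy is corrupted in transit: [0, 1, 1]
recovered = decode(codeword)  # the other two outvote it -> 1
```

The scheme fails only if two or more copies are corrupted, which is far less likely than a single error when errors are rare and independent.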
Moreover, the power of quantum computing emerges from having arbitrary mixtures of bit values. Since any combination is valid, it would seem impossible to detect a change from the original combination, let alone correct it.
Figure. "Complicated calculations fail as the systems get out of hand due to perturbations," says Rainer Blatt of the Institute of Experimental Physics at the University of Innsbruck and the Institute of Quantum Optics and Quantum Information. "Using error correction, this process can be contained."
"Most people who were doing quantum information in the '80s and '90s would say we'll never be able to do error correction in these systems, because it's analog error correction," explained Raymond Laflamme of the University of Waterloo and the Perimeter Institute, both in Waterloo, Ontario.
Fortunately, those experts Laflamme spoke of were mistaken.
Unlike a classical bit that has a value of either 0 or 1, a single qubit can encode a weighted "superposition" of both values. A set of N qubits can thus represent 2^N different states simultaneously. There is strong mathematical evidence that devices handling qubits could solve some large problems much faster than traditional computers could, even in principle.
In the mid-1990s, mathematician Peter Shor, then at AT&T Bell Laboratories in Murray Hill, NJ, described a powerful example of using such a quantum algorithm to efficiently factor large numbers. Because public-key encryption relies on this task being impractically slow, the result helped transform quantum computing from a physics curiosity to a technology with potential national security implications.
Quantum computations exploit the fact that the ultimate weights assigned to different configurations of qubits are the sum of contributions that can be positive, negative, or even complex numbers. The algorithms combine qubits so that these contributions reinforce one another for the desired solution, but cancel each other out for the many incorrect configurations. A key step in Shor's algorithm, for example, is finding the repetition period of a sequence of numbers generated from the factoring target, essentially by performing a Fourier transform on the entire sequence that produces a peak at the correct periodicity.
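The number-theoretic core of that period-finding step can be sketched classically (a toy run factoring N = 15 with base a = 7; a quantum machine finds the period exponentially faster via the Fourier transform, which is the whole point of the algorithm):

```python
from math import gcd

N, a = 15, 7
# The sequence a^x mod N repeats with some period r: 1, 7, 4, 13, 1, 7, 4, ...
sequence = [pow(a, x, N) for x in range(8)]

# Find the repetition period r (here by brute force; Shor's algorithm
# extracts it from a Fourier transform over all values of x at once)
r = next(x for x in range(1, N) if pow(a, x, N) == 1)

# An even period yields factors of N via gcd(a^(r/2) +/- 1, N)
f1 = gcd(pow(a, r // 2) - 1, N)   # -> 3
f2 = gcd(pow(a, r // 2) + 1, N)   # -> 5
```

The brute-force search for r takes time exponential in the number of digits of N, which is why factoring is classically hard; the quantum Fourier transform replaces exactly that step.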
The final step is measuring the qubits to see which are 1 and which are 0. The probability of a particular outcome is proportional to the square of the corresponding weight. The measurement leaves each qubit unambiguously in the measured state.
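The measurement rule can be illustrated with a one-qubit sketch (amplitudes chosen arbitrarily for illustration; note that the weight for 1 is negative, yet the probability is its square):

```python
import random

# A single qubit with weights ("amplitudes") for |0> and |1>;
# the probabilities are the squared magnitudes of the weights.
amp0, amp1 = 0.6, -0.8
p1 = abs(amp1) ** 2          # probability of measuring 1: 0.64

# Simulate many independent prepare-and-measure runs; each real
# measurement would leave the qubit entirely in the observed state.
trials = 100_000
ones = sum(1 for _ in range(trials) if random.random() < p1)
fraction = ones / trials     # roughly 0.64
```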
Proper cancellation demands that quantum weights be preserved ("coherent") until measurement. However, qubits are highly sensitive to uncontrolled interactions with their environment, which effectively measures them prematurely. Experimenters have been steadily reducing the rate of this decoherence in various candidate systems, but some errors remain inevitable.
Digital Quantum Correction
Not long after introducing the factoring algorithm, Shor and, independently, Andrew Steane of Oxford University showed how these errors could be corrected. Just as "quantum mechanics is both a wave and a particle, quantum computing is both digital and analog," said Shor, now at the Massachusetts Institute of Technology (MIT). "You have to isolate the error correction into the digital piece," protecting the analog piece.
This isolation is achieved by measuring a carefully chosen combination of qubits. Before this measurement, a particular qubit might have been disturbed by its environment so that, for example, a pure 1 gains some small admixture of 0. If the measurement returns 1, however, this admixture is erased. Sometimes, however, the measurement will return 0, meaning that there has been a complete "spin flip."
"The errors live in a continuous space, but when we do quantum error correction the way it's conventionally done, by measuring [particular qubits], that in effect digitizes the errors," said John Preskill of the California Institute of Technology. "Then when you know what the error is, you know what operation to apply to correct it."
Critically, Preskill added, "You don't want to measure the logical qubit because that would disturb it," destroying its usefulness for further calculation. To avoid this, the logical information is encoded not in individual qubits, but in their relationship, known as entanglement.
This procedure can be illustrated with an oversimplified version of Shor's scheme, which superficially resembles the classical majority-rule, with a logical 1 represented by three physical qubits 111 and a logical 0 by 000. If one qubit undergoes a spin flip from 0 to 1 or 1 to 0, the three bits no longer agree, but measuring them all to find out which one flipped would destroy any quantum information.
Instead, one can measure, for example, whether the first two qubits disagree and whether the second two qubits disagree. These two measurements suffice to determine whether one qubit was flipped, and which of the three it is, without providing any information about the actual values. The disturbed physical qubit can then be flipped back, even without knowing its value. The measurement of whether two qubits agree is done by introducing extra "ancilla" qubits in a well-defined state and applying quantum gates that let the other qubits modify the ancillas, transferring the noise to them.
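The two parity checks can be sketched in a toy state-vector simulation (a minimal illustration, not how hardware measures syndromes; the amplitudes a and b stand for arbitrary, unknown quantum information):

```python
import random

def apply_x(state, q):
    """Flip qubit q (0 = leftmost) in a 3-qubit state vector of 8 amplitudes."""
    new = state[:]
    for i in range(8):
        new[i ^ (1 << (2 - q))] = state[i]
    return new

def parity(state, q1, q2):
    """Measure whether qubits q1 and q2 disagree (1) or agree (0).

    After at most one bit flip, every supported basis state gives the
    same answer, so this measurement reveals nothing about a or b."""
    for i, amp in enumerate(state):
        if abs(amp) > 1e-12:
            return ((i >> (2 - q1)) & 1) ^ ((i >> (2 - q2)) & 1)

# Encode a logical qubit a|000> + b|111>
a, b = 0.6, 0.8
state = [0.0] * 8
state[0b000], state[0b111] = a, b

# The environment flips one physical qubit at random
flipped = random.randrange(3)
state = apply_x(state, flipped)

# Syndrome: do qubits (0,1) disagree? do qubits (1,2) disagree?
syndrome = (parity(state, 0, 1), parity(state, 1, 2))
culprit = {(1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]

# Flip the culprit back -- without ever learning a or b
state = apply_x(state, culprit)
```

The two syndrome bits single out the flipped qubit (disagreement in the first pair only, both pairs, or the second pair only), and the correction restores the original amplitudes exactly.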
Shor's scheme actually used nine physical qubits, which also corrects a phase-flip error or combinations of a spin flip and phase flip. Subsequent work by Laflamme and others achieved single-qubit correction with five physical qubits.
In the quarter-century since the first codes, theorists have devised many new schemes as they wait for experimental systems complex enough to test their ideas. To improve coding efficiency and to protect against more complicated errors, researchers developed more complex codes that allow correction of errors on multiple qubits.
Many of these schemes are "stabilizer codes" that, like Shor's, measure specific combinations of qubits to force the system either to retreat to its undisturbed state or to reveal specific "error syndromes" that allow the error to be corrected. Importantly, some codes allow the logical error rate to be made arbitrarily small, as long as the physical error rate is below some threshold.
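The benefit of operating below a threshold can be seen even in the classical repetition code: majority voting over n copies fails only when a majority of them are corrupted, so below a 50% physical error rate, adding redundancy suppresses logical errors (a classical sketch only; thresholds for real quantum codes are far lower, around 1% for surface codes):

```python
from math import comb

def logical_error_rate(p, n=3):
    """Probability that a majority vote over n independent copies fails,
    i.e. that more than n//2 copies were corrupted at physical rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Below the (classical) 50% threshold, encoding helps...
improved = logical_error_rate(0.10)            # 0.028, down from 0.10
# ...and larger codes help more:
better = logical_error_rate(0.10, n=5)         # roughly 0.0086
# Above threshold, redundancy makes things worse:
worse = logical_error_rate(0.60)               # 0.648, up from 0.60
```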
Recently, interest has turned to "surface codes," which can be implemented in two-dimensional geometry like computer chips, using only neighboring qubits. Most importantly, said Preskill, "surface codes can tolerate a higher noise rate than other families of codes that we know about," which will be critical for the error-prone first generations of machines.
However, the surface codes require many physical qubits per logical qubit. Indeed, with current experimental error rates in multi-qubit devices somewhat below 1%, large calculations like those needed for quantum chemistry might require 1,000 physical qubits per logical qubit, Preskill said. "If the error rates were considerably lower, then maybe there would be better things to do than the surface code."
One alternative was found, surprisingly, by physicists studying the nexus of general relativity and quantum mechanics, who determined that an exotic mathematical connection between these fields embodies quantum error-correcting codes. "They are efficient in the sense that they can protect a lot of information without a lot of overhead," said co-discoverer Daniel Harlow, now at MIT. Still, he cautions that, for now, "there are a lot of practical hardware considerations that are probably going to be more important."
As theorists explore error-correction schemes, experimentalists have been steadily improving various physical implementations of qubits. Although there is still no clear winning platform, some groups have demonstrated devices with more than 50 qubits.
One such team, led by John Martinis at Google and the University of California, both in Santa Barbara, CA, studies qubits based on superconducting circuits. Their near-term goal is to demonstrate "quantum supremacy" with these modest-size uncorrected devices, and only later to implement error correction and larger assemblies. Nonetheless, Martinis said, "Everyone understands that in the long term we want to do error correction."
The theory underlying error digitization is clear, but it still must be thoroughly tested experimentally, Martinis said. "Maybe there's some new things you don't understand, or maybe the requirements are a lot harder than you think experimentally ... This whole error model has to work so that you can get exponentially small errors" even in complex calculations.
Experiments to date show that "the cross-talk between qubits is more important than what people had thought," Laflamme said, for example because the control circuitry can affect multiple physical qubits. "The biggest issue is how correlated the noise is," Preskill agreed. "It may be that as things advance, the scalability prospects will look better for some platforms than others just on the basis of how effective quantum error correction is."
Error correction will be an important milestone in the field, Laflamme said. "If we would be able to do fully fault-tolerant quantum error correction today, we would have a quantum computer."
©2019 ACM 0001-0782/19/10