Should We Trust Quantum Computers?

Imagine the following scenario. You are responsible for the safety of a large chemical plant. By some unfortunate administrative oversight, your plant’s entire supplies of Xotheasium, Yucruinium, and Zatreilium have been put on conveyor belts and are moving toward an enormous vat. The Xotheasium and Yucruinium will both reach the vat 15 minutes from now, and the Zatreilium will reach the vat 20 minutes from now. It is well-known, of course, that Xotheasium and Zatreilium react explosively, and if they come in contact, your plant will explode and anyone inside will surely die. And as luck would have it, the control system for the conveyor belt is currently restarting due to a Windows Update and will not be usable for the next 15 minutes.

You would have enough time to stop the Zatreilium, but the big problem is the Xotheasium and Yucruinium. As far as anyone knows, these exotic chemicals have never been in contact with each other, and their chemical properties are far too complex to simulate on any computer. But Yucruinium shares many traits with Zatreilium, so experts predict a 50% chance that it will react explosively with the Xotheasium, which would destroy your plant and everyone inside. There is also a 50% chance that Yucruinium is inert and that there would be no reaction at all.

So as the safety manager, you command that everyone else immediately evacuate the plant, which they do. But you yourself are torn between two options:
(1) Evacuate. It’s a large plant, and it takes 10 minutes to get in or out. This means that no one will be around to stop the conveyor belt when the Windows Update finally finishes, and in either 15 or 20 minutes your plant will surely explode and your company will be forced into bankruptcy.
(2) Wait. Hope that Yucruinium is inert. If so, you’ll be able to stop the conveyor belt and save the plant and the company. If not, you die.

Your self-preservation instincts kick in and you decide to evacuate. But just then, you get a notification on your phone. A paper has just been posted to the arXiv which claims to show that Yucruinium is, in fact, inert. The calculations were done on the world’s first and only fault-tolerant quantum computer. You trust that the authors have reported their results correctly. So the question is: Do you trust the results of the quantum computer?

Verifying a powerful device

Ridiculous life-or-death chemical plant scenarios aside, the issue of how to verify the answer to a problem you can’t solve yourself is a well-studied one in computer science. For some problems it’s easy. Consider factoring the product of two very large prime numbers: if the numbers are large enough, this is practically impossible with today’s computers; but if someone hands you one of the factors, you can easily check whether it is correct by simply performing the division.
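
To make that asymmetry concrete, here’s a tiny Python sketch (the numbers are small stand-ins; real cryptographic moduli are hundreds of digits long): finding a factor may be out of reach, but checking a claimed factor is a single division.

```python
def verify_factor(n, claimed_factor):
    """Checking a claimed factor is cheap: one modulo operation settles it."""
    return 1 < claimed_factor < n and n % claimed_factor == 0

n = 61 * 53                      # a toy "hard" product that someone else claims to have factored
print(verify_factor(n, 53))      # True  -- the claim checks out
print(verify_factor(n, 52))      # False -- reject the claim
```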

Unfortunately, for some problems it’s not easy at all. Think about the so-called “traveling salesman problem” (TSP), where you must find the shortest route through a set of, say, 50 cities. Good heuristics exist for finding approximate solutions, but finding a provably shortest route by brute force is infeasible: there are 50! (factorial) possible routes. So if someone hands you a route from their magic-TSP-solver and claims that it’s the shortest, how can you verify this? To have complete confidence, you’d still have to solve the problem yourself. And we’ve already said that can’t be done.
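
Here’s a toy sketch of why the same trick fails for TSP (the city coordinates are arbitrary, and six cities stand in for fifty): the only obvious way to confirm that a claimed route really is the shortest is to compare it against every other route, which is exactly the brute-force search we said we can’t afford.

```python
import itertools
import math

# Arbitrary toy cities as (x, y) points -- few enough to brute-force, unlike 50.
cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2), (4, 5)]

def route_length(route):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(math.dist(route[i], route[(i + 1) % len(route)])
               for i in range(len(route)))

claimed_shortest = min(itertools.permutations(cities), key=route_length)

# "Verifying" the claim still means checking it against every other route...
assert all(route_length(claimed_shortest) <= route_length(r)
           for r in itertools.permutations(cities))

# ...and for 50 cities there are 50! routes, which is hopeless:
print(math.factorial(50))   # roughly 3e64
```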

The first time I became aware of this problem, I thought surely it was impossible. What approach could you possibly use to verify answers from some super-powerful device that solves “unsolvable” problems?

Then I saw a talk by Anne Broadbent with a very enlightening illustration (starting at 8:40 of the video), which I will loosely paraphrase here. Imagine you have an app on your phone which promises to exactly count the number of leaves on any given tree. You go to the largest tree in the world and open the app. It tells you the tree has exactly 893,145 leaves.

How do you know whether you can trust this app? (No, you can’t climb the tree and count the leaves yourself. And even if you could, I doubt you’d have much confidence in your own result after you finished counting.)

You can simply walk up to the tree (turn off your phone first, if you’re paranoid) and pick off 18 leaves (or any number you choose). Then, open the app again and see what it says. If it’s not exactly 893,127 (which is 893,145 – 18), then you know it can’t be trusted. (Ok, yes, I’m assuming no leaves were blown away by the wind.)

But what if it does give you the right answer? It still doesn’t prove anything; it could have been a lucky guess. Let’s be generous and say there was a 1% chance it could have guessed correctly. So you now have 99% confidence that the app is counting the leaves correctly. What if that’s not good enough? Just repeat the leaf-removal process. If it gives you the correct answer again, you now have 99.99% confidence. And so on. You can do this as many times as you want to increase your confidence as much as you would like.
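
Here’s that confidence amplification as a few lines of Python, under the same generous (and made-up) assumption that a cheating app has a 1% chance of guessing the new count in any given round: the chance of being fooled every time shrinks exponentially with the number of rounds.

```python
p_guess = 0.01   # generous (made-up) chance that a cheating app guesses the new count
for k in range(1, 6):
    print(f"after {k} check(s): chance of being fooled = {p_guess ** k:.0e}")
# after 1 check:  1e-02  (99% confidence)
# after 2 checks: 1e-04  (99.99% confidence)
# ...and so on, as many rounds as you like.
```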

The simplicity and elegance of this approach is remarkable. And what’s even more amazing is that it bears so much resemblance to the approach that is used for verifying the results of quantum computations. (This is all in theory, of course. No one has a quantum computer yet whose results are actually good enough to be verifiable — more on that later.)

Verification of quantum computations has been studied for a number of years, and the final pieces of the puzzle were recently put together by Urmila Mahadev (video and paper). She proved (under reasonable assumptions) that it is actually possible for a classical device — i.e., a non-quantum computer — to ask a quantum computer a very specific series of questions that establish, with a particular degree of confidence, that a given quantum computation was performed correctly. And the degree of confidence can be increased by simply asking more of these questions. The trick, very loosely speaking, is for the classical device to encode some “secret” into the desired computation in a way that the quantum computer is unable to detect, and then to ask questions which can reveal (with some probability) whether that secret was faithfully maintained throughout the computation. It’s exactly like the tree and the leaves — you gain an advantage over the leaf-counting app by keeping the “secret” of how many leaves you removed, and you can use this to verify that the app is operating correctly.
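
Mahadev’s construction relies on serious cryptography and is far beyond a blog-sized snippet, but the bare flavor of “hide a secret check inside the work you delegate” can be sketched in a toy form (everything below is invented for illustration; it is emphatically not her protocol):

```python
import random

def verify_with_traps(prover, real_questions, num_traps=20):
    """Mix secret 'trap' questions (whose answers we already know) in with the real work.
    If the untrusted prover gets any trap wrong, reject everything it told us."""
    traps = [(random.randint(1, 100), random.randint(1, 100)) for _ in range(num_traps)]
    tagged = [("real", q) for q in real_questions] + [("trap", t) for t in traps]
    random.shuffle(tagged)                       # the prover can't tell which is which

    answers = {}
    for kind, question in tagged:
        answer = prover(question)
        if kind == "trap" and answer != sum(question):
            return None                          # caught cheating on a known answer
        if kind == "real":
            answers[question] = answer
    return answers

honest = lambda q: q[0] + q[1]                   # a prover that actually does the addition
lazy = lambda q: 42                              # a prover that just makes something up
print(verify_with_traps(honest, [(57, 89)]))     # accepted: {(57, 89): 146}
print(verify_with_traps(lazy, [(57, 89)]))       # None: rejected with overwhelming probability
```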

What about noisy, error-prone devices?

So, this is all great if we have a quantum computer that gives us exactly-correct answers. But, in fact, such devices (often called “error-corrected” or “fault-tolerant” quantum computers) don’t exist right now, and likely won’t exist for quite a few years. The quantum computers we have now are noisy and error-prone, which means they can only give approximate answers. And the scheme we discussed doesn’t work very well if we know that the quantum computer always makes errors and never gives an exactly-correct answer.

Near-term quantum computers are often referred to as NISQ (noisy intermediate-scale quantum) devices, and the strategy for testing them usually centers on benchmarking a device’s performance rather than outright verification of its results. Benchmarking is a process in which the lowest-level operations of the device — the building blocks underlying any algorithm or computation — are thoroughly tested and characterized. Typically this produces a number for each low-level operation (i.e., “gate”), called the “fidelity”, that indicates how closely the operation matches the ideal behavior. For current devices, this number is often something like 99% or 99.9% per operation, though it varies widely depending on the hardware being used and the operation being performed. These numbers sound great, but since most useful quantum algorithms require thousands or millions of operations (or more!), the errors quickly accumulate to the point where these applications are out of reach for now. This is why researchers are focused so heavily on developing an error-corrected device. Not only will this eventually allow us to implement large-scale quantum algorithms, such as Shor’s integer factorization algorithm, but it will also enable the secret-based verification strategies we discussed earlier.
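
To see how fast those seemingly great fidelities fall apart, here’s a rough back-of-the-envelope estimate (treating every gate as an independent coin flip, which is itself a big simplification):

```python
# Chance of an entire circuit running without a single gate error, if every gate
# independently succeeds with the quoted fidelity (a crude but illustrative model).
for fidelity in (0.99, 0.999):
    for num_gates in (100, 1_000, 10_000, 1_000_000):
        p_success = fidelity ** num_gates
        print(f"fidelity {fidelity}, {num_gates:>9,} gates -> {p_success:.2e}")
# At 99.9% per gate, a 10,000-gate circuit runs error-free only ~0.005% of the time.
```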

Now, a short aside for a topic of personal interest (this is my blog, after all). There’s a possible near-term application of quantum devices known as “analog quantum simulation”. This concept actually predates the modern ideas of gate-based quantum computing. The idea is to take some system from nature that is too hard to simulate on our existing computers, say a very complex molecule of some kind, and develop a quantum device that “emulates” or “simulates” the behavior of that natural system, so that we can learn more about it. These devices are also error-prone, meaning that exact verification schemes are unlikely to be helpful, but we should be able to benchmark their performance in much the same way that we benchmark gate-based quantum computers. My group recently posted a paper demonstrating a few possible strategies. The overarching idea (which is the same as for gate-based benchmarking) is to make the device perform complicated sequences of operations that are specially designed so that, if the device were perfect and error-free, the quantum state would be left unchanged. By running these sequences on your actual device and measuring how much the state does change, you can gain some sense of how error-prone or noisy your device is.
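
As a loose illustration of that “designed to do nothing” idea (a generic single-qubit toy, not the specific protocols from the paper mentioned above): run a random sequence of rotations followed by its exact inverse, so that a perfect device returns to its starting state, and whatever doesn’t come back is a signature of noise.

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis by angle theta."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def run_echo_sequence(angles, over_rotation=0.0):
    """Apply each rotation, then the programmed inverse sequence in reverse order.
    'over_rotation' is a stand-in for a small systematic gate error."""
    state = np.array([1.0 + 0j, 0.0])                 # start in |0>
    for a in angles:
        state = rx(a + over_rotation) @ state         # forward pass
    for a in reversed(angles):
        state = rx(-a + over_rotation) @ state        # programmed inverse, same error
    return abs(state[0]) ** 2                         # probability of returning to |0>

angles = np.random.default_rng(0).uniform(0, np.pi, size=50)
print(run_echo_sequence(angles))                      # 1.0  -- perfect device: nothing changed
print(run_echo_sequence(angles, over_rotation=0.01))  # < 1  -- the deficit reveals the noise
```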

Learning more

If you want to learn a little more about the fascinating topic of verifying quantum computations, I highly encourage you to watch one or both of these talks (which were also linked above):

“How to Verify a Quantum Computation” by Anne Broadbent
Video: https://www.youtube.com/watch?v=X1hSuqpLcA8
Paper: https://arxiv.org/abs/1509.09180

“Classical Verification of Quantum Computations” by Urmila Mahadev
Video: https://www.youtube.com/watch?v=RQGW4KcLMIQ
Paper: https://arxiv.org/abs/1804.01082

Have another related resource you’d recommend? Please link to it in the comments!

(By the way, you came to your senses and quickly evacuated the chemical plant. That paper hadn’t even been peer-reviewed yet. And miraculously, while you were on your way out, the Windows Update terminated with a blue screen of death, causing the system to restart into Safe Mode, which triggered a bug in the control software that suddenly stopped all the conveyor belts. The plant is saved! But alas, the inertness of Yucruinium will have to be decided by the Nature referees.)

Why Qubit Count Is Not Everything

The qubit is often portrayed as the fundamental unit of information in a quantum computer. This seems natural, since in ordinary classical computers, information is always represented in bits and bytes — sequences of 0s and 1s. And therefore, bits and bytes are often used to measure everything from processor architecture (32-bit or 64-bit) to hard drive size (terabytes) to network speeds (gigabits per second).

And so, as companies have begun to develop early-stage quantum computers, nearly always the first attribute reported is the number of qubits that the device contains. 5 qubits. 10 qubits. 19 qubits. 72 qubits.

But wait — what does it mean to say that a quantum computer “has” 72 qubits? Should we think of this in terms of the “bitness” of processor architecture — as the amount of information that can be processed at once? Or is this somehow a quantum memory, and 72 qubits is the information storage capacity of the device? But either way, a quantum computer with 72 qubits must be far superior to one with 10 qubits, right? Answering these questions requires a better understanding of what a qubit actually is.

What is a qubit?

If you ask this question to someone knowledgeable, you’ll usually get some version of the following answer: “An ordinary bit can have a value of either 0 or 1, but a qubit can have a value that is any arbitrary superposition of 0 and 1.” Or, you’ll be shown a picture of a sphere, with 0 at the top and 1 at the bottom, and you’ll be told that a qubit can represent any point on the surface of that sphere. So for a qubit, there are not just two possible values, but infinitely many possible values! So of course quantum computers are more powerful!

That’s fine, and it’s a mathematically accurate description, but it does very little to help us understand conceptually how these “qubits” can help us. In fact, without some background in quantum physics, it can even be a little misleading. If one qubit has infinitely many possible values, doesn’t that mean we could just encode an arbitrarily large amount of information and store it inside a single qubit?

The key physical principle to understand here — and one of the things that really constrains how we can build a quantum computer, and how powerful it can be — is that when you try to read the value of a qubit, you will always get either a 0 or a 1, and the value of the qubit itself also immediately “collapses” to 0 or 1. The information about the delicate superposition you encoded is completely lost. (This is related to a famous puzzle in quantum physics known as the measurement problem — the simple act of looking at the qubit actually changes its state!) So in other words, if you tried to encode the full-length Titanic movie into the state of one of your qubits, when you try to play it back, you’ll be sorely disappointed (or greatly relieved, depending on your feelings about the movie).
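
Here are both halves of that story as a toy numpy sketch (just the textbook rules, not a model of any real device): you can dial in any point on the sphere you like, but a measurement hands back a single 0 or 1 and wipes out the rest.

```python
import numpy as np

rng = np.random.default_rng()

# A qubit state |psi> = alpha|0> + beta|1>: any point (theta, phi) on the sphere is allowed...
theta, phi = 2 * np.pi / 3, np.pi / 5
state = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def measure(state):
    """Measure one qubit: return 0 or 1, plus the collapsed post-measurement state."""
    p0 = abs(state[0]) ** 2                  # Born rule: probability of reading 0
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0                 # the delicate superposition is gone
    return outcome, collapsed

# ...but reading it back yields a single bit, and theta and phi are lost for good.
print(measure(state))                        # e.g. (1, array([0.+0.j, 1.+0.j]))
```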

Ok, so if we have a 72-qubit quantum computer, and reading those qubits gives us only 72 bits of information, that doesn’t sound very exciting. Where’s the power of quantum computing? Well, that collapse only happens if we look at the qubits! So, obviously, we need to make things happen while we’re not looking. In other words, the qubits need to interact with one another directly.

Neighborly qubits

From the discussion above, it is hopefully clear that we shouldn’t use the number of qubits as the primary measure of quantum computing power. Instead, the power of quantum computing must come from interactions among the qubits — so we should be looking at how many of the qubits can interact with one another at a time, how long these interactions take, and how reliable they are. In addition to this, we also need to know how long the qubits themselves can survive. Today’s quantum computers are very imperfect, and even small fluctuations in the environment (such as tiny, stray electric fields) can cause qubits to lose their information — a phenomenon known as “decoherence”. (What’s really important is the ratio of the qubit interaction time to the qubit lifetime, as this gives a rough idea of how many interactions we can reliably perform before decoherence destroys our quantum information. But this is a topic for another post.)
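
A quick back-of-the-envelope version of that ratio (the numbers below are invented for illustration and don’t describe any particular machine):

```python
# Invented example numbers -- not the specs of any real hardware platform.
gate_time_us = 0.2          # time for one two-qubit interaction, in microseconds
coherence_time_us = 100.0   # rough qubit lifetime before decoherence sets in

max_interactions = coherence_time_us / gate_time_us
print(f"Roughly {max_interactions:.0f} interactions fit inside one coherence time")  # ~500
```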

So what does it mean to say that qubits interact with each other? There’s not really a good analogy here to classical computers, since “bits” aren’t really objects, but are typically just electrical signals that flow freely along wires. In a quantum computer, however, a qubit is typically a real, physical object of some kind — for example, a single atom, a single defect in some host material, or a very tiny superconducting circuit. This means that if you have some number of qubits, they have to be physically arranged in some way — often either in a one-dimensional chain or in a two-dimensional grid — and the control system for the quantum computer must be able to very precisely control the state of each individual qubit, as well as turn “on” and “off” the interactions between various qubits. This ability to control interactions is often called the “connectivity” of the quantum computer. Depending on the type of qubit being used, the quantum computer may implement “nearest-neighbor” connectivity, where each qubit is able to interact only with those that are sitting adjacent to it in the 1-D or 2-D layout; “all-to-all” connectivity, where each qubit is able to interact with any other qubit in the system; or something in between.

Connectivity is important because it determines how efficiently we can run quantum algorithms, which typically require some complex series of interactions among all of the qubits in the device. If the qubits have all-to-all connectivity, these interactions can be implemented directly; but if the connectivity is limited (e.g., nearest-neighbor), then implementing an interaction between two physically distant qubits actually requires several interactions with intermediary qubits to achieve the desired effect. And because, as discussed above, the number of interactions we can run is limited by the qubit interaction time and the qubit lifetime, the extra physical interactions forced on us by limited connectivity can significantly limit the complexity of the quantum algorithms we can successfully run on the device.
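
Here’s a rough sketch of that overhead, assuming a 1-D chain where a distant interaction is implemented by swapping one qubit step-by-step next to the other and back again (real compilers use smarter routing, but the flavor is the same):

```python
def physical_ops_for_interaction(i, j, connectivity="nearest-neighbor"):
    """Count the physical two-qubit operations needed for one logical interaction
    between qubits at positions i and j on a 1-D chain."""
    distance = abs(i - j)
    if connectivity == "all-to-all" or distance <= 1:
        return 1
    swaps = distance - 1               # walk qubit i over until it sits next to qubit j
    return 2 * swaps + 1               # swap there, interact once, swap back

print(physical_ops_for_interaction(0, 1))                  # 1  -- neighbors interact directly
print(physical_ops_for_interaction(0, 40))                 # 79 -- the same logical step, with overhead
print(physical_ops_for_interaction(0, 40, "all-to-all"))   # 1  -- full connectivity avoids all of it
```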

So what’s the right way to compare quantum computers?

This doesn’t have a cut-and-dried answer, but a meaningful comparison certainly needs to take into account all of these variables — qubit count, qubit connectivity, qubit interaction time, and qubit lifetime. One recent attempt at such a metric is known as “quantum volume”, introduced by several researchers at IBM. This attempts to assign a numerical value to a quantum computer that, very loosely, indicates the maximum number of qubits which can all successfully interact with one another and maintain a high probability of producing a correct result. It’s a bit clunky, and certainly less headline-friendly than a simple qubit count, but at least it’s a good-faith effort to capture the full picture. If you’d like to read more about this and other metrics, there’s a recent article in Science that describes some of the various techniques that companies and universities have been using: “How to evaluate computers that don’t quite exist.”
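
As a very crude caricature of the idea (this is not IBM’s actual protocol, which runs random “model circuits” and applies a statistical “heavy output” test; it just shows how qubit count and gate fidelity trade off in a single number):

```python
# Toy stand-in for quantum volume: the largest "square" circuit (n qubits, depth n)
# that still has at least a 2/3 chance of running without error, reported as 2**n.
def toy_quantum_volume(num_qubits, gate_fidelity, threshold=2 / 3):
    best = 1
    for n in range(2, num_qubits + 1):
        gates = (n // 2) * n                  # pair up qubits in each of n layers
        if gate_fidelity ** gates >= threshold:
            best = n
    return 2 ** best

print(toy_quantum_volume(num_qubits=72, gate_fidelity=0.99))    # limited by noise, not qubit count
print(toy_quantum_volume(num_qubits=72, gate_fidelity=0.999))   # better gates -> bigger volume
```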

(In the longer term, when we have fault-tolerant quantum computers enabled by quantum error correction, we will likely group large sets of physical qubits together into “logical qubits”, which automatically maintain their quantum state and are robust to errors. At that time, we will in fact care much more about logical qubit count than physical qubit count. But, again, this is a topic for another day.)

All this is to say: Just as we can’t judge the performance of a modern CPU solely by its clock speed, there is far more to understanding the performance of today’s quantum computers than simply the number of qubits. We must compare quantum computers based on their performance on the algorithms that we care about.