You would have enough time to stop the Zatreilium, but the big problem is the Xotheasium and Yucruinium. As far as anyone knows, these exotic chemicals have never been in contact with each other, and their chemical properties are far too complex to simulate on any computer. But Yucruinium shares many traits with Zatreilium, and so experts predict that there is a 50% chance that the reaction would be explosive, which would destroy your plant and everyone inside. But there is also a 50% chance that Yucruinium is inert and that there would be no reaction.

So as the safety manager, you command that everyone else immediately evacuate the plant, which they do. But you yourself are torn between two options:

**(1) Evacuate.** It’s a large plant, and it takes 10 minutes to get in or out. This means that no one will be around to stop the conveyor belt when the Windows Update finally finishes, and in either 15 or 20 minutes your plant will surely explode and your company will be forced into bankruptcy.

**(2) Wait.** Hope that Yucruinium is inert. If so, you’ll be able to stop the conveyor belt and save the plant and the company. If not, you die.

Your self-preservation instincts kick in and you decide to evacuate. But just then, you get a notification on your phone. A paper has just been posted to the arXiv which claims to show that Yucruinium is, in fact, inert. The calculations were done on the world’s first and only fault-tolerant quantum computer. You trust that the authors have reported their results correctly. So the question is: **Do you trust the results of the quantum computer?**

Ridiculous life-or-death chemical plant scenarios aside, the issue of how to verify the answer to a problem you can’t solve yourself is a well-studied one in computer science. For some problems it’s easy. Consider factorization of the product of two very large prime numbers: if the numbers are large enough, this is impossible to solve with today’s computers; but if someone tells you one of the factors, you can easily verify whether it is a correct answer by simply performing the division.
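The factoring example can be made concrete in a few lines of Python. This is a toy sketch with deliberately small numbers (a real RSA modulus has hundreds of digits), and the function name is mine, not from any library:

```python
# Verifying a claimed factor takes one division, even when *finding*
# a factor is far beyond the reach of any classical computer.

def verify_factor(n, claimed_factor):
    """Accept the claim only if it is a nontrivial factor of n."""
    return 1 < claimed_factor < n and n % claimed_factor == 0

# Toy example built from two (genuinely prime) factors:
n = 6700417 * 274177
print(verify_factor(n, 274177))   # True: the claim checks out
print(verify_factor(n, 274178))   # False: the claim is rejected
```

The asymmetry is the whole point: the check costs one modulo operation no matter how hard the original search was.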

Unfortunately, for some problems it’s not easy at all. Think about the so-called “traveling salesman problem” (TSP), where you must find the shortest route that travels through a set of, say, 50 cities. Good heuristics exist to find approximate solutions to this problem, but an exact solution is infeasible — consider the fact that there are 50! (factorial) possible routes. But even if someone gives you a route from their magic-TSP-solver and claims that it’s the shortest, how can you verify this? To have complete confidence, you’d still have to solve the problem yourself. And we’ve already said that can’t be done.
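To get a feel for the numbers, here is a sketch in Python: a brute-force solver that works fine for a handful of cities, next to the route count that makes it hopeless at 50. (This is a toy illustration of my own, not a serious TSP implementation.)

```python
import math
from itertools import permutations

def shortest_route_length(dist):
    """Brute force: try every ordering of cities 1..n-1, with city 0 fixed."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        best = min(best, sum(dist[route[i]][route[i + 1]] for i in range(n)))
    return best

# Fine for a few cities, but the number of orderings grows factorially:
print(math.factorial(50))   # ~3.0e64 routes; at a trillion routes per
                            # second, that's still ~1e45 years of checking
```

And note that a verifier is in the same boat: short of re-running this exhaustive search, there is no quick division-like check that a claimed route really is the shortest.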

The first time I became aware of this problem, I thought surely it was impossible. What approach could you possibly use to verify answers from some super-powerful device that solves “unsolvable” problems?

Then I saw a **talk by Anne Broadbent** with a very enlightening illustration (starting at 8:40 of the video), which I will loosely paraphrase here. Imagine you have an app on your phone which promises to exactly count the number of leaves on any given tree. You go to the largest tree in the world and open the app. It tells you the tree has exactly 893,145 leaves.

How do you know whether you can trust this app? (No, you can’t climb the tree and count the leaves yourself. And even if you could, I doubt you’d have much confidence in your own result after you finished counting.)

You can simply walk up to the tree (turn off your phone first, if you’re paranoid) and pick off 18 leaves (or any number you choose). Then, open the app again and see what it says. If it’s not exactly 893,127 (which is 893,145 – 18), then you know it can’t be trusted. (Ok, yes, I’m assuming no leaves were blown away by the wind.)

But what if it does give you the right answer? It still doesn’t prove anything; it could have been a lucky guess. Let’s be generous and say there was a 1% chance it could have guessed correctly. So you now have 99% confidence that the app is counting the leaves correctly. What if that’s not good enough? Just repeat the leaf-removal process. If it gives you the correct answer again, you now have 99.99% confidence. And so on. You can do this as many times as you want to increase your confidence as much as you would like.
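The arithmetic behind this confidence boosting is simple enough to sketch (assuming, as above, a generous 1% chance of a lucky guess per round, and independent rounds; the function is mine, purely for illustration):

```python
# Confidence compounds: a cheating app must get lucky on *every* round.
def confidence_after(rounds, guess_prob=0.01):
    """Chance that at least one round would have exposed a cheater."""
    return 1 - guess_prob ** rounds

for k in (1, 2, 3):
    print(f"{k} round(s): {confidence_after(k):.6f}")
# 1 round(s): 0.990000
# 2 round(s): 0.999900
# 3 round(s): 0.999999
```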

The simplicity and elegance of this approach is remarkable. And what’s even more amazing is that it bears so much resemblance to the approach that is used for verifying the results of quantum computations. (This is all in theory, of course. No one has a quantum computer yet whose results are actually good enough to be verifiable — more on that later.)

Verification of quantum computations has been studied for a number of years, and the final pieces of the puzzle were recently put together by Urmila Mahadev (**video** and **paper**). She proved (with reasonable assumptions) that it is actually possible for a classical device — i.e., a non-quantum computer — to pose a very specific series of questions to a quantum computer, and to use the answers to prove, with a particular degree of confidence, that a given quantum computation was performed correctly. And the degree of confidence can be increased by simply asking more of these questions. The trick, very loosely speaking, is for the classical device to encode some “secret” in the desired computation in a way that the quantum computer is unable to detect, and then to ask questions which can prove (with some probability) whether that secret was faithfully maintained throughout the computation. It’s exactly like the tree and the leaves — you gain an advantage over the leaf-counting app by keeping the “secret” of how many leaves you removed, and you can use this to verify that the app is operating correctly.

So, this is all great if we have a quantum computer that gives us exactly-correct answers. But, in fact, such devices (often called “error-corrected” or “fault-tolerant” quantum computers) don’t exist right now, and likely won’t exist for quite a few years. The quantum computers we have now are noisy and error-prone, which means they can only give approximate answers. And the scheme we discussed doesn’t work very well if we know that the quantum computer always makes errors and never gives an exactly-correct answer.

Near-term quantum computers are often referred to as NISQ (noisy intermediate-scale quantum) devices, and the strategy for testing the performance of these devices often centers around *benchmarking* a device’s performance, rather than outright verification of results. Benchmarking is a process in which the lowest-level operations of the device — the building blocks underlying any algorithm or computation — are thoroughly tested and characterized. Typically this produces a number for each low-level operation (i.e., “gate”), called the “fidelity”, that indicates how closely the operation matches the ideal behavior. For current devices, this number is often something like 99% or 99.9% per operation — but this varies widely depending on the hardware being used and the operation being performed. These numbers seem great, but since most useful quantum algorithms require thousands or millions of operations (or more!), the errors quickly accumulate to the point where these applications are out of reach for now. This is why researchers are focused so heavily on developing an error-corrected device. Not only will this eventually allow us to implement large-scale quantum algorithms, such as Shor’s integer factorization algorithm, but it will also enable the secret-based verification strategies we discussed earlier.
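A quick back-of-the-envelope sketch shows how fast those errors compound. Assuming (simplistically) independent, uncorrelated errors, the chance that an entire circuit runs cleanly is roughly the per-gate fidelity raised to the number of gates; the function below is my own toy estimate, not a real benchmarking formula:

```python
def circuit_success_estimate(gate_fidelity, num_gates):
    """Rough estimate: probability that every gate in the circuit succeeds."""
    return gate_fidelity ** num_gates

# Even 99.9% per gate decays quickly at useful circuit sizes:
for n in (100, 1_000, 10_000, 1_000_000):
    print(f"{n:>9} gates: {circuit_success_estimate(0.999, n):.3g}")
```

At 100 gates you still succeed about 90% of the time; at 10,000 gates the success probability is a few in 100,000; a million-gate algorithm at 99.9% is essentially guaranteed to fail. This is the gap that error correction exists to close.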

Now, a short aside for a topic of personal interest (this is my blog, after all). There’s a possible near-term application of quantum devices known as “analog quantum simulation”. This concept actually existed before the modern ideas of gate-based quantum computing. The idea is to take some system from nature that is too hard to simulate on our existing computers, say a very complex molecule of some kind, and develop a quantum computing device that “emulates” or “simulates” the behavior of that natural system, so that we can learn more about it. These devices are also error-prone, meaning that exact verification schemes are unlikely to be helpful, but we should be able to benchmark the performance of these devices in much the same way that we benchmark gate-based quantum computers. My group recently posted a **paper** demonstrating a few possible strategies. The overarching idea (which is the same as for gate-based benchmarking) is to make the device perform complicated sequences of operations which are specially designed such that, if the system is perfect and error-free, the quantum system will be left unchanged. And so by running these sequences on your actual device and measuring how much the state changes, you can gain some sense of how error-prone or noisy your device is.

If you want to learn a little more about the fascinating topic of verifying quantum computations, I highly encourage you to watch one or both of these talks (which were also linked above):

**“How to Verify a Quantum Computation”** by Anne Broadbent

Video: https://www.youtube.com/watch?v=X1hSuqpLcA8

Paper: https://arxiv.org/abs/1509.09180

**“Classical Verification of Quantum Computations”** by Urmila Mahadev

Video: https://www.youtube.com/watch?v=RQGW4KcLMIQ

Paper: https://arxiv.org/abs/1804.01082

Have another related resource you’d recommend? Please link to it in the comments!

(By the way, you came to your senses and quickly evacuated the chemical plant. That paper hadn’t even been peer-reviewed yet. And miraculously, while you were on your way out, the Windows Update terminated with a blue screen of death, causing the system to restart into Safe Mode, which triggered a bug in the control software that suddenly stopped all the conveyor belts. The plant is saved! But alas, the inertness of Yucruinium will have to be decided by the Nature referees.)

Fast forward to today, and we still don’t have the ability to implement a large-scale quantum computer that can fulfill the promises of the 1990s, nor are we very close to doing so. Yet interest in quantum computing has exploded far beyond academia. In industry, sectors as varied as automotive, aerospace, pharmaceuticals, and finance have begun developing strategies for implementing quantum technology. And governments and militaries around the world are investing heavily to ensure that they do not fall behind in what some are calling the next “space race” for technological supremacy.

Fueling this interest are significant financial investments in promoting and developing quantum computing devices from a growing number of companies. This includes many of the large technology stalwarts, such as IBM, Google, Microsoft, Intel, and Honeywell. And it also includes companies that have formed specifically to build quantum computers, such as D-Wave, Rigetti, IonQ, and a host of other startups. And rather than focusing on the famous algorithms from the 1990s, which require fault-tolerant, large-scale devices, a new class of algorithms has recently emerged that are specifically designed for noisy, intermediate-scale quantum, or “NISQ”, devices — that is, the types of devices that are most likely to come into existence in the near future.

Driven by this promise of near-term benefits, investment and excitement in the quantum computing space has grown to the point where there are entire conferences devoted to industrial and government applications of the technology, even attended by (gasp!) businesspeople and IT leaders. Beginning in 2017, an annual Quantum for Business conference has been held in the Bay Area, attracting hundreds of attendees from a variety of disciplines and industries. And on the east coast, the first Quantum.Tech conference in 2019 also saw hundreds of attendees.

It’s hard to say, but the hype is certainly not slowing down. Just this month, IBM announced the imminent launch of a 53-qubit quantum computer, to be publicly available over the cloud. And then, only two days later, news leaked that Google believes they have achieved a technical milestone known as “quantum supremacy” using their own 53-qubit device. While these devices still can’t do anything useful (as far as we know), the technology is advancing at a rate that makes many believe that useful applications will emerge within the next few years. This belief has led to the rise of quantum software and quantum consulting companies, such as QCWare and Zapata Computing, who work with clients to identify potential business use cases for quantum computing and to help educate their workforce to be quantum-ready. Companies like IBM and Microsoft also have extensive quantum outreach programs.

Despite all this progress, there is significant doubt among most experts in the community that today’s quantum computing technology — most commonly, qubits engineered on carefully-fabricated superconducting chips — will be scalable to the thousands or millions of qubits needed in the long-term to produce fault-tolerant quantum devices. So while there is good reason to be optimistic about the short-term applicability of the existing technology, we still need innovation! We are barely at the beginning of the quantum computing revolution. We need scientists to identify new potential qubit technologies, refine them in the lab, create companies to scale and commercialize them, and continue to drive the cycle of investment, in pursuit of the discovery and development of these quantum computing technologies that could quite literally change the world. (No pressure, though. If you don’t do it, someone else most certainly will.)

And so, as companies have begun to develop early-stage quantum computers, nearly always the first attribute reported is the number of qubits that the device contains. 5 qubits. 10 qubits. 19 qubits. 72 qubits.

But wait — what does it mean to say that a quantum computer “has” 72 qubits? Should we think of this in terms of the “bitness” of processor architecture — as the amount of information that can be processed at once? Or is this somehow a quantum memory, and 72 qubits is the information storage capacity of the device? But either way, a quantum computer with 72 qubits must be far superior to one with 10 qubits, right? Answering these questions requires a better understanding of what a qubit actually is.

If you put this question to someone knowledgeable, you’ll usually get some version of the following answer: “An ordinary bit can have a value of either 0 or 1, but a qubit can have a value that is any arbitrary superposition of 0 and 1.” Or, you’ll be shown a picture of a sphere, with 0 at the top and 1 at the bottom, and you’ll be told that a qubit can represent any point on the surface of that sphere. So for a qubit, there are not just two possible values, but infinitely many possible values! So of course quantum computers are more powerful!

That’s fine, and it’s a mathematically accurate description, but it does very little to help us understand conceptually how these “qubits” can help us. In fact, without some background in quantum physics, it can even be a little misleading. If one qubit has infinitely many possible values, doesn’t that mean we could just encode an arbitrarily large amount of information and store it inside a single qubit?

The key physical principle to understand here — and one of the things that really constrains how we can build a quantum computer, and how powerful they can be — is that when you try to read the value of a qubit, you will always either get a 0 or 1, and the value of the qubit itself also immediately “collapses” to 0 or 1. The information about the delicate superposition you encoded is completely lost. (This is an example of a paradox in quantum physics known as the measurement problem — the simple act of looking at the qubit actually changes its state!) So in other words, if you tried to encode the full-length Titanic movie into the state of one of your qubits, when you try to play it back, you’ll be sorely disappointed (or greatly relieved, depending on your feelings about the movie).
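A tiny simulator makes the collapse concrete. This is a toy sketch of my own (real amplitudes only, no phases, no quantum library involved): the state carries two continuous amplitudes, but reading it yields a single bit and destroys the superposition.

```python
import random

class Qubit:
    """Toy single-qubit state with real amplitudes for |0> and |1>."""
    def __init__(self, amp0, amp1):
        norm = (amp0 ** 2 + amp1 ** 2) ** 0.5
        self.amp0, self.amp1 = amp0 / norm, amp1 / norm

    def measure(self):
        # Born rule: outcome 0 with probability amp0**2, else 1...
        outcome = 0 if random.random() < self.amp0 ** 2 else 1
        # ...and the state collapses: the superposition is gone.
        self.amp0, self.amp1 = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
        return outcome

q = Qubit(3, 4)       # continuous detail went in (amplitudes 0.6 and 0.8)...
print(q.measure())    # ...but only one bit comes out,
print(q.measure())    # and every later read just repeats that bit
```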

Ok, so if we have a 72-qubit quantum computer, and when we read those qubits we can get only 72 bits of information out, that doesn’t sound very exciting. Where’s the power of quantum computing? Well, the measurement problem only applies if we look at the qubits! So, obviously, we need to make things happen while we’re *not* looking. In other words, the qubits need to interact with one another directly.

From the discussion above, it is hopefully clear that we shouldn’t use the number of qubits as the primary measure of quantum computing power. Instead, the power of quantum computing must come from *interactions* among the qubits — so we should be looking at how many of the qubits can interact with one another at a time, how long these interactions take, and how reliable they are. In addition to this, we also need to know how long the qubits themselves can survive. Today’s quantum computers are very imperfect, and even small fluctuations in the environment (such as tiny, stray electric fields) can cause qubits to lose their information — a phenomenon known as “decoherence”. (What’s really important is the *ratio* of the qubit interaction time to the qubit lifetime, as this gives a rough idea of how many interactions we can reliably perform before decoherence destroys our quantum information. But this is a topic for another post.)

So what does it mean to say that qubits *interact* with each other? There’s not really a good analogy here to classical computers, since “bits” aren’t really objects, but are typically just electrical signals that flow freely along wires. In a quantum computer, however, a qubit is typically a real, physical object of some kind — for example, a single atom, a single defect in some host material, or a very tiny superconducting circuit. This means that if you have some number of qubits, they have to be physically arranged in some way — often either in a one-dimensional chain or in a two-dimensional grid — and the control system for the quantum computer must be able to very precisely control the state of each individual qubit, as well as turn “on” and “off” the interactions between various qubits. This ability to control interactions is often called the “connectivity” of the quantum computer. Depending on the type of qubit being used, the quantum computer may implement “nearest-neighbor” connectivity, where each qubit is able to interact only with those that are sitting adjacent to it in the 1-D or 2-D layout; “all-to-all” connectivity, where each qubit is able to interact with any other qubit in the system; or something in between.

Connectivity is important because it determines how efficiently we can run quantum algorithms, which typically require some complex series of interactions among all of the qubits in the device. If the qubits have all-to-all connectivity, these interactions can be implemented directly; but if the connectivity is limited (e.g., nearest-neighbor), then implementing an interaction between two qubits that are physically distant actually requires several interactions with intermediary qubits in order to achieve the desired effect. And because, as discussed above, the number of interactions we can run is limited by the qubit interaction time and the qubit lifetime, an increase in the required number of physical qubit interactions due to limited connectivity can significantly hinder the complexity of the quantum algorithms we can successfully run on the device.
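As a rough sketch of that overhead (my own toy accounting, which ignores the cleverer routing real compilers do): on a 1-D nearest-neighbor chain, interacting two distant qubits means swapping one of them along the chain until the pair is adjacent, and each SWAP typically costs three two-qubit gates by itself.

```python
def swap_overhead_1d(pos_a, pos_b):
    """Extra SWAPs needed before qubits at pos_a and pos_b are adjacent."""
    return max(abs(pos_a - pos_b) - 1, 0)

print(swap_overhead_1d(10, 11))   # adjacent qubits: 0 extra SWAPs
print(swap_overhead_1d(0, 49))    # ends of a 50-qubit chain: 48 SWAPs
```

With all-to-all connectivity both cases cost zero extra operations — which is exactly why connectivity matters so much for how deep a circuit you can run before decoherence wins.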

So how should we compare two quantum computers? The question doesn’t have a cut-and-dried answer, but a meaningful comparison certainly needs to take into account all of these variables — qubit count, qubit connectivity, qubit interaction time, and qubit lifetime. One recent attempt at such a metric is known as “quantum volume”, introduced by several researchers at IBM. This attempts to assign a numerical value to a quantum computer that, very loosely, indicates the maximum number of qubits which can all successfully interact with one another and maintain a high probability of producing a correct result. It’s a bit clunky, and certainly less headline-friendly than a simple qubit count, but at least it’s a good-faith effort to capture the full picture. If you’d like to read more about this and other metrics, there’s a recent article in Science that describes some of the various techniques that companies and universities have been using: “How to evaluate computers that don’t quite exist.”

(In the longer term, when we have fault-tolerant quantum computers enabled by quantum error correction, we will likely group large sets of physical qubits together into “logical qubits”, which automatically maintain their quantum state and are robust to errors. At that time, we will in fact care much more about logical qubit count than physical qubit count. But, again, this is a topic for another day.)

All this is to say: Just as we can’t judge the performance of a modern CPU solely by its clock speed, there is far more to understanding the performance of today’s quantum computers than simply the number of qubits. We must compare quantum computers based on their performance on the algorithms that we care about.

A few years ago, I took a detailed look at the scenarios in which technology companies decide to build complementary hardware and software products, and when these efforts are most likely to succeed. In general, the strategy pays off when the software product is:

**(a)** essential to the use of the hardware product, that is, either it is the only software in existence, or it is fundamentally better than other existing software options (think of early mainframe computers),

**(b)** tightly integrated with a piece of specialized hardware to perform a function more efficiently than more general-purpose devices can accomplish (think of iPod and iTunes in the early days), and/or

**(c)** core to the business, with the hardware product serving as a funnel to anchor users in the company’s ecosystem (think of Surface, Windows, and Office).

So how can we apply these insights to the world of quantum computing? Specifically, should we expect the companies building quantum hardware to have an inherent advantage in building the software for those devices, or is it possible that companies focused solely on software and applications will be able to achieve dominance?

Here, I think it’s worth pointing out a key difference between traditional (classical) computers and quantum computers: **Quantum computers are unlikely to ever be used as general-purpose computing devices — at least for the next several decades.** Yes, in theory, a quantum processor can do anything a classical processor can do, but our classical computing technology is so fast, so small, so cheap, and so satisfactory for most purposes, that it will require many orders of magnitude of advances in quantum technology to begin to even consider replacing classical computers in the broad sense. So, for now, we can think of quantum computers as special-purpose devices that are optimized for a few particular tasks — essentially, as “co-processors” whose operation will be controlled by a classical computing framework. This certainly describes the current area of research into near-term “hybrid quantum-classical” systems, but it will continue to be true even after quantum computers reach the stage of fault-tolerance and universality.

With this in mind, it’s relatively easy to see that the companies building quantum hardware should also be building the core software for these devices, since the scenario falls neatly into cases (a) and (b) in the framework laid out above. The two are necessarily intertwined, and it’s impossible for someone else to effectively build the core software — especially in a world where companies don’t even sell the quantum computing hardware itself, but only sell access to it over the cloud.

So is that the end of the story? Should all of the quantum software companies cease to exist? Of course not! The quantum computing hardware companies must build the core software, but this software is more analogous to a device driver — it provides access to the underlying hardware, but it doesn’t necessarily provide value on its own. It takes applications to do this. In fact, I would argue that in the most likely scenario, quantum software and applications companies will extract most of the value from the quantum computing market in the long-term — that is, I would argue that quantum software will indeed “eat” the quantum world. Reasons for this include:

**Quantum computers will eventually become a commodity.** Once quantum hardware has reached the stage of fault-tolerance, device-specific attributes become much less important. More and more companies will begin building universal quantum computers whose capabilities are all roughly equivalent. At this point, it helps to recall the history of the personal computer: IBM enjoyed enormous success initially, but as more and more companies began producing systems with equivalent capabilities, the unique value of the IBM PC began to disappear. This dynamic allowed a software company, Microsoft, to rise and become one of the dominant beneficiaries of the explosion of the PC industry.

**Most applications for quantum computers have not been discovered yet.** If we already knew everything a quantum computer could do, then the companies building the quantum computers could also build the applications, and everything would be done. But this certainly is not the case. In fact, most quantum computing companies today are focused on near-term algorithms and applications, and it’s not even clear yet whether there is anything useful that these near-term quantum computers can do. It’s far more likely that the most relevant applications will emerge later, once the hardware has matured and we have fault-tolerant devices. At that point, since the hardware will have already begun to commoditize, the companies discovering and developing these applications will likely be able to provide the most value to the marketplace.

Please comment with your thoughts and perspectives!

No, this is not some bleak doomsday scenario where our enemies win the race to develop a quantum computer and thrust our society into a winter of defeat and despair.

Instead, it’s the fear that the hype around quantum computing will far outpace the realities, investors will get frustrated by the failure to meet the inflated expectations, and funding for the industry and associated research will collapse. Let’s take a look at the basis for this fear.

The most famous “winters” through the history of technology were the AI winters of the late 20th century. From the invention of the first learning algorithm — the perceptron in the late 1950s — the popular press was enamored with the potential of this new breed of technology. Famously, the New York Times reported in 1958 of the potential for a computer that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

Government funding for AI research exploded in the 1950s and 1960s, but people got frustrated (no robots yet? where are my robots?!), funding was cut, and the 1970s became sort of the first “winter” for AI. By the 1980s, funding had returned, and the field was again on an upslope. But experts worried that hype was again outpacing reality, and indeed, the late 1980s and 1990s brought another collapse in funding and the failure of many AI-focused companies.

Levels of government funding and industry investment in quantum computing are unprecedented, and it seems nearly weekly that some announcement of new funding is made. But those of us in the field know all too well that, for practical purposes, quantum computers are still completely useless. Sure, there’s a ton of great work being done which will pay dividends in the future, but most realists don’t expect widespread quantum adoption for practical problems for many years or even decades. Will investors be patient? We hope so. For every hype-filled article, there are plenty of experts trying to manage expectations and avoid inevitable disappointment.

This is by no means a unique situation. Gartner publishes a “Hype Cycle” every year for emerging technologies, with a prominent “Peak of Inflated Expectations” — a typically crowded list of over-hyped technologies just waiting for their proverbial bubble to burst. In their most recent analysis, quantum computing is still on the rising edge of this curve, almost unnoticeable among a tidal wave of AI-related technologies (seems like the AI spring has sprung).

So at some point, we should certainly expect the hype around quantum computing to subside. This is the typical trend for new technologies, and building a quantum computer is a long slog that has a much more extended time frame than, say, the next blockchain. The current hype is unsustainable. But does that also mean that, like with AI in the past, funding will dry up? Is a quantum winter around the corner? I would argue that this is extremely unlikely.

**1. Technological maturity.** Quantum computing today is a far more mature field than AI was in the 1950s. Modern AI research didn’t really begin until around the 1940s, and so the field was only about a decade old when the first massive wave of investment came in the 1950s. People had grand dreams, but no one knew what AI would truly be capable of.

By contrast, quantum computing research began in earnest in the 1980s (spurred in part by Richard Feynman), and so at this point the field has nearly four full decades of research behind it. And the technological feasibility of quantum computing is not just wishful thinking (some people would beg to differ with this statement, but they are a small minority). The principles of quantum physics underlying quantum computing have been around since the 1920s and have been experimentally tested many times over the last century, with astonishing success. And quantum error correction — the key to making quantum computers fault-tolerant and scalable — has been on a firm mathematical foundation since Shor and Steane developed their codes in the 1990s.

**2. Frankenstein’s monster.** It’s easy for people’s imaginations to run wild when thinking about robots. When the perceptron was invented in the 1950s, no one had any realistic plan for developing a conscious machine. But people bought into this idea, in part because it had been the stuff of science fiction for so long. People had an intuitive idea of what this technology could look like, and what impact it could have. If you were expecting a walking, talking, reproducing robot, and all you got are a few algorithms that can classify images, you’d lose faith, too.

Most technologies are unable to capture the imagination the way AI does, and quantum computing is no exception. Sure, there is an international spy novel written about quantum computing (currently ranked #1492 on the Espionage Thrillers bestsellers list at Amazon!), and there are plenty of misunderstandings of what a quantum computer will be able to do, but we don’t run a serious risk of investors being influenced by their knowledge of science fiction.

**3. Factoring, factoring, factoring.** For those of us in the field, it has become cliché to mention Shor’s algorithm, by which quantum computers will be able to quickly factor extremely large integers, and thereby break the RSA encryption scheme that is used to secure basically everything on the Internet. And while most algorithms research today is focused on near-term applications of smaller (“NISQ-era”) quantum computers, it’s impossible to overstate the importance of factoring to the field as a whole. Essentially anything that’s transmitted over the Internet today (or for the foreseeable future, until a quantum-safe encryption standard broadly replaces RSA) — if an attacker wants to decrypt it, all they have to do is store the encrypted data and wait for a fault-tolerant quantum computer to exist. Sure, this may be decades away — but maybe not. The potential value to industry investors is enormous, and will no doubt be worth the risk and the wait. And no government can afford to take the chance that a rival nation might get a quantum computer first. As long as factoring remains the killer app of quantum computing (and assuming no one discovers an efficient factoring algorithm for classical computers), it’s hard to envision a scenario where funding for quantum computing dries up, despite the inevitable decline in hype.

Thoughts? Leave a note in the comments and I’d love to discuss.
