Quantum Computing Beyond the Lab

For several decades, quantum computing was a purely academic pursuit. In the 1980s, a few forward-thinking theorists began to describe the principles underlying a new type of computer that used logic based on the laws of quantum physics, rather than the simple binary logic used by classical computers. In the 1990s, the field expanded rapidly as the first quantum algorithms emerged, proving that these quantum computers could actually provide a significant breakthrough in computational power for certain types of problems. But at that point, building an actual quantum computer was far beyond anyone’s capability.

Who’s interested in quantum computing?

Fast forward to today, and we still don’t have the ability to implement a large-scale quantum computer that can fulfill the promises of the 1990s, nor are we very close to doing so. Yet interest in quantum computing has exploded far beyond academia. In industry, sectors as varied as automotive, aerospace, pharmaceuticals, and finance have begun developing strategies for implementing quantum technology. And governments and militaries around the world are investing heavily to ensure that they do not fall behind in what some are calling the next “space race” for technological supremacy.

Fueling this interest are significant financial investments in promoting and developing quantum computing devices from a growing number of companies. This includes many of the large technology stalwarts, such as IBM, Google, Microsoft, Intel, and Honeywell. And it also includes companies formed specifically to build quantum computers, such as D-Wave, Rigetti, IonQ, and a host of other startups. And rather than relying on the famous algorithms from the 1990s, which require fault-tolerant, large-scale devices, a new class of algorithms has recently emerged that is specifically designed for noisy, intermediate-scale quantum, or “NISQ”, devices — that is, the types of devices that are most likely to come into existence in the near future.

Driven by this promise of near-term benefits, investment and excitement in the quantum computing space have grown to the point where there are entire conferences devoted to industrial and government applications of the technology, even attended by (gasp!) businesspeople and IT leaders. Beginning in 2017, an annual Quantum for Business conference has been held in the Bay Area, attracting hundreds of attendees from a variety of disciplines and industries. And on the East Coast, the first Quantum.Tech conference in 2019 also drew hundreds of attendees.

Does this mean we are at peak quantum hype?

It’s hard to say, but the hype is certainly not slowing down. Just this month, IBM announced the imminent launch of a 53-qubit quantum computer, to be publicly available over the cloud. And then, only two days later, news leaked that Google believes they have achieved a technical milestone known as “quantum supremacy” using their own 53-qubit device. While these devices still can’t do anything useful (as far as we know), the technology is advancing at a rate that makes many believe that useful applications will emerge within the next few years. This belief has led to the rise of quantum software and quantum consulting companies, such as QCWare and Zapata Computing, who work with clients to identify potential business use cases for quantum computing and to help educate their workforce to be quantum-ready. Companies like IBM and Microsoft also have extensive quantum outreach programs.

But we still need more research!

Despite all this progress, there is significant doubt among most experts in the community that today’s quantum computing technology — most commonly, qubits engineered on carefully-fabricated superconducting chips — will be scalable to the thousands or millions of qubits needed in the long-term to produce fault-tolerant quantum devices. So while there is good reason to be optimistic about the short-term applicability of the existing technology, we still need innovation! We are barely at the beginning of the quantum computing revolution. We need scientists to identify new potential qubit technologies, refine them in the lab, create companies to scale and commercialize them, and continue to drive the cycle of investment, in pursuit of the discovery and development of these quantum computing technologies that could quite literally change the world. (No pressure, though. If you don’t do it, someone else most certainly will.)

Why Qubit Count Is Not Everything

The qubit is often portrayed as the fundamental unit of information in a quantum computer. This seems natural, since in ordinary classical computers, information is always represented in bits and bytes — sequences of 0s and 1s. And therefore, bits and bytes are often used to measure everything from processor architecture (32-bit or 64-bit) to hard drive size (terabytes) to network speeds (gigabits per second).

And so, as companies have begun to develop early-stage quantum computers, nearly always the first attribute reported is the number of qubits that the device contains. 5 qubits. 10 qubits. 19 qubits. 72 qubits.

But wait — what does it mean to say that a quantum computer “has” 72 qubits? Should we think of this in terms of the “bitness” of processor architecture — as the amount of information that can be processed at once? Or is this somehow a quantum memory, and 72 qubits is the information storage capacity of the device? But either way, a quantum computer with 72 qubits must be far superior to one with 10 qubits, right? Answering these questions requires a better understanding of what a qubit actually is.

What is a qubit?

If you ask someone knowledgeable this question, you’ll usually get some version of the following answer: “An ordinary bit can have a value of either 0 or 1, but a qubit can have a value that is any arbitrary superposition of 0 and 1.” Or, you’ll be shown a picture of a sphere, with 0 at the top and 1 at the bottom, and you’ll be told that a qubit can represent any point on the surface of that sphere. So for a qubit, there are not just two possible values but infinitely many! So of course quantum computers are more powerful!
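
For the programmatically inclined, here is a minimal sketch of that answer in NumPy; the angles and variable names are mine, chosen arbitrarily for illustration, not any standard API.

```python
import numpy as np

# A single-qubit state |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
# (theta, phi) is the point on the sphere described above; the values are arbitrary.
theta, phi = np.pi / 3, np.pi / 4
alpha = np.cos(theta / 2)
beta = np.exp(1j * phi) * np.sin(theta / 2)
state = np.array([alpha, beta])

print(np.abs(state) ** 2)   # probabilities of reading 0 or 1: here ~[0.75, 0.25]
```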

That’s fine, and it’s a mathematically accurate description, but it does very little to help us understand conceptually how these “qubits” can help us. In fact, without some background in quantum physics, it can even be a little misleading. If one qubit has infinitely many possible values, doesn’t that mean we could just encode an arbitrarily large amount of information and store it inside a single qubit?

The key physical principle to understand here — and one of the things that really constrains how we can build a quantum computer, and how powerful they can be — is that when you try to read the value of a qubit, you will always either get a 0 or 1, and the value of the qubit itself also immediately “collapses” to 0 or 1. The information about the delicate superposition you encoded is completely lost. (This is an example of a paradox in quantum physics known as the measurement problem — the simple act of looking at the qubit actually changes its state!) So in other words, if you tried to encode the full-length Titanic movie into the state of one of your qubits, when you try to play it back, you’ll be sorely disappointed (or greatly relieved, depending on your feelings about the movie).
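
To make the collapse concrete, here is a small illustrative simulation in plain NumPy. The `measure` helper is a hypothetical stand-in of my own, not part of any particular quantum SDK.

```python
import numpy as np

rng = np.random.default_rng()

def measure(state):
    """Read out a qubit: returns 0 or 1, plus the collapsed post-measurement state."""
    p0 = np.abs(state[0]) ** 2
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.array([1.0, 0.0]) if outcome == 0 else np.array([0.0, 1.0])
    return outcome, collapsed   # the delicate superposition is gone

# An equal superposition: you get 0 or 1 with 50/50 odds, never the amplitudes themselves.
state = np.array([1, 1]) / np.sqrt(2)
print(measure(state))
```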

OK, so if we have a 72-qubit quantum computer but reading those qubits gives us only 72 bits of information, that doesn’t sound very exciting. Where’s the power of quantum computing? Well, the measurement problem only applies if we look at the qubits! So, obviously, we need to make things happen while we’re not looking. In other words, the qubits need to interact with one another directly.

Neighborly qubits

From the discussion above, it is hopefully clear that we shouldn’t use the number of qubits as the primary measure of quantum computing power. Instead, the power of quantum computing must come from interactions among the qubits — so we should be looking at how many of the qubits can interact with one another at a time, how long these interactions take, and how reliable they are. In addition to this, we also need to know how long the qubits themselves can survive. Today’s quantum computers are very imperfect, and even small fluctuations in the environment (such as tiny, stray electric fields) can cause qubits to lose their information — a phenomenon known as “decoherence”. (What’s really important is the ratio of the qubit interaction time to the qubit lifetime, as this gives a rough idea of how many interactions we can reliably perform before decoherence destroys our quantum information. But this is a topic for another post.)
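
As a back-of-the-envelope illustration of that ratio (the numbers below are purely hypothetical and vary enormously across real hardware platforms):

```python
# Purely illustrative numbers, not measurements from any real device.
gate_time = 200e-9         # seconds per two-qubit interaction (hypothetical)
coherence_time = 100e-6    # qubit lifetime before decoherence sets in (hypothetical)

rough_interaction_budget = coherence_time / gate_time
print(rough_interaction_budget)   # ~500 interactions before noise dominates
```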

So what does it mean to say that qubits interact with each other? There’s not really a good analogy here to classical computers, since “bits” aren’t really objects, but are typically just electrical signals that flow freely along wires. In a quantum computer, however, a qubit is typically a real, physical object of some kind — for example, a single atom, a single defect in some host material, or a very tiny superconducting circuit. This means that if you have some number of qubits, they have to be physically arranged in some way — often either in a one-dimensional chain or in a two-dimensional grid — and the control system for the quantum computer must be able to very precisely control the state of each individual qubit, as well as turn “on” and “off” the interactions between various qubits. This ability to control interactions is often called the “connectivity” of the quantum computer. Depending on the type of qubit being used, the quantum computer may implement “nearest-neighbor” connectivity, where each qubit is able to interact only with those that are sitting adjacent to it in the 1-D or 2-D layout; “all-to-all” connectivity, where each qubit is able to interact with any other qubit in the system; or something in between.
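
A rough way to picture connectivity is as a list of allowed qubit pairs, sometimes called a coupling map. Here is a tiny sketch for a hypothetical six-qubit device:

```python
from itertools import combinations

n = 6  # a small hypothetical device

# Nearest-neighbor connectivity on a 1-D chain: qubit i can interact only with i-1 and i+1.
nearest_neighbor = [(i, i + 1) for i in range(n - 1)]    # [(0, 1), (1, 2), ..., (4, 5)]

# All-to-all connectivity: any pair of qubits may interact directly.
all_to_all = list(combinations(range(n), 2))             # 15 pairs for 6 qubits

print(len(nearest_neighbor), len(all_to_all))            # 5 vs 15
```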

Connectivity is important because it determines how efficiently we can run quantum algorithms, which typically require some complex series of interactions among all of the qubits in the device. If the qubits have all-to-all connectivity, these interactions can be implemented directly; but if the connectivity is limited (e.g., nearest-neighbor), then implementing an interaction between two qubits that are physically distant actually requires several interactions with intermediary qubits in order to achieve the desired effect. And because, as discussed above, the number of interactions we can run is limited by the qubit interaction time and the qubit lifetime, an increase in the required number of physical qubit interactions due to limited connectivity can significantly hinder the complexity of the quantum algorithms we can successfully run on the device.
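
To see the cost of limited connectivity, consider the extra shuffling needed on a 1-D chain. This is only a rough sketch (real compilers route qubits far more cleverly), but the scaling is the point:

```python
def extra_swaps_on_chain(a, b):
    """Rough count of extra SWAP steps needed before qubits at chain positions
    a and b can interact; with all-to-all connectivity this would be zero."""
    return max(abs(a - b) - 1, 0)

# Interacting the two ends of a 6-qubit chain costs ~4 extra operations,
# each of which eats into the limited interaction budget estimated above.
print(extra_swaps_on_chain(0, 5))   # -> 4
```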

So what’s the right way to compare quantum computers?

This doesn’t have a cut-and-dried answer, but a meaningful comparison certainly needs to take into account all of these variables — qubit count, qubit connectivity, qubit interaction time, and qubit lifetime. One recent attempt at such a metric is known as “quantum volume”, introduced by several researchers at IBM. This attempts to assign a numerical value to a quantum computer that, very loosely, indicates the maximum number of qubits which can all successfully interact with one another and maintain a high probability of producing a correct result. It’s a bit clunky, and certainly less headline-friendly than a simple qubit count, but at least it’s a good-faith effort to capture the full picture. If you’d like to read more about this and other metrics, there’s a recent article in Science that describes some of the various techniques that companies and universities have been using: “How to evaluate computers that don’t quite exist.”
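
For a rough flavor of the idea, here is a sketch assuming the commonly quoted simplification that quantum volume is 2^n for the largest n at which the device can successfully run “square” circuits of n qubits and depth n; the success test below is just a placeholder for the actual statistical benchmarking protocol.

```python
def quantum_volume(max_width, runs_square_circuit_successfully):
    """Sketch only: report 2**n for the largest n-qubit, depth-n ("square")
    circuit the device runs with acceptably high success. The success test
    (a heavy-output benchmark in practice) is treated as a black box here."""
    best = 0
    for n in range(1, max_width + 1):
        if runs_square_circuit_successfully(n):
            best = n
    return 2 ** best

# A hypothetical device that handles square circuits only up to width 4:
print(quantum_volume(10, lambda n: n <= 4))   # -> 16
```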

(In the longer term, when we have fault-tolerant quantum computers enabled by quantum error correction, we will likely group large sets of physical qubits together into “logical qubits”, which automatically maintain their quantum state and are robust to errors. At that time, we will in fact care much more about logical qubit count than physical qubit count. But, again, this is a topic for another day.)

All this is to say: Just as we can’t judge the performance of a modern CPU solely by its clock speed, there is far more to understanding the performance of today’s quantum computers than simply the number of qubits. We must compare quantum computers based on their performance on the algorithms that we care about.

Why Quantum Winter Is Not Coming

Amid the recent explosion of startups and venture capital investment into quantum computing, there has been much talk of an inevitable “quantum winter”.

Map of Winter Storm Quantum

No, this is not some bleak doomsday scenario where our enemies win the race to develop a quantum computer and thrust our society into a winter of defeat and despair.

Instead, it’s the fear that the hype around quantum computing will far outpace the realities, investors will get frustrated by the failure to meet the inflated expectations, and funding for the industry and associated research will collapse. Let’s take a look at the basis for this fear.

The famous AI winter

The most famous “winters” through the history of technology were the AI winters of the late 20th century. From the invention of the first learning algorithm — the perceptron in the late 1950s — the popular press was enamored with the potential of this new breed of technology. Famously, the New York Times reported in 1958 of the potential for a computer that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

Government funding for AI research exploded in the 1950s and 1960s, but people got frustrated (no robots yet? where are my robots?!), funding was cut, and the 1970s became sort of the first “winter” for AI. By the 1980s, funding had returned, and the field was again on an upslope. But experts worried that hype was again outpacing reality, and indeed, the late 1980s and 1990s brought another collapse in funding and the failure of many AI-focused companies.

Why we fear a quantum winter

Levels of government funding and industry investment in quantum computing are unprecedented, and announcements of new funding seem to arrive nearly weekly. But those of us in the field know all too well that, for practical purposes, quantum computers are still completely useless. Sure, there’s a ton of great work being done that will pay dividends in the future, but most realists don’t expect widespread quantum adoption for practical problems for many years or even decades. Will investors be patient? We hope so. For every hype-filled article, there are plenty of experts trying to manage expectations and avoid inevitable disappointment.

This is by no means a unique situation. Gartner publishes a “Hype Cycle” every year for emerging technologies, with a prominent “Peak of Inflated Expectations” — a typically crowded list of over-hyped technologies just waiting for their proverbial bubble to burst. In their most recent analysis, quantum computing is still on the rising edge of this curve, almost unnoticeable among a tidal wave of AI-related technologies (seems like the AI spring has sprung).

So at some point, we should certainly expect the hype around quantum computing to subside. This is the typical trend for new technologies, and building a quantum computer is a long slog with a much more extended time frame than, say, the next blockchain. The current hype is unsustainable. But does that also mean that, as with AI in the past, funding will dry up? Is a quantum winter around the corner? I would argue that this is extremely unlikely.

Why quantum computing will not suffer the fate of AI

1. Technological maturity. Quantum computing today is a far more mature field than AI was in the 1950s. Modern AI research didn’t really begin until around the 1940s, and so the field was only about a decade old when the first massive wave of investment came in the 1950s. People had grand dreams, but no one knew what AI would truly be capable of.

By contrast, quantum computing research began in earnest in the 1980s (spurred in part by Richard Feynman), and so at this point the field has nearly four full decades of research behind it. And the technological feasibility of quantum computing is not just wishful thinking (some people would beg to differ with this statement, but they are a small minority). The principles of quantum physics underlying quantum computing have been around since the 1920s and have been experimentally tested many times over the last century, with astonishing success. And quantum error correction — the key to making quantum computers fault-tolerant and scalable — has been on a firm mathematical foundation since Shor and Steane developed their codes in the 1990s.

2. Frankenstein’s monster. It’s easy for people’s imaginations to run wild when thinking about robots. When the perceptron was invented in the 1950s, no one had any realistic plan for developing a conscious machine. But people bought into the idea, in part because it had been the stuff of science fiction for so long. People had an intuitive sense of what this technology could look like, and what impact it could have. If you were expecting a walking, talking, reproducing robot, and all you got were a few algorithms that can classify images, you’d lose faith, too.

Most technologies can’t capture the imagination the way AI does, and quantum computing is no exception. Sure, there is an international spy novel written about quantum computing (currently ranked #1492 on the Espionage Thrillers bestsellers list at Amazon!), and there are plenty of misunderstandings about what a quantum computer will be able to do, but we don’t run a serious risk of investors being influenced by their knowledge of science fiction.

3. Factoring, factoring, factoring. For those of us in the field, it has become cliché to mention Shor’s algorithm, by which quantum computers will be able to quickly factor extremely large integers and thereby break the RSA encryption scheme that is used to secure basically everything on the Internet. And while most algorithms research today is focused on near-term applications of smaller (“NISQ-era”) quantum computers, it’s impossible to overstate the importance of factoring to the field as a whole. For essentially anything transmitted over the Internet today (or for the foreseeable future, until a quantum-safe encryption standard broadly replaces RSA), an attacker who wants to decrypt it only has to store the encrypted data and wait for a fault-tolerant quantum computer to exist. Sure, this may be decades away — but maybe not. The potential value to industry investors is enormous, and will no doubt be worth the risk and the wait. And no government can afford to take the chance that a rival nation might get a quantum computer first. As long as factoring remains the killer app of quantum computing (and assuming no one discovers an efficient classical factoring algorithm), it’s hard to envision a scenario where funding for quantum computing dries up, despite the inevitable decline in hype. A toy sketch of why “store now, decrypt later” works appears below.
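
Here is that toy illustration, with absurdly small, made-up numbers; real RSA moduli are thousands of bits long, which is exactly why classical factoring fails and Shor’s algorithm matters.

```python
def trial_factor(n):
    """Return a nontrivial factor of n by trial division (hopeless at real RSA sizes)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

# A hypothetical, absurdly small RSA key: n = p * q, public exponent e.
p, q, e = 1009, 2003, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent (modular inverse; Python 3.8+)

# An attacker sees only (n, e) plus stored ciphertext. Factoring n recovers d.
p_found = trial_factor(n)
q_found = n // p_found
d_recovered = pow(e, -1, (p_found - 1) * (q_found - 1))
assert d_recovered == d                  # the private key falls out immediately
```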

Thoughts? Leave a note in the comments and I’d love to discuss.