Update (April 19): Yilei Chen announced the discovery of a bug in the algorithm, which he does not know how to fix. This was independently discovered by Hongxun Wu and Thomas Vidick. At present, the paper does not provide a polynomial-time algorithm for solving LWE.
If you’re a normal person — that is, a person who doesn’t obsessively follow the latest cryptography news — you probably missed last week’s cryptography bombshell. That news comes in the form of a new e-print authored by Yilei Chen, “Quantum Algorithms for Lattice Problems”, which has roiled the cryptography research community. The result is now being evaluated by experts in lattices and quantum algorithm design (and to be clear, I am not one!) but if it holds up, it’s going to be quite a bad day/week/month/year for the applied cryptography community.
Rather than elaborate at length, here’s a quick set of five bullet points giving the background.
(1) Cryptographers like to build modern public-key encryption schemes on top of mathematical problems that are believed to be “hard”. In practice, we need problems with a specific structure: they must admit efficient solutions for those who hold a secret key, or “trapdoor”, and yet admit no efficient solution for folks who don’t. While many problems have been considered (and often discarded), most schemes we use today are based on three problems: factoring (the RSA cryptosystem), discrete logarithm (Diffie-Hellman, DSA) and the elliptic curve discrete logarithm problem (EC-Diffie-Hellman, ECDSA, etc.)
(2) While we would like to believe our favorite problems are fundamentally “hard”, we know this isn’t really true. Researchers have devised algorithms that solve all of these problems quite efficiently (i.e., in polynomial time) — provided someone figures out how to build a quantum computer powerful enough to run the attack algorithms. Fortunately such a computer has not yet been built!
(3) Even though quantum computers are not yet powerful enough to break our public-key crypto, the mere threat of future quantum attacks has inspired industry, government and academia to join forces Fellowship-of-the-Ring-style in order to tackle the problem right now. This isn’t merely about future-proofing our systems: even if quantum computers take decades to build, future quantum computers could break encrypted messages we send today!
(4) One conspicuous outcome of this fellowship is NIST’s Post-Quantum Cryptography (PQC) competition: this was an open competition designed to standardize “post-quantum” cryptographic schemes. Critically, these schemes must be based on different mathematical problems — most notably, problems that don’t seem to admit efficient quantum solutions.
(5) Within this new set of schemes, the most popular class of schemes are based on problems related to mathematical objects called lattices. NIST-approved schemes based on lattice problems include Kyber and Dilithium (which I wrote about recently). Lattice problems are also the basis of several efficient fully-homomorphic encryption (FHE) schemes.
This background sets up the new result.
Chen’s (not yet peer-reviewed) preprint claims a new quantum algorithm that efficiently solves the “shortest independent vector problem” (SIVP, as well as GapSVP) in lattices with specific parameters. If it holds up, the result could (with numerous important caveats) allow future quantum computers to break schemes that depend on the hardness of specific instances of these problems. The good news here is that even if the result is correct, the vulnerable parameters are very specific: Chen’s algorithm does not immediately apply to the recently-standardized NIST algorithms such as Kyber or Dilithium. Moreover, the exact concrete complexity of the algorithm is not yet clear: it may turn out to be impractical to run, even if quantum computers become available.
But there is a saying in our field that attacks only get better. If Chen’s result can be improved upon, then quantum algorithms could render obsolete an entire generation of “post-quantum” lattice-based schemes, forcing cryptographers and industry back to the drawing board.
In other words, both a great technical result — and possibly a mild disaster.
As previously mentioned: I am neither an expert in lattice-based cryptography nor quantum computing. The folks who are those things are very busy trying to validate the writeup: and more than a few big results have fallen apart upon detailed inspection. For those searching for the latest developments, here’s a nice writeup by Nigel Smart that doesn’t tackle the correctness of the quantum algorithm (see updates at the bottom), but does talk about the possible implications for FHE and PQC schemes (TL;DR: bad for some FHE schemes, but really depends on the concrete details of the algorithm’s running time). And here’s another brief note on a “bug” that was found in the paper, which seems to have been quickly addressed by the author.
Up until this week I had intended to write another long wonky post about complexity theory, lattices, and what it all meant for applied cryptography. But now I hope you’ll forgive me if I hold onto that one, for just a little bit longer.
Unitards: another consequence of quantum computing.
Back in December I asked readers for some topics they were particularly keen to read about on this blog. One of the best (and sadly, most challenging) suggestions was to say something about post-quantum cryptography. Roughly speaking, this term describes the future of cryptography after quantum computers arrive and screw things up for everyone.
Now, just for the record, quantum computing is not a bad thing; in fact, it’s one of the most exciting discoveries of our time. At minimum QCs will help us efficiently solve challenging problems that take forever on today’s computers. It’s entirely possible, though not guaranteed, that they’ll totally revolutionize the way we compute (see this breathless article). Whichever it is, quantum computers will be a general good.
But not necessarily for cryptographers.
This is because cryptographers don’t always want faster algorithms for solving hard problems. It isn’t that we thrive on people’s frustration; it’s that many of our best cryptosystems simply depend on the hardness of problems like integer factorization. By making these problems easy, QCs throw a huge wrench into the works. And this isn’t a hypothetical concern: thanks to Shor’s algorithm, many of our most beloved cryptosystems are already living on borrowed time.
But all is not lost. We have two very important things working in our favor. The first is time. For all the excitement around quantum computing, nobody’s managed to actually build a useful one. And (no matter what D-Wave says) it doesn’t look like they will anytime soon.
The second is that quantum computers are not magic. QCs are very good at quickly solving certain problems, but their usefulness is more limited than most people realize. Moreover, we only know of a handful of useful quantum algorithms today. This could change: think how many classical algorithms were discovered after classical computers were built. But it gives us hope.
In the rest of this post I’m going to try to give a flavor of the problem, and point out some of the bright spots — areas where we feel pretty good. I want to be clear that this isn’t my research area, and many of the best ideas in this post come from the excellent book Post-Quantum Cryptography, edited by Bernstein, Buchmann and Dahmen. (A dead-tree copy of the book will run you $90, but you can check out Dan Bernstein’s fantastic intro right here.)
Quantum vs. Classical
Quantum computers are only human. (Well, technically, they’re complicated machines full of super-cooled liquid nitrogen and helium.)
There are many ways to build a classical computer. Whether you use transistors, gears, or photons, one thing binds all of these designs together: at any given moment, a classical computer should be in exactly one state.
To bring this back to cryptography, let’s say you’re trying to brute-force your way through the keyspace of a cryptosystem. In a classical computer this means trying one key at a time, for as many keys as it takes to get a useful result. Parallelization can speed this up, but ultimately just moves the problem sideways — trading resources for time.
Quantum computers use an entirely different model of computing. The fundamental component of a quantum computer is a quantum bit (‘qubit’), a value that can be in two states (1 and 0) at the same time. By entangling n qubits, it’s possible to construct a computer that exists in a ‘superposition’ of 2^n states simultaneously. And this is when things get interesting.
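To see why that state space explodes, note that an n-qubit register is described by 2^n amplitudes. The sketch below just builds the uniform superposition as a plain list of numbers — a classical simulation of the bookkeeping, with nothing quantum about it:

```python
from math import sqrt

def uniform_superposition(n: int) -> list[float]:
    # An n-qubit register is described by 2**n amplitudes; a uniform
    # superposition assigns each basis state the amplitude 1/sqrt(2**n),
    # so that the squared amplitudes (probabilities) sum to 1.
    dim = 2 ** n
    return [1 / sqrt(dim)] * dim

state = uniform_superposition(3)
print(len(state))                            # 8 basis states for 3 qubits
print(round(sum(a * a for a in state), 6))   # probabilities sum to 1.0
```

Simulating n qubits classically costs memory exponential in n, which is one intuition for where the quantum advantage might come from.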
And complicated. Entangling qubits is one thing (and not necessarily an easy thing). The real trick is finding a way to do computation with a computer that’s in more than one state, and — even more importantly — to pull a single ‘right’ answer back into our classical world once the computation is done.
These words obscure a lot of detail; an entire field’s worth of detail. What matters for our purposes is that in some cases it seems like quantum algorithms can harness this power to accomplish things much faster than classical algorithms. When this happens, it makes certain problems much easier (i.e., less time-consuming) to solve. But not every problem.
Quantum computers are not magic!
The important thing to know is that quantum computers can’t do everything. They can’t see the future. They can’t even (we believe) solve NP-complete problems in polynomial time. In fact, the general belief among researchers is that the class BQP — roughly, things that quantum computers can do efficiently — does not include all of NP (though in fairness, nobody knows for sure).
Proposed hierarchy of complexity classes with a guess for where BQP may fit. (Wikipedia)
More importantly, anything that a quantum computer can do can also be done by a classical computer, though possibly with an exponential slowdown. This means that QCs aren’t going to do anything against information-theoretically secure cryptosystems, like the One Time Pad. Those schemes don’t care how powerful the attacker’s computer is.
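The One Time Pad is simple enough to show in a few lines. This is a minimal sketch — the key must be truly random, as long as the message, and never reused, or the security guarantee evaporates:

```python
import secrets

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR each message byte with a key byte; the same function
    # decrypts, since XOR is its own inverse.
    assert len(key) == len(msg), "OTP key must match message length"
    return bytes(k ^ m for k, m in zip(key, msg))

msg = b"meet me at noon"
key = secrets.token_bytes(len(msg))  # truly random, message-length, single-use
ct = otp_encrypt(key, msg)
print(otp_encrypt(key, ct) == msg)   # True: decryption recovers the message
```

Without the key, every plaintext of that length is equally consistent with the ciphertext — which is why no computer, quantum or otherwise, helps the attacker.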
Given these facts, you might not be all that impressed with quantum computers right now. You’d be wrong.
An algorithm for a computer that doesn’t exist
Dr. Peter Shor
The reason we’re so freaked out about quantum computing is the harmless-looking gentleman above. Dr. Peter Shor claims responsibility for one of the most amazing, yet ostensibly useless, theoretical accomplishments in the history of computing: he developed a polynomial-time algorithm for factoring large integers. The only problem is that nobody has a computer that can run it.
I can only imagine how he explains this to his neighbors.
Shor’s algorithm actually has two components: (1) a classical portion that converts the factoring problem into an order-finding problem. And (2) a quantum portion that solves the latter in polynomial time (with some reasonable error probability). Shor’s algorithm can also be used to solve the Discrete Logarithm Problem (DLP) in multiplicative groups of finite fields and (with tweaks) in elliptic curve groups.
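Here’s a sketch of that classical portion. The order-finding step below is done by brute force — exponential, and precisely the step the quantum part replaces with a polynomial-time routine — but the reduction from an order to a factor is the genuine classical post-processing:

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    # Smallest r > 0 with a**r = 1 (mod n). This is the step Shor's
    # quantum subroutine does in polynomial time; here we do it the
    # slow classical way, just to demonstrate the reduction.
    # Assumes gcd(a, n) == 1.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n: int, a: int):
    # Classical post-processing: turn an order into a nontrivial factor.
    r = find_order(a, n)
    if r % 2 == 1:
        return None              # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None              # trivial square root: retry with another a
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical_part(15, 7))  # (3, 5)
```

For n = 15 and a = 7 the order is 4, so 7^2 = 4 (mod 15), and gcd(3, 15) = 3 and gcd(5, 15) = 5 recover the factors.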
The problem? Almost every practical public-key cryptosystem in use today depends on the hardness of the above problems. That includes RSA and Diffie-Hellman (both used in TLS), Elgamal (used in PGP), DSA and ECDSA (used to sign X.509 certificates, among other things), and even the Rabin cryptosystem (used primarily by Bram Cohen).
These are important algorithms, used today not just to protect short-term data, but also to encrypt data that may still be important several decades from now. If anyone manages to build a reasonable quantum computer in that time, some people are going to be pretty unhappy.
I should quickly mention that Shor’s algorithm is not the only quantum algorithm that cryptographers care about — it just happens to be the most interesting one. There’s also Grover’s algorithm, which can be used to improve brute-force attacks on symmetric encryption schemes like AES. But the speedup from Grover’s algorithm is ‘only’ quadratic (it essentially halves the key length), not exponential like Shor’s algorithm. We can live with quadratic.
So what can we do about it?
With all the background out of the way, we can finally get to the point of this post. What can cryptographers do to deal with the looming potential of quantum computers? And when should we start doing it? There are a few answers to this question.
Solution #1: Business as usual
This is probably my favorite solution. Don’t do anything. Quantum computing is a long way off, and it may never happen at all. Let’s just keep doing what we’re doing, until we’re old enough that someone gives us a pension and we can dump the problem in someone else’s lap. Besides, isn’t there an asteroid heading towards the Earth?
No, this isn’t really a solution at all, but it seems to be the default mentality right now. It probably isn’t hurting us — for the moment.
Solution #2: Don’t overreact
While Shor’s algorithm is a pretty serious threat to RSA, (EC)DSA and most of the public-key algorithms we use in practice, these aren’t the only algorithms that we use. And the good news is that quantum computers probably won’t have the same effect on those.
AES, for example, would be affected by Grover’s algorithm, but only (in the absolute worst case) by reducing the effective key size by half. So a 256-bit key would still provide at least the same security margin as today’s 128-bit keys. We can probably live with that — and if not, we can always get bigger keys.
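The arithmetic behind the rule of thumb is easy to write down. Grover’s algorithm searches a keyspace of size 2^k in roughly sqrt(2^k) = 2^(k/2) steps, so the effective key length is halved:

```python
# Grover's speedup is quadratic: searching 2**k keys takes roughly
# 2**(k/2) quantum steps, so a k-bit key yields about k/2 bits of
# security against a quantum brute-force attacker.
def grover_effective_bits(key_bits: int) -> int:
    return key_bits // 2

for k in (128, 192, 256):
    print(f"{k}-bit key -> ~{grover_effective_bits(k)} bits vs. a quantum attacker")
```

This is why “double the key length” is the standard prescription against Grover, in contrast to Shor’s exponential speedup, which no key-length increase can outrun.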
The only hole in this argument is that we have absolutely no idea what specific attacks people will develop against ciphers like AES in the future. Even now — using only classical algorithms — researchers have managed to shave a couple of bits off of the AES security margin. Quantum computers may substantially expand our toolbox.
Solution #3: Find new problems

If you haven’t been to a crypto conference lately, you haven’t been exposed to the new hotness in our field: lattice-based cryptography. Lattices are a mathematical structure that happens to turn up a lot in nature: for example, in the ordering of atoms in a crystal.
In a more abstract setting, lattice problems can be used to build a wide variety of cryptosystems. The best-known examples include the NTRU scheme (used for encryption and signatures) and Craig Gentry’s fully-homomorphic encryption scheme. But there are plenty more.
Lattices get a lot of attention these days, in part because you can do neat things with them (fully-homomorphic encryption is a great example). But a more general argument for lattice-based crypto is its resistance to quantum attacks. In contrast to factoring and DLP, there are no quantum algorithms known to run (substantially) faster than classical algorithms against the lattice problems we use to construct modern cryptosystems.
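To give a flavor of the underlying problem, here’s a toy two-dimensional instance of the shortest vector problem, solved by brute-force enumeration. The basis vectors are made-up numbers for illustration; in two dimensions this is easy, and the cryptographic hardness only appears in high dimensions:

```python
from itertools import product

# A toy 2-D lattice: all integer combinations x*b1 + y*b2 of two basis
# vectors (made-up numbers for illustration). Finding the shortest
# nonzero lattice vector is easy here, but believed hard -- classically
# and (so far) quantumly -- in high dimensions.
b1, b2 = (201, 37), (1648, 297)

def shortest_vector(bound: int = 50):
    best, best_norm = None, float("inf")
    for x, y in product(range(-bound, bound + 1), repeat=2):
        if x == y == 0:
            continue
        v = (x * b1[0] + y * b2[0], x * b1[1] + y * b2[1])
        norm = v[0] ** 2 + v[1] ** 2
        if norm < best_norm:
            best, best_norm = v, norm
    return best

v = shortest_vector()
print(v, v[0] ** 2 + v[1] ** 2)  # squared length 1025
```

Note that the short vector looks nothing like the (deliberately skewed) basis vectors — recovering a “nice” basis from a “bad” one is the essence of the hard problem.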
As with all of these claims, the keyword is ‘known’. Will we someday find such algorithms? Who knows. But deploying schemes that aren’t quantum-broken (yet) certainly feels a lot better than deploying schemes that are.
Solution #4: Use quantum against itself

The last, and possibly most desperate solution of all, is to dispense with computational problems altogether, and simply use the strength of quantum mechanics against itself.
This type of cryptography is known as Quantum Key Distribution (QKD). QKD uses quantum-mechanical effects, rather than hard computational problems, to protect messages transmitted over a long distance.
The basic idea is quite simple: anyone who measures the quantum state of a particle must by definition change it (physics students: you’re welcome to correct me here!). The exception is entangled particles, which will both measure identically, even if the measurement is conducted at distant points. QKD rigs entangle particles, then transmit one set to a remote party via a long fiber-optic cable. If an attacker taps the cable and measures the particle, his tampering will be detected. If not, both parties can derive a random one-time pad, which they can then use to communicate. Quantum computers can’t do anything about this.
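Here’s a classical cartoon of the key-sifting step, just to show where the shared one-time pad bits come from. This models the statistics, not the physics — the bases, outcomes, and keep-rate are the only things it captures:

```python
import secrets

# Toy model of QKD sifting: each party measures its half of an entangled
# pair in a randomly chosen basis. When the bases happen to match
# (~half the time), the outcomes agree and the shared bit joins the key.
# A classical cartoon of the protocol, not a physics simulation.
def sift_key(n_pairs: int) -> list[int]:
    key = []
    for _ in range(n_pairs):
        alice_basis = secrets.randbelow(2)
        bob_basis = secrets.randbelow(2)
        if alice_basis == bob_basis:
            key.append(secrets.randbelow(2))  # the correlated outcome
    return key

key = sift_key(1000)
print(len(key))  # roughly 500 bits survive sifting
```

In a real system the parties would also sacrifice a sample of the sifted bits to check for the error rate an eavesdropper’s measurements would introduce.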
Unfortunately the more ‘perfect’ something sounds, the less you should trust it, especially if it’s a research system. This is what QKD researchers discovered last summer, when a team showed how to override the photon detectors in a QKD system with adversarially-generated signals of their own. The result: they were able to completely intercept the key without detection. This can all be fixed, but it goes to show that this is a harder problem than it looks.
More to the point, QKD requires a massive change in the way we distribute data. Today we send our ciphertexts through packet-switched networks, but QKD systems require dedicated fiber-optic connections between each participant. Thus, QKD isn’t likely to give us a good replacement for TLS anytime soon.
In summary
This has been yet another long post. Before I wrap up I just want to preemptively apologize to all of the physicists and quantum complexity theorists whose work I’ve mangled along the way.
No matter how terribly I’ve done this, the basic story is sound: quantum computers are at least theoretically possible, they’re quite likely to be practical, and they really will change a lot of things when they arrive. Mostly for the better. But not for the cryptosystems we use today.
If we don’t find a way to do something about this, the next few decades are going to be a very interesting time for crypto. For this reason alone, I’m hoping that asteroid isn’t coming; I’d really like to see how it all turns out.