Attack of the week: searchable encryption and the ever-expanding leakage function

A few days ago I had the pleasure of hosting Kenny Paterson, who braved snow and historic cold (by Baltimore standards) to come talk to us about encrypted databases.

Kenny’s newest result is with first authors Paul Grubbs, Marie-Sarah Lacharité and Brice Minaud (let’s call it GLMP). It isn’t so much about building encrypted databases, as it is about the risks of building them badly. And — for reasons I will get into shortly — there have been a lot of badly-constructed encrypted database schemes going around. What GLMP point out is that this weakness isn’t so much a knock against the authors of those schemes, but rather, an indication that they may just be trying to do the impossible.

Hopefully this is a good enough start to get you drawn in. Which is excellent, because I’m going to need to give you a lot of background.

What’s an “encrypted” database, and why are they a problem?

Databases (both relational and otherwise) are a pretty important part of the computing experience. Modern systems make vast use of databases and their accompanying query technology in order to power just about every software application we depend on.

Because these databases often contain sensitive information, there has been a strong push to secure that data. A key goal is to encrypt the contents of the database, so that a malicious database operator (or a hacker) can’t get access to it if they compromise a single machine. If we lived in a world where security was all that mattered, the encryption part would be pretty easy: database records are, after all, just blobs of data — and we know how to encrypt those. So we could generate a cryptographic key on our local machine, encrypt the data before we upload it to a vulnerable database server, and just keep that key locally on our client computer.

Voila: we’re safe against a database hack!

The problem with this approach is that encrypting the database records leaves us with a database full of opaque, unreadable encrypted junk. Since we have the decryption key on our client, we can decrypt and read those records after we’ve downloaded them. But this approach completely disables one of the most useful features of modern databases: the ability for the database server itself to search (or query) the database for specific records, so that the client doesn’t have to.

Unfortunately, standard encryption borks search capability pretty badly. If I want to search a database for, say, employees whose salary is between $50,000 and $100,000, my database is helpless: all it sees is row after row of encrypted gibberish. In the worst case, the client will have to download all of the data rows and search them itself — yuck.

This has led to much wailing and gnashing of teeth in the database community. As a result, many cryptographers (and a distressing number of non-cryptographers) have tried to fix the problem with “fancier” crypto. This has not gone very well.

It would take me a hundred years to detail all of the various solutions that have been put forward. But let me just hit a few of the high points:

  • Some proposals have suggested using deterministic encryption to encrypt database records. Deterministic encryption ensures that a given plaintext will always encrypt to a single ciphertext value, at least for a given key. This enables exact-match queries: a client can simply encrypt the exact value (“John Smith”) that it’s searching for, and ask the database to identify encrypted rows that match it. (A minimal sketch of this idea appears just after this list.)
  • Of course, exact-match queries don’t support more powerful features. Most databases also need to support range queries. One approach to this is something called order revealing encryption (or its weaker sibling, order preserving encryption). These do exactly what they say they do: they allow the database to compare two encrypted records to determine which plaintext is greater than the other.
  • Some people have proposed to use trusted hardware to solve these problems in a “simpler” way, but as we like to say in cryptography: if we actually had trusted hardware, nobody would pay our salaries. And, speaking more seriously, even hardware might not stop the leakage-based attacks discussed below.
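
To make the deterministic-encryption bullet concrete, here is a minimal sketch in Python that uses a keyed PRF (HMAC-SHA256) as the deterministic "tag" a client might store next to each encrypted row. The key, the column values and the storage layout are all invented for illustration; this is a sketch of the idea, not any particular product's design.

import hashlib, hmac

def deterministic_tag(key, value):
    # Same key + same plaintext always yields the same tag, so the server
    # can match tags without ever seeing a plaintext.
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

key = b"client-side secret, never sent to the server"

# Insert time: the client stores a tag next to each (separately) encrypted row.
rows = [(deterministic_tag(key, name), "<encrypted row for %s>" % name)
        for name in ["John Smith", "Jane Doe", "John Smith"]]

# Query time: the client sends only the tag for the value it wants.
wanted = deterministic_tag(key, "John Smith")
matches = [blob for tag, blob in rows if tag == wanted]   # two rows match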

This summary barely scratches the surface of this problem, and frankly you don’t need to know all the details for the purpose of this blog post.

What you do need to know is that each of the above proposals entails some degree of “leakage”. Namely, if I’m an attacker who is able to compromise the database, both to see its contents and to see how it responds when you (a legitimate user) make a query, then I can learn something about the data being queried.

What are some examples of leakage, and what’s a leakage function?

Leakage is a (nearly) unavoidable byproduct of an encrypted database that supports queries. It can happen when the attacker simply looks at the encrypted data, as she might if she was able to dump the contents of your database and post them on the dark web. But a more powerful type of leakage occurs when the attacker is able to compromise your database server and observe the query interaction between legitimate client(s) and your database.

Take deterministic encryption, for instance.

Deterministic encryption has the very useful, but also unpleasant, feature that the same plaintext will always encrypt to the same ciphertext. This leads to very obvious types of leakage, in the sense that an attacker can spot repeated values in the dataset itself. Extending this to the active setting, if a legitimate client queries on a specific encrypted value, the attacker can see exactly which records match that encrypted value. She can also see how often each value occurs, which gives an indication of what value it might be (e.g., the last name “Smith” is more common than “Azriel”). All of these vectors leak valuable information to an attacker.
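
As a toy illustration of that frequency leakage (not any particular published attack), suppose the attacker dumps the deterministic ciphertexts and also has an auxiliary guess at how common each plaintext is. The names, counts and frequencies below are invented:

from collections import Counter

# What the attacker sees: one opaque, deterministic ciphertext per row.
observed = ["c1", "c1", "c1", "c2", "c1", "c3", "c2"]

# Auxiliary knowledge: candidate plaintexts and their rough relative frequency.
auxiliary = {"Smith": 0.50, "Jones": 0.30, "Azriel": 0.20}

# Rank ciphertexts by how often they repeat, rank candidates by popularity,
# and line the two lists up. Crude, but effective against skewed columns.
by_count = [c for c, _ in Counter(observed).most_common()]
by_freq = sorted(auxiliary, key=auxiliary.get, reverse=True)
guesses = dict(zip(by_count, by_freq))
# -> {'c1': 'Smith', 'c2': 'Jones', 'c3': 'Azriel'}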

Other systems leak more. Order-preserving encryption leaks the exact order of a list of underlying records, because it causes the resulting ciphertexts to have the same order. This is great for searching and sorting, but unfortunately it leaks tons of useful information to an attacker. Indeed, researchers have shown that, in real datasets, an ordering can be combined with knowledge about the record distribution in order to (approximately) reconstruct the contents of an encrypted database.
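
To see how little it takes, here is a crude sketch of that kind of rank matching, in the spirit of those reconstruction results: sort the ciphertexts (which order-preserving encryption lets you do), then assign each rank the corresponding quantile of whatever auxiliary distribution you believe the column follows. All ciphertext labels and salary figures here are invented.

import statistics

# The attacker's view: ciphertexts listed in plaintext order, which is
# exactly what order-preserving encryption gives away.
rows_in_order = ["ct_07", "ct_03", "ct_12", "ct_09"]

# Auxiliary knowledge: a guess at the underlying value distribution.
assumed_salaries = [42_000, 51_000, 55_000, 63_000, 78_000, 120_000]

# Map each rank to the corresponding quantile of the assumed distribution.
n = len(rows_in_order)
cuts = statistics.quantiles(assumed_salaries, n=n + 1)
guesses = {ct: round(cuts[i]) for i, ct in enumerate(rows_in_order)}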

Fancier order-revealing encryption schemes aren’t quite so careless with your confidentiality: they enable the legitimate client to perform range queries, but without leaking the full ordering so trivially. This approach can leak less information: but a persistent attacker will still learn some data from observing a query and its response — at a minimum, she will learn which rows constitute the response to a query, since the database must pack up the matching records and send them over to the client.

If you’re having trouble visualizing what this last type of leakage might look like, here’s a picture that shows what an attacker might see when a user queries an unencrypted database vs. what the attacker might see with a really “good” encrypted database that supports range queries:

[Figure: an attacker’s view of queries against an unencrypted database vs. a “good” encrypted database that supports range queries]

So the TL;DR here is that many encrypted database schemes have some sort of “leakage”, and this leakage can potentially reveal information about (a) what a client is querying on, and (b) what data is in the actual database.

But surely cryptographers don’t build leaky schemes?

Sometimes the perfect is the enemy of the good.

Cryptographers could spend a million years stressing themselves to death over the practical impact of different types of leakage. They could also try to do things perfectly using expensive techniques like fully-homomorphic encryption and oblivious RAM — but the results would be highly inefficient. So a common view in the field is that researchers should do the very best they can, and then carefully explain to users what the risks are.

For example, a real database system might provide the following guarantee:

“Records are opaque. If the user queries for all records BETWEEN some hidden values X AND Y then all the database will learn is the row numbers of the records that match this range, and nothing else.”

This is a pretty awesome guarantee, particularly if you can formalize it and prove that a scheme achieves it. And indeed, this is something that researchers have tried to do. The formalized description is typically achieved by defining something called a leakage function. It might not be possible to prove that a scheme is absolutely private, but we can prove that it only leaks as much as the leakage function allows.

Now, I may be overdoing this slightly, but I want to be very clear about this next part:

Proving your encrypted database protocol is secure with respect to a specific leakage function does not mean it is safe to use in practice. What it means is that you are punting that question to the application developer, who is presumed to know how this leakage will affect their dataset and their security needs. Your leakage function and proof simply tell the app developer what information your scheme is (provably) going to protect, and what it won’t.

The obvious problem with this approach is that application developers probably don’t have any idea what’s safe to use either. Helping them to figure this out is one goal of this new GLMP paper and its related work.

So what leaks from these schemes?

GLMP don’t look at a specific encryption scheme. Rather, they ask a more general question: let’s imagine that we can only see that a legitimate user has made a range query — but not what the actual queried range values are. Further, let’s assume we can also see which records the database returns for that query, but not their actual values.

How much does just this information tell us about the contents of the database?

You can see that this is a very limited amount of leakage. Indeed, it is possibly the least amount of leakage you could imagine for any system that both supports range queries and remains efficient. So in one sense, you could say the authors are asking a different and much more important question: are any of these encrypted databases actually secure?

The answer is somewhat worrying.

Can you give me a simple, illuminating example?

Let’s say I’m an attacker who has compromised a database, and observes the following two range queries/results from a legitimate client:

Query 1: SELECT * FROM Salaries BETWEEN ⚙️ and 🕹    Result 1: (rows 1, 3, 5)
Query 2: SELECT * FROM Salaries BETWEEN 😨 and 🎱    Result 2: (rows 1, 43, 3, 5)

Here I’m using the emoji to illustrate that an attacker can’t see the actual values submitted within the range queries — those are protected by the scheme — nor can she see the actual values of the result rows, since the fancy encryption scheme hides all this stuff. All the attacker sees is that a range query came in, and some specific rows were scooped up off disk after running the fancy search protocol.

So what can the attacker learn from the above queries? Surprisingly: quite a bit.

At the very minimum, the attacker learns that Query 2 returned all of the same records as Query 1. Thus the range of the latter query clearly overlaps with the range of the former. There is also an additional record (row 43) that is not within the range of Query 1. That tells us that row 43 must be either the “next” greater or smaller record relative to rows (1, 3, 5). That’s useful information.

Get enough useful information, and it turns out that it starts to add up. In 2016, Kellaris, Kollios, Nissim and O’Neill showed that if you know the distribution of the query range endpoints — for example, if you assume that they are uniformly random — then you can get more than just the order of records. You can reconstruct the exact value of every record in the database.

This result is statistical in nature. If I know that the queries are uniformly random, then I can model how often a given value (say, Age=34 out of a range 1-120) should be responsive to a given random range query. By counting the actual occurrences of a specific row across many such queries, I can guess which rows correspond to specific record values. The more queries I see, the more certain I can be. The Kellaris et al. paper shows that this takes O(N^4 \log N) queries, where N is the number of possible values your data can take on (e.g., the ages of your employees, ranging between 1 and 100, would give N=100). This is for an arbitrary dataset. The results get much better if the database is “dense”, meaning every possible value occurs at least once.

In practice the Kellaris et al. results mean that database fields with small domains (like ages) could be quickly reconstructed after observing a reasonable number of queries from a legitimate user, albeit one who likes to query everything randomly.

So that’s really bad!

The main bright spot in this research — at least up until recently — was that many types of data have much larger domains. If you’re dealing with salary data ranging from, say, $1 to $200,000, then N=200,000 and that dominant N^4 term tends to make the Kellaris et al. attack impractical, simply because it will take too long. Similarly, data like employee last names (encoded in a form that can be sorted and range-queried) gives you even vaster domains, like N=26^{12}, and so perhaps we could pleasantly ignore these results and spend our time on more amusing engagements.
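
To put rough numbers on that scaling (ignoring the hidden constants in the big-O): with N=100, as for ages, N^4 \log N works out to a few hundred million queries. With N=200,000, as for salaries, N^4 alone is roughly 1.6 \times 10^{21}, before you even multiply by the log factor. The gulf between those two figures is the entire “bright spot”.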

I bet we can’t ignore these results, can we?

Indeed, it seems that we can’t. The reason we can’t rest on our laurels and hope that attackers will die of old age before recovering large-domain data sets comes down to something called approximate database reconstruction, or \epsilon-ADR.

The setting here is the same: an attacker sits and watches a legitimate client make (uniformly random) range queries. The critical difference is that this attacker isn’t trying to recover every database record at its exact value: she’s willing to tolerate some degree of error, up to an additive \epsilon N. For example, if I’m trying to recover employee salaries, I don’t need them to be exact: getting them within 1% or 5% is probably good enough for my purposes. Similarly, reconstructing nearly all of the letters in your last name probably lets me guess the rest, especially if I know the distribution of common last names.

Which finally brings us to this new GLMP paper, which puts \epsilon-ADR on steroids. What it shows is that in the same setting, if one is willing to “sacrifice” a few of the highest and lowest values in the database, an attacker can reconstruct nearly the full database in a much smaller (asymptotic) number of queries, specifically O(\epsilon^{-2} \log \epsilon^{-1}) queries, where \epsilon is the error parameter.

The important thing to notice about these results is that the value N has dropped out of the equation. The only term that remains is the error term \epsilon. That means these results are “scale-free”: asymptotically, at least, they work just as well for small values of N as for large ones, and for large databases as for small ones. This is really remarkable.

Big-O notation doesn’t do anything for me: what does this even mean?

Big-O notation is beloved by computer scientists, but potentially meaningless in practice. There could be huge constants in these terms that render these attacks completely impractical. Besides, weird equations involving epsilon characters are impossible for humans to understand.

Sometimes the easiest way to understand a theoretical result is to plug some actual numbers in and see what happens. GLMP were kind enough to do this for us, by first generating several random databases — each containing 1,000 records, for different values of N. They then ran their recovery algorithm against a simulated batch of random range queries to see what the actual error rate looked like as the query count increased.

Here are their results:

[Figure: GLMP experimental results]
Experimental results (Figure 2) from Grubbs et al. (GLMP, 2019). The Y-axis represents the measured error between the reconstructed database and the actual dataset (smaller is better). The X-axis represents the number of queries. Each database contains 1,000 records, but there are four different values of N tested here. Notice that the biggest error occurs around the very largest and smallest values in the dataset, so the results are much better if one is willing to “sacrifice” these values.

Even after just 100 queries, the error in the dataset has been hugely reduced, and after 500 queries the contents of the database — excluding the tails — can be recovered with only about a 1-2% error rate.

Moreover, these experimental results illustrate the fact that recovery works at many scales: it works nearly as well for very different values of N, ranging from 100 to 100,000. This means that the only variable you really need to think about as an attacker is: how close do I need my reconstruction to be? This is probably not very good news for any real data set.

How do these techniques actually work?

The answer is both very straightforward and deeply complex. The straightforward part is simple; the complex part requires an understanding of Vapnik-Chervonenkis learning theory (VC-theory) which is beyond the scope of this blog post, but is explained in the paper.

At the very highest level the recovery approach is similar to what’s been done in the past: using response probabilities to obtain record values. This paper does it much more efficiently and approximately, using some fancy learning theory results while making a few assumptions.

The key assumption is that the range queries are made on random endpoints ranging from 1 to N. This is a big assumption, and more on it later! Yet with just this knowledge in hand, we learn quite a bit. For example: we can compute the probability that a potential record value (say, the specific salary $34,234) is going to be returned by a random query, provided we know the value lies in the range 1 to N (say, we know all salaries are between $1 and $200,000).

If we draw the resulting probability curve freehand, it might look something like the chart below. This isn’t actually to scale or (probably) even accurate, but it illustrates a key point: by the nature of (random) range queries, records near the center of the domain are going to have a higher overall chance of being responsive to any given query, since the “center” values are more frequently covered by random ranges, and records near the extreme high and low values will be chosen less frequently.

[Figure: freehand sketch of the response-probability curve]
I drew this graph freehand to mimic a picture in Kenny’s slides. Not a real plot!
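
If you want the real curve rather than my freehand one, it is easy to compute under the paper's uniform-range assumption: a range [a, b] with 1 <= a <= b <= N covers a value v exactly when a <= v and v <= b, which happens for v * (N - v + 1) of the N * (N + 1) / 2 possible ranges. A quick sketch:

def response_probability(v, N):
    # Probability that a uniformly random range [a, b], 1 <= a <= b <= N,
    # covers the value v: v * (N - v + 1) favorable ranges out of N*(N+1)/2.
    return 2 * v * (N - v + 1) / (N * (N + 1))

N = 100
curve = [response_probability(v, N) for v in range(1, N + 1)]
# The curve peaks at the middle of the domain and is symmetric:
# response_probability(v, N) == response_probability(N - v + 1, N).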

The high-level goal of database reconstruction is to match the observed response rate for a given row (say, row 41) to the number of responses we’d expect to see for different specific concrete values in the range. Clearly the accuracy of this approach is going to depend on the number of queries you, the attacker, can observe — more is better. And since the response rates are lower at the highest and lowest values, it will take more queries to guess outlying data values.

You might also notice that there is one major pitfall here. Since the graph above is symmetric around its midpoint, the expected response rate will be the same for a record at .25*N and a record at .75*N — that is, a $50,000 salary will be responsive to random queries at precisely the same rate as a $150,000 salary. So even if you get every database row pegged precisely to its response rate, your results might still be “flipped” horizontally around the midpoint. Usually this isn’t the end of the world, because databases aren’t normally full of unstructured random data — high salaries will be less common than low salaries in most organizations, for example, so you can probably figure out the ordering based on that assumption. But this last “bit” of information is technically not guaranteed to come back, absent some assumptions about the data set.

Thus, the recovery algorithm breaks down into two steps. First, observe the response rate for each record as random range queries arrive. Then, for each record, solve for the concrete value that minimizes the difference between the expected response rate at that value and the observed rate. The probability estimation can be made more efficient (eliminating a quadratic term) by assuming that there is at least one record in the database within the range .2N-.3N (or .7N-.8N, by symmetry). Using this “anchor” record requires a mild assumption about the database contents.
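
Here is a toy end-to-end sketch of that two-step recipe under the uniform-range assumption, skipping the anchor trick and the symmetry-breaking step (so each row is only recovered up to the reflection around the midpoint). Everything below (the domain size, the secret values, the query count) is invented for illustration, and this is my sketch of the general idea, not the paper's algorithm.

import random

def reconstruct(secret_values, N, num_queries=2000, seed=1):
    rng = random.Random(seed)
    all_ranges = [(a, b) for a in range(1, N + 1) for b in range(a, N + 1)]
    hits = [0] * len(secret_values)

    # Step 1: watch uniformly random range queries and count, for each row,
    # how often it shows up in a response.
    for _ in range(num_queries):
        a, b = rng.choice(all_ranges)
        for row, v in enumerate(secret_values):
            if a <= v <= b:
                hits[row] += 1

    # Step 2: for each row, pick the value whose expected response rate is
    # closest to the observed rate. The rate only pins a value down up to
    # the reflection v <-> N - v + 1, so report both candidates.
    expected = {v: 2 * v * (N - v + 1) / (N * (N + 1)) for v in range(1, N + 1)}
    guesses = []
    for count in hits:
        rate = count / num_queries
        best = min(expected, key=lambda v: abs(expected[v] - rate))
        guesses.append((best, N - best + 1))
    return guesses

print(reconstruct(secret_values=[10, 25, 50, 80], N=100))
# Each true value should appear, at least approximately, in its row's pair;
# values near the middle of the domain are the noisiest, since the curve
# is flattest there.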

What remains is to show that the resulting attack is efficient. You can do this by simply implementing it — as illustrated by the charts above. Or you can prove that it’s efficient. The GLMP paper uses some very heavy statistical machinery to do the latter. Specifically, they make use of a result from Vapnik-Chervonenkis learning theory (VC-theory), which shows that the bound can be derived from something called the VC-dimension (which is a small number, in this case) and is unrelated to the actual value of N. That proof forms the bulk of the result, but the empirical results are also pretty good.

Is there anything else in the paper?

Yes. It gets worse. There’s so much in this paper that I cannot possibly include it all here without risking carpal tunnel and boredom, and all of it is bad news for the field of encrypted databases.

The biggest additional result is one that shows that if all you want is an approximate ordering of the database rows, then you can do this efficiently using something called a PQ tree. Asymptotically, this requires O(\epsilon^{-1} \log \epsilon^{-1}) queries, and experimentally the results are again even better than one would expect.

What’s even more important about this ordering result is that it works independently of the query distribution. That is: we do not need to have random range queries in order for this to work: it works reasonably well regardless of how the client puts its queries together (up to a point).

Even better, the authors show that this ordering, along with some knowledge of the underlying database distribution — for example, let’s say we know that it consists of U.S. citizen last names — can also be used to obtain approximate database reconstruction. Oy vey!

And there’s still even more:

  • The authors show how to obtain even more efficient database recovery in a setting where the query range values are known to the attacker, using PAC learning. This is a more generous setting than previous work, but it could be realistic in some cases.
  • Finally, they extend this result to prefix and suffix queries, as well as range queries, and show that they can run their attacks on a dataset from the Fraternal Order of Police, obtaining record recovery in a few hundred queries.

In short: this is all really bad for the field of encrypted databases.

So what do we do about this?

I don’t know. Ignore these results? Fake our own deaths and move into a submarine?

In all seriousness: database encryption has been a controversial subject in our field. I wish I could say that there’s been an actual debate, but it’s more that different researchers have fallen into different camps, and nobody has really had the data to argue their position in a compelling way. There have actually been some very personal arguments made about it.

The schools of thought are as follows:

The first holds that any kind of database encryption is better than storing records in plaintext and we should stop demanding things be perfect, when the alternative is a world of constant data breaches and sadness.

To me this is a supportable position, given that the current attack model for plaintext databases is something like “copy the database files, or just run a local SELECT * query”, and the threat model for an encrypted database is “gain persistence on the server and run sophisticated statistical attacks.” Most attackers are pretty lazy, so even a weak system is probably better than nothing.

The countervailing school of thought makes two points. The first is that sometimes the good is much worse than the perfect, particularly if it gives application developers an outsized degree of confidence in the security that their encryption system is going to provide them.

The second is that if even the best encryption protocol is only throwing a tiny roadblock in the attacker’s way, why risk it at all? Just let the database community come up with some kind of ROT13 encryption that everyone knows to be crap, and stop throwing good research time at a problem that has no good solution.

I don’t really know who is right in this debate. I’m just glad to see we’re getting closer to having it.

 

On Ghost Users and Messaging Backdoors

The past few years have been an amazing time for the deployment of encryption. In ten years, encrypted web connections have gone from a novelty into a requirement for running a modern website. Smartphone manufacturers deployed default storage encryption to billions of phones. End-to-end encrypted messaging and phone calls are now deployed to billions of users.

While this progress is exciting to cryptographers and privacy advocates, not everyone sees it this way. A few countries, like the U.K. and Australia, have passed laws in an attempt to gain access to this data, and at least one U.S. proposal has made it to Congress. The Department of Justice recently added its own branding to the mix, asking tech companies to deploy “responsible encryption“.

What, exactly, is “responsible encryption”? Well, that’s a bit of a problem. Nobody on the government’s side of the debate has really been willing to get very specific about that. In fact, a recent speech by U.S. Deputy Attorney General Rod Rosenstein implored cryptographers to go figure it out.

With this as background, a recent article by GCHQ’s Ian Levy and Crispin Robinson reads like a breath of fresh air. Unlike their American colleagues, the British folks at GCHQ — essentially, the U.K.’s equivalent of NSA — seem eager to engage with the technical community and to put forward serious ideas. Indeed, Levy and Robinson make a concrete proposal in the article above: they offer a new solution designed to surveil both encrypted messaging and phone calls.

In this post I’m going to talk about that proposal as fairly as I can — given that I only have a high-level understanding of the idea. Then I’ll discuss what I think could go wrong.

A brief, illustrated primer on E2E

The GCHQ proposal deals with law-enforcement interception on messaging systems and phone calls. To give some intuition about the proposal, I first need to give a very brief (and ultra-simplified) explanation of how those systems actually work.

The basic idea in any E2E communication system is that each participant encrypts messages (or audio/video data) directly from one device to the other. This layer of encryption reduces the need to trust your provider’s infrastructure — ranging from telephone lines to servers to undersea cables — which gives added assurance against malicious service providers and hackers.

If you’ll forgive a few silly illustrations, the intuitive result is a picture that looks something like this:

[Figure: two parties encrypting directly to each other, end to end]

If we consider the group chat/call setting, the picture changes slightly, but only slightly. Each participant still encrypts data to the other participants directly, bypassing the provider. The actual details (specific algorithms, key choices) vary between different systems. But the concept remains the same.

[Figure: the same end-to-end picture for a group chat or call]

The problem with the simplified pictures above is that there’s actually a lot more going on in an E2E system than just encryption.

In practice, one of the most challenging problems in encrypted messaging is getting hold of the key you need to actually perform the encryption. This problem, which is generally known as key distribution, is an age-old concern in the field of computer security. There are many ways for it to go wrong.

In the olden days, we used to ask users to manage and exchange their own keys, and then select which users they wanted to encrypt to. This was terrible and everyone hated it. Modern E2E systems have become popular largely because they hide all of this detail from their users. This comes at the cost of some extra provider-operated infrastructure.

In practice, systems like Apple iMessage, WhatsApp and Facebook Messenger actually look more like this:

[Figure: encrypted calling with an “identity system” looking up keys. The Apple logo represents Apple’s back-end servers.]

The Apple at the top of the picture above stands in for Apple’s “identity service”, which is a cluster of servers running in Apple’s various data centers. These servers perform many tasks, but most notably: they act as a directory for looking up the encryption key of the person you’re talking to. If that service misfires and gives you the wrong key, the best ciphers in the world won’t help you. You’ll just be encrypting to the wrong person.

These identity services do more than look up keys. In at least some group messaging systems like WhatsApp and iMessage, they also control the membership of group conversations. In poorly-designed systems, the server can add and remove users from a group conversation at will, even if none of the participants have requested this. It’s as though you’re having a conversation in a very private room — but the door is unlocked, and the building manager controls who can enter and join you.

(A technical note: while these two aspects of the identity system serve different purposes, in practice they’re often closely related. In many systems there is little distinction between “group” and “two-participant” messaging. For example, in systems that support multiple devices connected to a single account, like Apple’s iMessage, every single device attached to your user account is treated as a separate party to the conversation. Provided either party has more than one device on their account [say, an iPhone and an iPad], you can think of every iMessage conversation as being a group conversation.)

Most E2E systems have basic countermeasures against bad behavior by the identity service. For example, client applications will typically alert you when a new user joins your group chat, or when someone adds a new device to your iMessage account. Similarly, both WhatsApp and Signal expose “safety numbers” that allow participants to verify that they received the right cryptographic keys, which offers a check against dishonest providers.

But these countermeasures are not perfect, and not every service offers them. Which brings me to the GCHQ proposal.

What GCHQ wants

The Lawfare article by Levy and Robinson does not present GCHQ’s proposal in great detail. Fortunately, both authors have spent a good deal of time touring the U.S., giving several public talks about their ideas. I had the privilege of speaking to both of them earlier this summer when they visited Johns Hopkins, so I think I have a rough handle on what they’re thinking.

In its outlines, the idea they propose is extremely simple. The goal is to take advantage of the existing weaknesses in the identity management systems of group chat and calling systems. This would allow law enforcement — with the participation of the service provider — to add a “ghost user” (or in some cases, a “ghost device”) to an existing group chat or calling session. In systems where group membership can be modified by the provider infrastructure, this could mostly be done via changes to the server-side components of the provider’s system.

I say that it could mostly be done server-side, because there’s a wrinkle. Even if you modify the provider infrastructure to add unauthorized users to a conversation, most existing E2E systems do notify users when a new participant (or device) joins a conversation. Generally speaking, having a stranger wander into your conversation is a great way to notify criminals that the game’s afoot or what have you, so you’ll absolutely want to block this warning.

While the GCHQ proposal doesn’t go into great detail, it seems to follow that any workable proposal will require providers to suppress those warning messages at the target’s device. This means the proposal will also require changes to the client application as well as the server-side infrastructure.

(Certain apps like Signal are already somewhat hardened against these changes, because group chat setup is handled in an end-to-end encrypted/authenticated fashion by clients. This prevents the server from inserting new users without the collaboration of at least one group participant. At the moment, however, both WhatsApp and iMessage seem vulnerable to GCHQ’s proposed approach.)

Due to this need for extensive server and client modifications, the GCHQ proposal actually represents a very significant change to the design of messaging systems. It seems likely that the client-side code changes would need to be deployed to all users, since you can’t do targeted software updates just against criminals. (Or rather, if you could rely on such targeted software updates, you would just use that capability instead of the thing that GCHQ is proposing.)

Which brings us to the last piece: how do you get providers to go along with all of this?

While optimism and cooperation are nice in principle, it seems unlikely that communication providers are going to voluntarily insert a powerful eavesdropping capability into their encrypted services, if only because it represents a huge and risky modification. Presumably this means that the UK government will have to compel cooperation. One potential avenue for this is to use Technical Capability Notices from the UK’s Investigatory Powers Act. Those notices mandate that a provider offer real-time decryption for sets of 1 to 10,000 users, and moreover, that providers must design their systems to ensure that such a capability remains available.

And herein lies the problem.

Providers are already closing this loophole

The real problem with the GCHQ proposal is that it targets a weakness in messaging/calling systems that’s already well-known to providers, and moreover, a weakness that providers have been working to close — perhaps because they’re worried that someone just like GCHQ (or probably, much worse) will try to exploit it. By making this proposal, the folks at GCHQ have virtually guaranteed that those providers will move much, much faster on this.

And they have quite a few options at their disposal. Over the past several years researchers have proposed several designs that offer transparency to users regarding which keys they’re obtaining from a provider’s identity service. These systems operate by having the identity service commit to the keys that are associated with individual users, such that it’s very hard for the provider to change a user’s keys (or to add a device) without everyone in the world noticing.

As mentioned above, advanced messengers like Signal have “submerged” the group chat management into the encrypted communications flow, so that the server cannot add new users without the digitally authenticated approval of one of the existing participants. This design, if ported to more popular services like WhatsApp, would seem to kill the GCHQ proposal dead.

Of course, these solutions highlight the tricky nature of GCHQ’s proposal. Note that in order to take advantage of existing vulnerabilities, GCHQ is going to have to require that providers change their system. And of course, once you’ve opened the door to forcing providers to change their system, why stop with small changes? What stops the UK government from, say, taking things a step farther, and using the force of law to compel providers not to harden their systems against this type of attack?

Which brings us to the real problem with the GCHQ proposal. As far as I can see, there are two likely outcomes. In the first, providers rapidly harden their systems — which is good! — and in the process kill off the vulnerabilities that make GCHQ’s proposal viable (which is bad, at least for GCHQ). The more interest that governments express towards the proposal, the more likely this first outcome is. In the second outcome, the UK government, perhaps along with other governments, solves this problem by forcing the providers to keep their systems vulnerable. This second outcome is what I worry about.

More concretely, it’s true that today’s systems include existing flaws that are easy to exploit. But that does not mean we should entomb those flaws in concrete. And once law enforcement begins to rely on them, we will effectively have done so. Over time what seems like a “modest proposal” using current flaws will rapidly become an ossifying influence that holds ancient flaws in place. In the worst-case outcome, we’ll be appointing agencies like GCHQ as the ultimate architect of Apple and Facebook’s communication systems.

That is not a good outcome. In fact, it’s one that will likely slow down progress for years to come.

Let’s talk about PAKE

The first rule of PAKE is: nobody ever wants to talk about PAKE. The second rule of PAKE is that this is a shame, because PAKE — which stands for Password Authenticated Key Exchange — is actually one of the most useful technologies that (almost) never gets used. It should be deployed everywhere, and yet it isn’t.

To understand why this is such a damn shame, let’s start by describing a very real problem.

Imagine I’m operating a server that has to store user passwords. The traditional way to do this is to hash each user password and store the result in a password database. There are many schools of thought on how to handle the hashing process; the most common recommendation these days is to use a memory-hard password hashing function like scrypt or argon2 (with a unique per-password salt), and then store only the hashed result. There are various arguments about which hash function to use, and whether it could help to also use some secret value (called “pepper“), but we’ll ignore these for the moment.
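
For concreteness, here is roughly what that traditional server-side flow looks like using Python's built-in hashlib.scrypt. The cost parameters are just common illustrative choices, not a recommendation, and the function names are mine.

import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)                                   # unique per-password salt
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)    # memory-hard hash
    return salt, digest                                     # both go into the password database

def check_password(password, salt, stored):
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, dklen=32)
    return hmac.compare_digest(candidate, stored)

# The achilles heel: the *cleartext* password still has to reach this code
# (i.e., the server) at every login before it can be hashed and checked.
salt, stored = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, stored)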

Regardless of the approach you take, all of these solutions have a single achilles heel:

When the user comes back to log into your website, they will still need to send over their (cleartext) password, since this is required in order for the server to do the check. 

This requirement can lead to disaster if your server is ever persistently compromised, or if your developers make a simple mistake. For example, earlier this year Twitter asked all of its (330 million!) users to change their passwords — because it turned out that company had been logging cleartext (unhashed) passwords.

Now, the login problem doesn’t negate the advantage of password hashing in any way. But it does demand a better solution: one where the user’s password never has to go to the server in cleartext. The cryptographic tool that can give this to us is PAKE, and in particular a new protocol called OPAQUE, which I’ll get to at the end of this post.

What’s a PAKE?

A PAKE protocol, first introduced by Bellovin and Merritt, is a special form of cryptographic key exchange protocol. Key exchange (or “key agreement”) protocols are designed to help two parties (call them a client and server) agree on a shared key, using public-key cryptography. The earliest key exchange protocols — like classical Diffie-Hellman — were unauthenticated, which made them vulnerable to man-in-the-middle attacks. The distinguishing feature of PAKE protocols is that the client authenticates herself to the server using a password. For obvious reasons, the password, or a hash of it, is assumed to be already known to the server, which is what allows for checking.

If this was all we required, PAKE protocols would be easy to build. What makes a PAKE truly useful is that it should also provide protection for the client’s password. A stronger version of this guarantee can be stated as follows: after a login attempt (valid, or invalid) both the client and server should learn only whether the client’s password matched the server’s expected value, and no additional information. This is a powerful guarantee. In fact, it’s not dissimilar to what we ask for from a zero knowledge proof.

[Figure: ideal representation of a PAKE protocol. The two parties’ inputs also include some randomness, which isn’t shown. An eavesdropper should not learn the strong shared secret key K, which should itself be random and not simply a function of the password.]

Of course, the obvious problem with PAKE is that many people don’t want to run a “key exchange” protocol in the first place! They just want to verify that a user knows a password.

The great thing about PAKE is that the simpler “login only” use-case is easy to achieve. If I have a standard PAKE protocol that allows a client and server to agree on a shared key K if (and only if) the client knows the right password, then all we need to add is a simple check that both parties have arrived at the same key. (This can be done, for example, by having the parties compute some cryptographic function with it and check the results.) So PAKE is useful even if all you’ve got in mind is password checking.
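
As a minimal sketch of that final check (not taken from any particular PAKE standard), both sides can MAC the session transcript under their copy of K and compare the results. The labels and transcript below are placeholders:

import hashlib, hmac

def confirmation_tag(K, label, transcript):
    # MAC the session transcript under the shared key, with a per-direction
    # label so the two confirmation messages can't be replayed at each other.
    return hmac.new(K, label.encode() + transcript, hashlib.sha256).digest()

# Stand-ins for the keys each side derived from the PAKE run; they are equal
# exactly when the client used the right password.
K_client = K_server = hashlib.sha256(b"demo: key agreed during the PAKE").digest()
transcript = b"hash of the messages exchanged so far"

client_tag = confirmation_tag(K_client, "client-confirm", transcript)
# The server recomputes the expected tag with its own key and compares.
login_ok = hmac.compare_digest(
    client_tag, confirmation_tag(K_server, "client-confirm", transcript))
assert login_ok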

SRP: The PAKE that Time Forgot

The PAKE concept seems like it provides an obvious security benefit when compared to the naive approach we use to log into servers today. And the techniques are old, in the sense that PAKEs have been known since way back in 1992! Despite this, they’ve seen almost no adoption. What’s going on?

There are a few obvious reasons for this. The most obvious has to do with the limitations of the web: it’s much easier to put a password form onto a web page than it is to do fancy crypto in the browser. But this explanation isn’t sufficient. Even native applications rarely implement PAKE for their logins. Another potential explanation has to do with patents, though most of these are expired now. To me there are two likely reasons for the ongoing absence of PAKE: (1) there’s a lack of good PAKE implementations in useful languages, which makes it a hassle to use, and (2) cryptographers are bad at communicating the value of their work, so most people don’t know PAKE is even an option.

Even though I said PAKE isn’t deployed, there are some exceptions to the rule.

One of the remarkable ones is a 1998 protocol designed by Tom Wu [correction: not Tim Wu] and called “SRP”. Short for “Secure Remote Password“, this is a simple three-round PAKE with a few elegant features that were not found in the earliest works. Moreover, SRP has the distinction of being (as far as I know) the most widely-deployed PAKE protocol in the world. I cite two pieces of evidence for this claim:

  1. SRP has been standardized as a TLS ciphersuite, and is actually implemented in libraries like OpenSSL, even though nobody seems to use it much.
  2. Apple uses SRP extensively in their iCloud Key Vault.

This second fact by itself could make SRP one of the most widely used cryptographic protocols in the world, so vast is the number of devices that Apple ships. So this is nothing to sneer at.

Industry adoption of SRP is nice, but also kind of a bummer: mainly because while any PAKE adoption is cool, SRP itself isn’t the best PAKE we can deploy. I was planning to go into the weeds about why I feel so strongly about SRP, but it got longwinded and it distracted from the really nice protocol I actually want to talk about further below. If you’re still interested, I moved the discussion onto this page.

In lieu of those details, let me give a quick and dirty TL;DR on SRP:

  1. SRP does some stuff “right”. For one thing, unlike early PAKEs it does not require you to store a raw password on the server (or, equivalently, a hash that could be used by a malicious client in place of the password). Instead, the server stores a “verifier” which is a one-way function of the password hash. This means a leak of the password database does not (immediately) allow the attacker to impersonate the user — unless they conduct further expensive dictionary attacks. (The technical name for this is “asymmetric” PAKE.)
  2. Even better, the current version of SRP (v6a) isn’t obviously broken!
  3. However (and with no offense to the designers) the SRP protocol design is completely bonkers, and earlier versions have been broken several times — which is why we’re now at revision 6a. Plus the “security proof” in the original research paper doesn’t really prove anything meaningful.
  4. SRP currently relies on integer (finite field) arithmetic, and for various reasons (see point 3 above) the construction is not obviously transferable to the elliptic curve setting. This requires more bandwidth and computation, and thus SRP can’t take advantage of the many efficiency improvements we’ve developed in settings like Curve25519.
  5. SRP is vulnerable to pre-computation attacks, due to the fact that it hands over the user’s “salt” to any attacker who can start an SRP session. This means I can ask a server for your salt, and build a dictionary of potential password hashes even before the server is compromised. (A toy illustration of this appears just after the list.)
  6. Despite all these drawbacks, SRP is simple — and it actually ships with working code: there’s an implementation in OpenSSL that even integrates with TLS, which makes it relatively easy to adopt.
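
Here is the toy illustration of point 5, the pre-computation problem. It ignores SRP's actual verifier format and just shows why handing out the salt early is dangerous; the salt and candidate passwords are made up.

import hashlib

def candidate_hash(salt, password):
    # Stand-in for whatever password hash feeds the protocol's verifier.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

# Phase 1 (before any breach): ask the SRP server for the victim's salt,
# then spend as much offline time as you like building a lookup table.
salt = b"salt the server hands to anyone who asks"
candidates = ["123456", "hunter2", "correct horse battery staple"]
table = {candidate_hash(salt, pw): pw for pw in candidates}

# Phase 2 (after the password database leaks): cracking each entry is now a
# table lookup rather than a fresh run of the expensive hash per guess.
leaked = candidate_hash(salt, "hunter2")   # pretend this came from the breach
print(table.get(leaked))                   # -> hunter2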

Out of all these points, the final one is almost certainly responsible for the (relatively) high degree of commercial success that SRP has seen when compared to other PAKE protocols. It’s not ideal, but it’s real. This is something for cryptographers to keep in mind.

OPAQUE: The PAKE of a new generation

When I started thinking about PAKEs a few months ago, I couldn’t help but notice that most of the existing work was kind of crummy. It either had weird problems like SRP, or it required the password (or an effective password equivalent) to be stored on the server, or it revealed the salt to an attacker — allowing pre-computation attacks.

Then earlier this year, Jarecki, Krawczyk and Xu proposed a new protocol called OPAQUE. OPAQUE has a number of extremely nice advantages:

  1. It can be implemented in any setting where Diffie-Hellman and discrete log (type) problems are hard. This means that, unlike SRP, it can be easily instantiated using efficient elliptic curves.
  2. Even better: OPAQUE does not reveal the salt to the attacker. It solves this problem by using an efficient “oblivious PRF” to combine the salt with the password, in a way that ensures the client does not learn the salt and the server does not learn the password.
  3. OPAQUE works with any password hashing function. Even better, since all the hashing work is done on the client, OPAQUE can actually take load off the server, freeing an online service up to use much stronger security settings — for example, configuring scrypt with large RAM parameters.
  4. In terms of number of messages and exponentiations, OPAQUE is not much different from SRP. But since it can be implemented in more efficient settings, it’s likely to be a lot more efficient.
  5. Unlike SRP, OPAQUE has a reasonable security proof (in a very strong model).

There’s even an Internet Draft proposal for OPAQUE, which you can read here. Unfortunately, at this point I’m not aware of any production quality implementations of the code (if you know of one, please link to it in the comments and I’ll update). (Update: There are several potential implementations listed in the comments — I haven’t looked closely enough to endorse any, but this is great!) But that should soon change.

The full OPAQUE protocol is given a little bit further below. In the rest of this section I’m going to go into the weeds on how OPAQUE works.

Problem 1: Keeping the salt secret. As I mentioned above, the main problem with earlier PAKEs is the need to transmit the salt from a server to a (so far unauthenticated) client. This enables an attacker to run pre-computation attacks, where they can build an offline dictionary based on this salt.

The challenge here is that the salt is typically fed into a hash function (like scrypt) along with the password. Intuitively someone has to compute that function. If it’s the server, then the server needs to see the password — which defeats the whole purpose. If it’s the client, then the client needs the salt.

In theory one could get around this problem by computing the password hashing function using secure two-party computation (2PC). In practice, solutions like this are almost certainly not going to be efficient — most notably because password hashing functions are designed to be complex and time consuming, which will basically explode the complexity of any 2PC system.

OPAQUE gets around this with the following clever trick. They leave the password hash on the client’s side, but they don’t feed it the stored salt. Instead, they use a special two-party protocol called an oblivious PRF to calculate a second salt (call it salt2) so that the client can use salt2 in the hash function — but does not learn the original salt.

It works like this:

The server stores "salt", and the client has the password.

salt2 = PRF(salt, password) // This is calculated between the 
                            // client and server, using an oblivious
                            // protocol where the client never learns
                            // salt, and the server never learns
                            // the password. The client obtains salt2

K      = PasswordHash(salt2, password) // This is done on the client

The actual implementation of the oblivious PRF can be done using a couple of group elements and exponentiations. Even better, if the client enters the wrong password into that protocol, she obtains a completely bogus “salt2” value that reveals nothing about the real salt value.
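
To make the blinding concrete, here is a toy version of that blind/evaluate/unblind arithmetic using exponentiation modulo a prime. It is only meant to show the algebra: a real OPAQUE instantiation uses an elliptic-curve group and a proper hash-to-group map, and this toy should never be used as is.

# Toy blind/evaluate/unblind arithmetic (NOT secure: toy group, toy hash-to-group).
import hashlib, math, secrets

p = 2**127 - 1            # a Mersenne prime; the toy group is Z_p^*, of order q = p - 1
q = p - 1

def hash_to_group(password):
    e = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % q
    return pow(3, e, p)   # toy stand-in for a real hash-to-group map

# --- client: blind the password before sending anything ---
password = "hunter2"
while True:
    r = secrets.randbelow(q - 1) + 1
    if math.gcd(r, q) == 1:              # r must be invertible mod the group order
        break
blinded = pow(hash_to_group(password), r, p)     # statistically hides the password (r is random)

# --- server: apply its per-user secret key k (this plays the role of "salt") ---
k = secrets.randbelow(q - 1) + 1
evaluated = pow(blinded, k, p)

# --- client: unblind, then derive salt2 ---
unblinded = pow(evaluated, pow(r, -1, q), p)     # equals hash_to_group(password) ** k mod p
salt2 = hashlib.sha256(password.encode() + unblinded.to_bytes(16, "big")).digest()
# The client never saw k; the server never saw the password.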

Problem 2: Proving that the client got the right key K. Of course, at this point, the client has derived a key K, but the server has no idea what it is. Nor does the server know whether it’s the right key.

The solution OPAQUE uses is based on an old idea due to Gentry, Mackenzie and Ramzan. When the user first registers with the server, she generates a strong public and private key for a secure key agreement protocol (like HMQV), and encrypts the resulting private key under K, along with the server’s public key. The resulting authenticated ciphertext (and the public key) is stored in the password database.

C = Encrypt(K, client secret key | server’s public key)
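
A sketch of what that registration-time "envelope" might look like in code, using AES-GCM from the third-party cryptography package (pip install cryptography). The field layout and helper names are mine, not the paper's wire format.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_envelope(K, client_private_key, server_public_key):
    # K is the 32-byte key the client derived from its password hash.
    nonce = os.urandom(12)
    payload = client_private_key + b"||" + server_public_key   # illustrative encoding
    return nonce + AESGCM(K).encrypt(nonce, payload, None)

def open_envelope(K, envelope):
    # Raises an exception unless K was derived from the right password,
    # because AES-GCM authenticates the ciphertext.
    nonce, ciphertext = envelope[:12], envelope[12:]
    return AESGCM(K).decrypt(nonce, ciphertext, None)

K = os.urandom(32)   # stand-in: in OPAQUE this comes from the password hash, not fresh randomness
C = seal_envelope(K, b"<client secret key>", b"<server public key>")
assert open_envelope(K, C).startswith(b"<client secret key>")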

[Figure: the full OPAQUE protocol, excerpted from the paper]

When the client wishes to authenticate using the OPAQUE protocol, the server sends it the stored ciphertext C. If the client entered the right password into the first phase, she can derive K and decrypt this ciphertext. Otherwise it’s useless. Using the embedded secret key, she can now run a standard authenticated key agreement protocol to complete the handshake. (The server verifies the client’s inputs against its copy of the client’s public key, and the client does the same.)

Putting it all together. All of these different steps can be merged together into a single protocol that has the same number of rounds as SRP. Leaving aside the key verification steps, it looks like the protocol in the figure above. Basically, just two messages: one from the client and one returned by the server.

The final aspect of the OPAQUE work is that it includes a strong security proof that shows the resulting protocol can be proven secure under the 1-more discrete logarithm assumption in the random oracle model, which is a (well, relatively) standard assumption that appears to hold in the settings we work with.

In conclusion

So in summary, we have this neat technology that could make the process of using passwords much easier, and could allow us to do it in a much more efficient way — with larger hashing parameters and more work done by the client. So why isn’t this everywhere?

Maybe in the next few years it will be.

 

 

 

 

Why I’m done with Chrome

This blog is mainly reserved for cryptography, and I try to avoid filling it with random “someone is wrong on the Internet” posts. After all, that’s what Twitter is for! But from time to time something bothers me enough that I have to make an exception. Today I wanted to write specifically about Google Chrome, how much I’ve loved it in the past, and why — due to Chrome’s new user-unfriendly forced login policy — I won’t be using it going forward.

A brief history of Chrome

When Google launched Chrome ten years ago, it seemed like one of those rare cases where everyone wins. In 2008, the browser market was dominated by Microsoft, a company with an ugly history of using browser dominance to crush their competitors. Worse, Microsoft was making noises about getting into the search business. This posed an existential threat to Google’s internet properties.

In this setting, Chrome was a beautiful solution. Even if the browser never produced a scrap of revenue for Google, it served its purpose just by keeping the Internet open to Google’s other products. As a benefit, the Internet community would receive a terrific open source browser with the best development team money could buy. This might be kind of sad for Mozilla (who have paid a high price due to Chrome) but overall it would be a good thing for Internet standards.

For many years this is exactly how things played out. Sure, Google offered an optional “sign in” feature for Chrome, which presumably vacuumed up your browsing data and shipped it off to Google, but that was an option. An option you could easily ignore. If you didn’t take advantage of this option, Google’s privacy policy was clear: your data would stay on your computer where it belonged.

What changed?

A few weeks ago Google shipped an update to Chrome that fundamentally changes the sign-in experience. From now on, every time you log into a Google property (for example, Gmail), Chrome will automatically sign the browser into your Google account for you. It’ll do this without asking, or even explicitly notifying you. (However, and this is important: Google developers claim this will not actually start synchronizing your data to Google — yet. See further below.)

Your sole warning — in the event that you’re looking for it — is that your Google profile picture will appear in the upper-right hand corner of the browser window. I noticed mine the other day:

[Screenshot: my Google profile picture in the corner of the Chrome window]

The change hasn’t gone entirely unnoticed: it received some vigorous discussion on sites like Hacker News. But the mainstream tech press seems to have ignored it completely. This is unfortunate — and I hope it changes — because this update has huge implications for Google and the future of Chrome.

In the rest of this post, I’m going to talk about why this matters. From my perspective, this comes down to basically four points:

  1. Nobody on the Chrome development team can provide a clear rationale for why this change was necessary, and the explanations they’ve given don’t make any sense.
  2. This change has enormous implications for user privacy and trust, and Google seems unable to grapple with this.
  3. The change makes a hash out of Google’s own privacy policies for Chrome.
  4. Google needs to stop treating customer trust like it’s a renewable resource, because they’re screwing up badly.

I warn you that this will get a bit ranty. Please read on anyway.

Google’s stated rationale makes no sense

The new feature that triggers this auto-login behavior is called “Identity consistency between browser and cookie jar” (HN). After conversations with two separate Chrome developers on Twitter (who will remain nameless — mostly because I don’t want them to hate me), I was given the following rationale for the change:

[Screenshot: the rationale offered by the Chrome developers, paraphrased below]

To paraphrase this explanation: if you’re in a situation where you’ve already signed into Chrome and your friend shares your computer, then you can wind up accidentally having your friend’s Google cookies get uploaded into your account. This seems bad, and sure, we want to avoid that.

But note something critical about this scenario. In order for this problem to apply to you, you already have to be signed into Chrome. There is absolutely nothing in this problem description that seems to affect users who chose not to sign into the browser in the first place.

So if signed-in users are your problem, why would you make a change that forces unsigned-in users to become signed in? I could waste a lot more ink wondering about the mismatch between the stated “problem” and the “fix”, but I won’t bother: because nobody on the public-facing side of the Chrome team has been able to offer an explanation that squares this circle.

And this matters, because “sync” or not…

The change has serious implications for privacy and trust

The Chrome team has offered a single defense of the change. They point out that just because your browser is “signed in” does not mean it’s uploading your data to Google’s servers. Specifically:

While Chrome will now log into your Google account without your consent (following a Gmail login), Chrome will not activate the “sync” feature that sends your data to Google. That requires an additional consent step. So in theory your data should remain local.

This is my paraphrase. But I think it’s fair to characterize the general stance of the Chrome developers I spoke with as: without this “sync” feature, there’s nothing wrong with the change they’ve made, and everything is just fine.

This is nuts, for several reasons.

User consent matters. For ten years I’ve been asked a single question by the Chrome browser: “Do you want to log in with your Google account?” And for ten years I’ve said no thanks. Chrome still asks me that question — it’s just that now it doesn’t honor my decision.

The Chrome developers want me to believe that this is fine, since (phew!) I’m still protected by one additional consent guardrail. The problem here is obvious:

If you didn’t respect my lack of consent on the biggest user-facing privacy option in Chrome (and  didn’t even notify me that you had stopped respecting it!) why should I trust any other consent option you give me? What stops you from changing your mind on that option in a few months, when we’ve all stopped paying attention?

The fact of the matter is that I’d never even heard of Chrome’s “sync” option — for the simple reason that up until September 2018, I had never logged into Chrome. Now I’m forced to learn these new terms, and hope that the Chrome team keeps promises to keep all of my data local as the barriers between “signed in” and “not signed in” are gradually eroded away.

The Chrome sync UI is a dark pattern. Now that I’m forced to log into Chrome, I’m faced with a brand new menu I’ve never seen before. It looks like this:

[Screenshot of Chrome’s new sync menu, featuring a big blue button]

 

Does that big blue button indicate that I’m already synchronizing my data to Google? That’s scary! Wait, maybe it’s an invitation to synchronize! If so, what happens to my data if I click it by accident? (I won’t give the answer away; you should go find out. Just make sure you don’t accidentally upload all your data in the process. It can happen quickly.)

In short, Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern. Whether intentional or not, it has the effect of making it easy for people to activate sync without knowing it, or to think they’re already syncing and thus there’s no additional cost to increasing Google’s access to their data.

Don’t take my word for it. It even gives (former) Google people the creeps.

Big brother doesn’t need to actually watch you. We tell things to our web browsers that we wouldn’t tell our best friends. We do this with some vague understanding that yes, the Internet spies on us. But we also believe that this spying is weak and probabilistic. It’s not like someone’s standing over our shoulder checking our driver’s license with each click.

What happens if you take that belief away? There are numerous studies indicating that even the perception of surveillance can significantly magnify the degree of self-censorship users force on themselves. Will users feel comfortable browsing for information on sensitive mental health conditions — if their real name and picture are always loaded into the corner of their browser? The Chrome development team says “yes”. I think they’re wrong.

For all we know, the new approach has privacy implications even if sync is off. The Chrome developers claim that with “sync” off, a signed-in Chrome has no privacy implications. This might be true. But when pressed on the actual details, nobody seems quite sure.

For example, if I have my browser logged out, then I log in and turn on “sync”, does all my past (logged-out) data get pushed to Google? What happens if I’m forced to be logged in, and then subsequently turn on “sync”? Nobody can quite tell me if the data uploaded in these conditions is the same. These differences could really matter.

The changes make a hash of the Chrome privacy policy

The Chrome privacy policy is a remarkably simple document. Unlike most privacy policies, it was clearly written as a promise to Chrome’s users — rather than as the usual lawyer CYA. Functionally, it describes two browsing modes: “Basic browser mode” and “signed-in mode”. These modes have very different properties. Read for yourself:

[Excerpts from the Chrome privacy policy, describing “Basic browser mode” and “Signed-in mode”]

In “basic browser mode”, your data is stored locally. In “signed-in” mode, your data gets shipped to Google’s servers. This is easy to understand. If you want privacy, don’t sign in. But what happens if your browser decides to switch you from one mode to the other, all on its own?

Technically, the privacy policy is still accurate. If you’re in basic browsing mode, your data is still stored locally. The problem is that you no longer get to decide which mode you’re in. This makes a mockery out of whatever intentions the original drafters had. Maybe Google will update the document to reflect the new “sync” distinction that the Chrome developers have shared with me. We’ll see.

Update: After I tweeted about my concerns, I received a DM on Sunday from two different Chrome developers, each telling me the good news: Google is updating their privacy policy to reflect the new operation of Chrome. I think that’s, um, good news. But I also can’t help but note that updating a privacy policy on a weekend is an awful lot of trouble to go to for a change that… apparently doesn’t even solve a problem for signed-out users.

Trust is not a renewable resource

For a company that sustains itself by collecting massive amounts of user data, Google has  managed to avoid the negative privacy connotations we associate with, say, Facebook. This isn’t because Google collects less data, it’s just that Google has consistently been more circumspect and responsible with it.

Where Facebook will routinely change privacy settings and apologize later, Google has upheld clear privacy policies that it doesn’t routinely change. Sure, when it collects, it collects gobs of data, but in the cases where Google explicitly makes user security and privacy promises — it tends to keep them. This seems to be changing.

Google’s reputation is hard-earned, and it can be easily lost. Changes like this burn a lot of trust with users. If the change is solving an absolutely critical problem for users, then maybe a loss of trust is worth it. I wish Google could convince me that was the case.

Conclusion

This post has gone on more than long enough, but before I finish I want to address two common counterarguments I’ve heard from people I generally respect in this area.

One argument is that Google already spies on you via cookies and its pervasive advertising network and partnerships, so what’s the big deal if they force your browser into a logged-in state? One individual I respect described the Chrome change as “making you wear two name tags instead of one”. I think this objection fails on moral grounds — just because you’re already violating my privacy doesn’t make it ok to add a massive new violation — but it’s also objectively silly. Google has spent millions of dollars adding additional tracking features to both Chrome and Android. They aren’t doing this for fun; they’re doing this because it clearly produces data they want.

The other counterargument (if you want to call it that) goes like this: I’m a n00b for using Google products at all, and of course they were always going to do this. The extreme version holds that I ought to be using lynx+Tor and DJB’s custom search engine, and if I’m not I pretty much deserve what’s coming to me.

I reject this argument. I think it’s entirely possible for a company like Google to make good, usable open source software that doesn’t massively violate user privacy. For ten years I believe Google Chrome did just this.

Why they’ve decided to change, I don’t know. It makes me sad.

 

 

Friday Dachshund Blogging

For over a year this blog has failed to deliver on an essential promise — that there would someday be pictures of dachshunds. Today we deliver.

This is Callie (short for Calliope) working her way through a bit of summer crypto reading:

[Photo: Callie with her summer crypto reading]

But sometimes that’s exhausting and you’ve gotta take a break.

[Photo: Callie taking a break]

A visit from a strange metallic dachshund:

[Photo: the metallic dachshund]

Summer:

[Photo: summer]

And in memoriam, Zoe and Sophie, who helped me start this blog.

 

Wonk post: chosen ciphertext security in public-key encryption (Part 2)

This continues the post from Part 1. Note that this is a work in progress, and may have some bugs in it 🙂 I’ll try to patch them up as I go along.

In the previous post I discussed the problem of building CCA-secure public key encryption. Here’s a quick summary of what we discussed in the first part:

  • We covered the definition of CCA2 security.
  • We described how you can easily achieve this notion in the symmetric encryption setting using a CPA-secure encryption scheme, plus a secure MAC.
  • We talked about why this same approach doesn’t work for standard public-key encryption.

In this post I’m going to discuss a few different techniques that actually do provide CCA security for public key encryption. We’ll be covering these in no particular order.

A quick note on security proofs. There are obviously a lot of different ways you could try to hack together a CCA2 secure scheme out of different components. Some of those might be secure, or they might not be. In general, the key difference between a “secure” and “maybe secure” scheme is the fact that we can construct some kind of security proof for it.

The phrase “some kind” will turn out to be an important caveat, because these proofs might require a modest amount of cheating.

The bad and the ugly

Before we get to the constructive details, it’s worth talking a bit about some ideas that don’t work to achieve CCA security. The most obvious place to start is with some of the early RSA padding schemes, particularly the PKCS#1v1.5 padding standard.

PKCS#1 padding was developed in the 1980s, when it was obvious that public key encryption was going to become widely deployed. It was intended as a pre-processing stage for messages that were going to be encrypted using an RSA public key.

This padding scheme had two features. First, it added randomness to the message prior to encrypting it. This was designed to defeat the simple ciphertext guessing attacks that come from deterministic encryption schemes like RSA. It can be easily shown that randomized encryption is absolutely necessary for any IND-CPA (and implicitly, IND-CCA) secure public key encryption scheme. Second, the padding added some “check” bytes that were intended to help detect mangled ciphertexts after decryption; this was designed (presumably) to shore the scheme up against invalid decryption attempts.

PKCS#1v1.5 is still widely used in protocols, including all versions of TLS prior to TLS 1.3. The diagram below shows what the padding scheme looks like when used in TLS with a 2048-bit RSA key. The section labeled “48 bytes PMS” (pre-master secret) in this example represents the plaintext being encrypted. The 205 bytes of “non-zero padding” consist of purely random bytes that exclude the byte “0”, because that value is reserved to indicate the end of the padding section and the beginning of the plaintext.

[Diagram: PKCS#1v1.5 padding of a 48-byte pre-master secret under a 2048-bit RSA key]

After using the RSA secret key to recover the padded message, the decryptor is supposed to parse the message and verify that the first two bytes (“00 02”) and the boundary “00” byte are all present and correctly placed. The decryptor may optionally conduct other checks, like verifying the length and structure of the plaintext, in case that’s known in advance.
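
To make that structure concrete, here’s a minimal Python sketch of the v1.5 encryption padding and the decryption-side check described above. This is a toy illustration (and deliberately not constant-time), not something you should ever deploy:

```python
import os

def pkcs1_v15_pad(plaintext: bytes, modulus_len: int) -> bytes:
    """Toy PKCS#1 v1.5 encryption padding: 00 02 || nonzero padding || 00 || plaintext."""
    pad_len = modulus_len - 3 - len(plaintext)
    if pad_len < 8:
        raise ValueError("message too long for this modulus")
    # Padding bytes must be nonzero, since the "00" byte marks where the padding ends.
    padding = bytes((b % 255) + 1 for b in os.urandom(pad_len))
    return b"\x00\x02" + padding + b"\x00" + plaintext

def pkcs1_v15_unpad(padded: bytes) -> bytes:
    """Toy decryption-side check: verify the "00 02" prefix, find the floating "00"
    boundary byte, and strip the padding. The error raised below is exactly the kind
    of one-bit oracle that the attack described next exploits."""
    if len(padded) < 11 or padded[0:2] != b"\x00\x02":
        raise ValueError("bad padding")
    boundary = padded.find(b"\x00", 2)   # position of the floating "00" byte
    if boundary < 10:                    # require at least 8 bytes of nonzero padding
        raise ValueError("bad padding")
    return padded[boundary + 1:]
```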

One of the most immediate observations about PKCS#1v1.5 is that the designers kind of intuitively understood that chosen ciphertext attacks were a thing. They clearly added some small checks to make sure that it would be hard for an attacker to modify a given ciphertext (e.g., by multiplying it by a chosen value). It’s also obvious that these checks aren’t very strong. In the standardized version of the padding scheme, there are essentially three bytes to check — and one of them (the “00” byte after the padding) can “float” at a large number of different positions, depending on how much padding and plaintext there is in the message.

The use of a weak integrity check leads to a powerful CCA2 attack on the encryption scheme that was first discovered by Daniel Bleichenbacher. The attack is powerful due to the fact that it actually leverages the padding check as a way to learn information about the plaintext. That is: the attacker “mauls” a ciphertext and sends it to be decrypted, and relies on the fact that the decryptor will simply perform the decryption checks they’re supposed to perform — and output a noticeable error if they fail. Given only this one bit of information per decryption, the attack can gradually recover the full plaintext of a specific ciphertext by (a) multiplying it by some value, (b) sending the result to be decrypted, (c) recording the success/failure result, and (d) adaptively picking a new value and repeating from step (a), many thousands or millions of times.
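
The engine of the attack is RSA’s malleability: multiplying a ciphertext C by s^e mod N produces a ciphertext of s·m mod N, and the padding oracle reveals whether that product still begins with the bytes “00 02”. Here’s a heavily simplified sketch of the query loop only — the real attack adds the interval-narrowing bookkeeping (and much smarter choices of s) that I’m omitting, and padding_oracle is a stand-in for whatever decryption service the attacker is abusing:

```python
def bleichenbacher_query_loop(c: int, e: int, n: int, padding_oracle):
    """Sketch: search for multipliers s such that (s^e * c) mod n still decrypts
    to a PKCS#1 v1.5-conformant message. Each hit leaks information that the full
    attack uses to narrow an interval containing the plaintext."""
    s = 2
    while True:
        c_mauled = (pow(s, e, n) * c) % n   # ciphertext of (s * m) mod n
        if padding_oracle(c_mauled):        # one bit of leakage per decryption query
            yield s                         # caller updates its plaintext intervals
        s += 1
```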

The PKCS#1v1.5 padding scheme is mainly valuable to us today because it provides an excellent warning to cryptographic engineers, who would otherwise continue to follow the “you can just hack together something that looks safe” school of building protocols. Bleichenbacher-style attacks have largely scared the crypto community straight. Rather than continuing to use this approach, the crypto community has (mostly) moved towards techniques that at least offer some semblance of provable security.

That’s what we’ll cover in just a moment.

A few quick notes on achieving CCA2-secure public key encryption

Before we get to a laundry list of specific techniques and schemes, it’s worth asking what types of design features we might be looking for in a CCA2 public key encryption scheme. Historically there have been two common requirements:

  • It would be super convenient if we could start with an existing encryption scheme, like RSA or Elgamal encryption, and generically tweak (or “compile”) that scheme into a CCA2-secure scheme. (Re-usable generic techniques are particularly useful in the event that someone comes up with new underlying encryption schemes, like post-quantum secure ones.)
  • The resulting scheme should be pretty efficient. That rules out most of the early theoretical techniques that use huge zero knowledge proofs (as cool as they are).

Before we get to the details, I also want to repeat the intuitive description of the CCA2 security game, which I gave in the previous post. The game (or “experiment”) works like this:

  1. I generate an encryption keypair for a public-key scheme and give you the public key.
  2. You can send me (sequentially and adaptively) many ciphertexts, which I will decrypt with my secret key. I’ll give you the result of each decryption.
  3. Eventually you’ll send me a pair of messages (of equal length) M_0, M_1 and I’ll pick a bit b at random, and return to you the encryption of M_b, which I will denote as C^* \leftarrow {\sf Encrypt}(pk, M_b).
  4. You’ll repeat step (2), sending me ciphertexts to decrypt. If you send me C^* I’ll reject your attempt. But I’ll decrypt any other ciphertext you send me, even if it’s only slightly different from C^*.
  5. You (the attacker) will output your guess b'. You “win” the game if b'=b.
  6. We say a scheme is IND-CCA2 secure if the attacker wins with probability “not much greater” than 1/2 (which is the best an attacker can do if they just guess randomly.)
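
For readers who prefer code to prose, here’s a toy Python rendering of that experiment. The scheme and adversary objects are hypothetical placeholders — keygen/encrypt/decrypt for whatever public-key scheme we’re analyzing, plus a two-phase adversary interface I’ve made up purely for illustration:

```python
import secrets

def ind_cca2_experiment(scheme, adversary) -> bool:
    """Runs one instance of the IND-CCA2 game; returns True if the adversary wins."""
    pk, sk = scheme.keygen()                          # step 1

    def decryption_oracle(c, forbidden=None):
        if forbidden is not None and c == forbidden:
            return None                               # the challenge ciphertext is rejected
        return scheme.decrypt(sk, c)                  # anything else gets decrypted

    # Steps 2-3: adversary makes queries, then outputs two equal-length messages.
    m0, m1, state = adversary.choose_messages(pk, decryption_oracle)
    assert len(m0) == len(m1)
    b = secrets.randbits(1)
    c_star = scheme.encrypt(pk, (m0, m1)[b])          # C* <- Encrypt(pk, M_b)

    # Steps 4-5: more decryption queries (except on C* itself), then a guess.
    b_guess = adversary.guess(
        c_star, lambda c: decryption_oracle(c, forbidden=c_star), state)
    return b_guess == b                               # step 6: win iff b' = b
```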

A quick review of this definition shows that we need a CCA2-encryption scheme to provide at least two major features.

First off, it should be obvious that the scheme must not leak information about the secret key, even when I’m using it to decrypt arbitrary chosen ciphertexts of your choice. There are obvious examples of schemes that fail to meet this requirement: the most famous is the (textbook) Rabin cryptosystem — where the attacker’s ability to obtain the decryption of a single chosen ciphertext can leak the entire secret key.

More subtly, it seems obvious that CCA2 security is related to non-malleability. Here’s why: suppose I receive a challenge ciphertext C^* at step (3). It must be the case that I cannot easily “maul” that ciphertext into a new ciphertext C' that contains a closely related plaintext (and that the challenger will be able and willing to meaningfully decrypt). It’s easy to see that if I could get away with this, by the rules of the game I could probably win at step (4), simply by sending C' in to be decrypted, getting the result, and seeing whether it’s more closely related to M_0 or M_1. (This is, in fact, a very weak explanation of what the Bleichenbacher attack does.)

It turns out that an even stronger property that helps achieve both of these conditions is something called plaintext awareness. There are various subtly-different mathematical formulations of this idea, but here I’ll try to give only the English-language intuition:

If the attacker is able to submit a (valid) ciphertext to be decrypted, it must be the case that she already knows the plaintext of that message.

This guarantee is very powerful, because it helps us to be sure that the decryption process doesn’t give the attacker any new information that she doesn’t already have. She can submit any messages she wants (including mauling the challenge ciphertext C^*) but if this plaintext-awareness property holds in the strongest sense, those decryptions won’t tell her anything she doesn’t already know.

Of course, just because your scheme appears to satisfy the above conditions does not mean it’s secure. Both rules above are heuristics: that is, they’re necessary conditions to prevent attacks, but they may or may not be sufficient. To really trust a scheme (in the cryptographic sense) we should be able to offer a proof (under some assumptions) that these guarantees hold. We’ll address that a bit as we go forward.

Technique 1: Optimal Asymmetric Encryption Padding

One of the earlier practical CCA2 transforms was developed by Bellare and Rogaway as a direct replacement for the PKCS#1v1.5 padding scheme in RSA encryption. The scheme they developed — called Optimal Asymmetric Encryption Padding — represents a “drop-in” replacement for the earlier, broken v1.5 padding scheme. It also features a security proof. (Mostly. We’ll come back to this last point.)

(Confusingly, OAEP was adopted into the PKCS#1 standards as of version 2.0, so sometimes you’ll see it referred to as PKCS#1v2.0-OAEP.)

OAEP’s most obvious advance over the previous state of the art is the addition of not one, but two separate hash functions G() and H() that don’t exist in the v1.5 predecessor. (These are sometimes referred to as “mask generation functions”, which is just a fancy way of saying they’re hash functions with outputs of a custom, chosen size. Such functions can be easily built from existing hash functions like SHA256.)
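
As an example of how such a function can be built, here’s a sketch of the usual counter-based construction (essentially MGF1 from the PKCS#1 standards) on top of SHA-256:

```python
import hashlib

def mgf1(seed: bytes, out_len: int) -> bytes:
    """Stretch SHA-256 into a hash with a caller-chosen output length by
    hashing seed || counter and concatenating the results."""
    output = b""
    counter = 0
    while len(output) < out_len:
        output += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return output[:out_len]
```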

Expressed graphically, this is what OAEP looks like:

OAEP padding function (courtesy Ozga at Wikipedia). The message is m and r is a string of random bits. The “000” represents a “check string” consisting of a string of k1 “0” bits. The lengths k0, k1 are chosen by the scheme, and the length of the overall input should be the largest bit (or byte) string that can fit inside of an RSA modulus (e.g., 1024 bits). Some 0 bits/bytes may have to be pre-pended to the result if the padded result is smaller than the modulus.

If you’ve ever seen the DES cipher, this structure should look familiar to you. Basically OAEP is a two-round (unkeyed) Feistel network that uses a pair of hash functions to implement the round functions. There are a few key observations you can make right off the bat:

  • Just looking at the diagram above, you can see that it’s very easy to compute this padding function forward (going from a plaintext m and some random padding r to a padded message) and backwards — that is, it’s an easily-invertible permutation. The key to this feature is the Feistel network structure.
  • Upon decryption, a decryptor can invert the padding of a given message and verify that the “check string” (the string of k1 “0” bits) is correctly structured. If this string is not structured properly, the decryptor can simply output an error. This comprises the primary decryption check.
  • Assuming some (strong) properties of the hash functions, it intuitively seems that the OAEP transform is designed to create a kind of “avalanche effect” where even a small modification of a padded message will result in a very different unpadded result when the transform is inverted. In practice any such modification should “trash” the check string with overwhelming probability.
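
To make that structure concrete, here’s a toy sketch of the core two-round Feistel transform, using the mgf1 sketch above for both round functions. (In a real construction G() and H() would be independently domain-separated, the parameters and framing come from the PKCS#1 v2 standard, everything would run in constant time, and the padded output would then be fed through the RSA function itself — none of which I’m attempting here.)

```python
import os

K1 = 32  # length in bytes of the all-zero "check string" (a toy parameter choice)

def oaep_pad(message: bytes, r_len: int = 32) -> bytes:
    """Two-round Feistel: X = (m || 0^k1) XOR G(r), then Y = r XOR H(X)."""
    r = os.urandom(r_len)                                       # fresh randomness per encryption
    data = message + b"\x00" * K1                               # append the check string
    x = bytes(a ^ b for a, b in zip(data, mgf1(r, len(data))))  # G = mgf1
    y = bytes(a ^ b for a, b in zip(r, mgf1(x, r_len)))         # H = mgf1
    return x + y

def oaep_unpad(padded: bytes, r_len: int = 32) -> bytes:
    """Invert the Feistel, then verify the check string; any mismatch is an error."""
    x, y = padded[:-r_len], padded[-r_len:]
    r = bytes(a ^ b for a, b in zip(y, mgf1(x, r_len)))
    data = bytes(a ^ b for a, b in zip(x, mgf1(r, len(x))))
    message, check = data[:-K1], data[-K1:]
    if check != b"\x00" * K1:           # the primary decryption check
        raise ValueError("decryption error")
    return message
```

Even in this toy version you can see the “avalanche” intuition: flip one bit of the padded value and the recovered r changes, so G(r) changes, so the check string comes out as garbage with overwhelming probability.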

From an intuitive point of view, these last two properties are what makes OAEP secure against chosen-ciphertext attacks. The idea here is that, due to the random properties of the hash function, it should be hard for the attacker to construct a valid ciphertext (one that has a correct check string) if she does not already know the plaintext that goes into the transform. This should hold even if the attacker already has some known valid ciphertext (like C^*) that she wishes to maul.

More specifically related to mauling: if I send an RSA-OAEP ciphertext C^* that encrypts a specific message m, the attacker should not be able to easily maul that ciphertext into a different ciphertext C' that will still pass the decryption checks. This is due to two facts: (1) because RSA is a (trapdoor) permutation, any change to C^* will implicitly change the padded message you recover after inverting the RSA function. And (2) sending this altered padded message backwards through the OAEP transform should, with overwhelming probability, trash the check string (and the message m). The result is that the adversary can’t maul someone else’s ciphertext.

This all assumes some very strong assumptions about the hash functions, which we’ll discuss below.

The OAEP proof details (at the most ridiculously superficial level)

Proving OAEP secure requires two basic techniques. Both fundamentally rely on the notion that the functions G() and H() are random oracles. This is important for two very different reasons.

First: assuming a function is a “random oracle” means that we’re assuming it to have the same behavior as a random function. This is an awesome property for a hash function to have! (Note: real hash functions don’t have it. This means that hypothetically they could have very ‘non-random’ behavior that would make RSA-OAEP insecure. In practice this has not yet been a practical concern for real OAEP implementations, but it’s worth keeping in mind.)

It’s easy to see that if the hash functions G() and H() were random functions, it would give OAEP some very powerful properties. Remember, one of the main intuitive goals of the OAEP scheme is to prevent attackers from successfully getting you to decrypt an improperly-constructed (e.g., mauled) ciphertext. If both hash functions are truly random, then this implies that any invalid ciphertext will almost certainly fail decryption, because the padding check will fail.

At a much deeper level, the use of random oracles in RSA-OAEP’s security proof gives the security reduction a great deal of “extra power” to handle things like decrypting chosen ciphertexts. This is due to the fact that, in a random oracle proof, the security reduction is allowed to both “see” every value hashed through those hash functions, and also to “program” the functions so that they will produce specific outputs. This would not be possible if G() and H() were implemented using real hash functions, and so the entire security proof would break down.

These properties provide a tool in the security proof to enable decryption even when the secret key is unknown. In a traditional proof of the RSA-OAEP scheme, the idea is to show that an attacker who breaks the encryption (in the IND-CCA2 sense) can be used to construct a second attacker who solves the RSA problem. This is done by taking some random values (N, e, C) — where N, e is an RSA public key of unknown factorization — and “programming” the random oracles such that C^* = C. The intuitive idea is that an attacker who is able to learn something about the underlying message must query the functions G() and H() on inputs that will ultimately allow the security reduction to obtain the RSA inverse of C^* even though the reduction does not know the RSA secret key. That is, such an attacker will allow us to find an integer M' such that M'^e = C \bmod N.

(There turned out to be some issues in the original OAEP proof that make it not quite work for arbitrary trapdoor permutations. Shoup fixed these by providing a new padding scheme called OAEP+, but the original OAEP had already gone into heavy usage within standards! It turns out that RSA-OAEP does work after all — this was first shown for public exponent 3, and later for general exponents — though proving it required some ugly band-aids. This whole story is part of a cautionary tale about provable security, which Koblitz discusses here.)

Technique 2: The Fujisaki-Okamoto Transform

One limitation of OAEP (and OAEP+) padding is that it requires a trapdoor permutation in order to work. This applies nicely to RSA encryption, but does not necessarily work with every existing public-key encryption scheme. This motivates the need for other CCA transforms that work with arbitrary existing (non-CCA) encryption schemes.

One of the nicest generic techniques for building CCA2-secure public-key encryption is due to Eiichiro Fujisaki and Tatsuaki Okamoto. The idea of this transform is to begin with a scheme that already meets the definition of IND-CPA security — that is, it is semantically secure, but not against chosen ciphertext attacks. (For this description, we’ll also require that this scheme has a large [exponentially-sized] message space and some specific properties related to randomness.) The beauty of the “Fujisaki-Okamoto transform” (henceforth: F-O) is that, like OAEP before it, given a working public-key encryption scheme, it requires only the addition of some hash functions, and can be proven secure in the random oracle model.

Let’s imagine that we have an IND-CPA-secure public-key encryption scheme that consists of the algorithms {\sf KeyGen}, {\sf Encrypt}, {\sf Decrypt}. We’ll also make use of two independent hash functions H_1, H_2.

A key observation here is that in every IND-CPA (semantically secure) public key encryption scheme, the {\sf Encrypt} algorithm is randomized. This actually has to be the case, due to the definition of IND-CPA. (See here for a discussion of why that is.) Put more explicitly, what this means is that the encryption algorithm must have access to some set of random bits that will be used to produce the ciphertext.

The main trick that the F-O transform uses is to de-randomize this public-key encryption algorithm. Instead of using real random bits to encrypt, it will instead use the output of the hash function H_1 to produce the random bits that will be used for encryption. This turns a randomized encryption into a deterministic one. (This, of course, requires that both the input and the internals of H_1 are capable of producing bits that “look” random.)

Let’s get to the nuts and bolts. The F-O transform does not change the key generation algorithm of the original encryption scheme at all, except to specify the hash functions H_1, H_2. The main changes come in the new encryption and decryption algorithms. I’m going to present one variant of the transform, though there are others. This one works as follows.

To encrypt a message M, which we’ll express as some fixed-length string of bits:

  1. Instead of encrypting the actual message M, we instead sample a random message R from the message space of the original CPA-secure scheme.
  2. We hash the random message R together with the original message M using that first hash function H_1. The output of this function will give us a ‘random’ bitstring. Let’s denote this as: r \leftarrow H_1(R \| M).
  3. Next, we’ll encrypt the new random message R using the original (CPA-secure) encryption scheme’s {\sf Encrypt} algorithm, but critically: we will use the bits r as the randomness for that encryption. The result of this process will give the first part of the ciphertext: C_1 \leftarrow {\sf Encrypt}(pk, R; r). Note that here r just refers to the randomness for the encryption algorithm, not an actual message being encrypted.
  4. Finally, we derive a “key” for encrypting the real message we want to send. We can compute this as K \leftarrow H_2(R).
  5. We now encrypt the original message M we want to send using some secure encryption scheme, for example the simple one-time pad: C_2 \leftarrow M \oplus K.
  6. We output the “ciphertext” C = (C_1, C_2).

To decrypt C = (C_1, C_2), we would perform the following steps:

  1. First, use the original public-key encryption scheme’s secret key to decrypt the ciphertext C_1, which (if all is well) should give us R' \leftarrow {\sf Decrypt}(sk, C_1).
  2. Now use knowledge of R' to recover the key K' \leftarrow H_2(R') and thus the message M' which we can obtain as M' \leftarrow C_2 \oplus K'.
  3. Now check that both R', M' are valid by re-computing the randomness r' \leftarrow H_1(R' \| M') and verifying the condition C_1 = {\sf Encrypt}(pk, R'; r'). If this final check fails, simply output a decryption error.
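
Here’s a toy Python sketch that follows the steps above. The cpa_encrypt(pk, message, coins) and cpa_decrypt(sk, ciphertext) callables are hypothetical stand-ins for the underlying CPA-secure scheme (with explicit random coins), and the SHA-256 calls stand in for H_1 and H_2; this is meant to show the shape of the transform, not to be a faithful implementation of any standardized variant:

```python
import hashlib
import os

R_LEN = 32     # length in bytes of the random message R (assumes a large message space)
COIN_LEN = 32  # number of random coins the underlying Encrypt algorithm consumes

def h1(r_msg: bytes, m: bytes) -> bytes:
    """H_1(R || M): derives the 'random' coins used to encrypt R (domain-separated)."""
    return hashlib.sha256(b"H1" + r_msg + m).digest()[:COIN_LEN]

def h2(r_msg: bytes) -> bytes:
    """H_2(R): derives a one-time-pad key for the real message."""
    return hashlib.sha256(b"H2" + r_msg).digest()

def fo_encrypt(pk, m: bytes, cpa_encrypt):
    """F-O encryption of a message of at most 32 bytes (the length of H_2's output)."""
    r_msg = os.urandom(R_LEN)                          # step 1: random message R
    coins = h1(r_msg, m)                               # step 2: r = H_1(R || M)
    c1 = cpa_encrypt(pk, r_msg, coins)                 # step 3: derandomized Encrypt(pk, R; r)
    c2 = bytes(a ^ b for a, b in zip(m, h2(r_msg)))    # steps 4-5: C_2 = M xor H_2(R)
    return c1, c2                                      # step 6

def fo_decrypt(sk, pk, ciphertext, cpa_encrypt, cpa_decrypt) -> bytes:
    c1, c2 = ciphertext
    r_msg = cpa_decrypt(sk, c1)                        # step 1: recover R'
    m = bytes(a ^ b for a, b in zip(c2, h2(r_msg)))    # step 2: recover M'
    if cpa_encrypt(pk, r_msg, h1(r_msg, m)) != c1:     # step 3: re-encrypt and compare
        raise ValueError("decryption error")
    return m
```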

Phew. So what the heck is going on here?

Let’s tackle this scheme from a practical perspective. Earlier in this post, we said that to achieve IND-CCA2 security, a scheme must have two features. First, it must be plaintext aware, which means that in order to construct a valid ciphertext (that passes all decryption checks) the attacker should already know the plaintext.

Does F-O have this property? Well, intuitively we would hope that the answer is “yes”. Note that for some valid F-O ciphertext C = (C_1, C_2), the decrypted plaintext is implicitly defined as M' \leftarrow C_2 \oplus H_2(R'). So what we really want to prove is that in order to construct a valid ciphertext the attacker must already know R' and M' prior to sending the message for decryption.

This guarantee (with high probability) comes from the structure of C_1. In order for the ciphertext to be considered valid by the decryptor, it must be the case that C_1 satisfies the check C_1 = {\sf Encrypt}(pk, R'; r') where r' = H_1(R' \| M'). The idea of this proof is that it should be hard for an attacker to construct such a C_1 unless she has previously called the hash function H_1 on input (R', M'). If she has called the hash function to produce this portion of the ciphertext, then she already knows those values and the decryption oracle provides her with no additional information she didn’t already have. (Alternatively, if she did not call the hash function, then her probability of making a valid C_1 should be extremely low.)

Of course, this is only one strategy available to the attacker. She could also maul an existing ciphertext like C^* = (C_1^*, C_2^*). In this case her strategy is twofold: she can tamper with the first portion of the ciphertext and/or she can tamper with the second. But it’s easy to see that this will tend to break some portion of the decryption checks:

  1. If she tampers with any bit of C_2^*, she will change the recovered message into a new value that we can call M''. However this will in turn (with overwhelming probability) cause the decryptor to recover very different random coins r'' \leftarrow H_1(R' \| M'') than were used in the original construction of C_1^*, and thus decryption check on that piece will probably fail.
  2. If she tampers with any bit of C_1^*, the decryption check C_1^* = {\sf Encrypt}(pk, R'; r') ought not to pass, and decryption will just produce an error.
  3. She might try to tamper with both parts of the ciphertext, of course. But this would seem even more challenging.

The problem with the exercise above is that none of this constitutes a proof that the approach works. There is an awful lot of should and probably in this argument, and none of this ought to make you very happy. A rough sketch of the proof for an F-O scheme can be found here. (I warn you that it’s probably got some bugs in it, and I’m offering it mainly as an intuition.)

The F-O scheme has many variants. A slightly different and much more formal treatment by Hofheinz and Kiltz can be found here, and deals with some other requirements on the underlying CPA-secure scheme.

To be continued…

So far in this discussion we’ve covered two basic techniques — both at a very superficial level — that achieve CCA2 security under the ridiculously strong assumption that random oracles exist. Unfortunately, they don’t. This motivates the need for better approaches that don’t require random oracles at all.

There are a couple of those that, sadly, nobody uses. Those will have to wait until the next post.

 

 

Was the Efail disclosure horribly screwed up?

TL;DR. No. Or keep reading if you want.

On Monday a team of researchers from Münster, RUB and NXP disclosed serious cryptographic vulnerabilities in a number of encrypted email clients. The flaws, which go by the cute vulnerability name of “Efail”, potentially allow an attacker to decrypt S/MIME or PGP-encrypted email with only minimal user interaction.

By the standards of cryptographic vulnerabilities, this is about as bad as things get. In short: if an attacker can intercept and alter an encrypted email — say, by sending you a new (altered) copy, or modifying a copy stored on your mail server — they can cause many GUI-based email clients to send the full plaintext of the email to an attacker-controlled server. Even worse, most of the basic problems that cause this flaw have been known for years, and yet remain in clients.

[Table from the Efail paper: vulnerability status of S/MIME clients]

The big (and largely under-reported) story of Efail is the way it affects S/MIME. That “corporate” email protocol is simultaneously (1) hated by the general crypto community because it’s awful and has a slash in its name, and yet (2) probably the most widely-used email encryption protocol in the corporate world. The table above — excerpted from the paper — gives you a flavor of how Efail affects S/MIME clients. TL;DR it affects them very badly.

Efail also happens to affect a smaller, but non-trivial number of OpenPGP-compatible clients. As one might expect (if one has spent time around PGP-loving folks) the disclosure of these vulnerabilities has created something of a backlash on HN, and among people who make and love OpenPGP clients. Mostly for reasons that aren’t very defensible.

So rather than write about fun things — like the creation of CFB and CBC gadgets — today, I’m going to write about something much less exciting: the problem of vulnerability disclosure in ecosystems like PGP. And how bad reactions to disclosure can hurt us all.

How Efail was disclosed to the PGP community

Putting together a comprehensive timeline of the Efail disclosure process would probably be a boring, time-intensive project. Fortunately Thomas Ptacek loves boring and time-intensive projects, and has already done this for us.

Briefly, the first Efail disclosures to vendors began last October, more than 200 days prior to the agreed publication date. The authors notified a large number of vulnerable PGP GUI clients, and also notified the GnuPG project (on which many of these projects depend) by February at the latest. From what I can tell every major vendor agreed to make some kind of patch. GnuPG decided that it wasn’t their fault, and basically stopped corresponding.

All parties agreed not to publicly discuss the vulnerability until an agreed date in April, which was later pushed back to May 15. The researchers also notified the EFF and some journalists under embargo, but none of them leaked anything. On May 14 someone dumped the bug onto a mailing list. So the EFF posted a notice about the vulnerability (which we’ll discuss a bit more below), and the researchers put up a website. That’s pretty much the whole story.

There are three basic accusations going around about the Efail disclosure. They can be summarized as (1) maintaining embargoes in coordinated disclosures is really hard, (2) the EFF disclosure “unfairly” made this sound like a serious vulnerability “when it isn’t”, and (3) everything was already patched anyway so what’s the big deal.

Disclosures are hard; particularly coordinated ones

I’ve been involved in two disclosures of flaws in open encryption protocols. (Both were TLS issues.) Each one poses an impossible dilemma. You need to simultaneously (a) make sure every vendor has as much advance notice as possible, so they can patch their software. But at the same time (b) you need to avoid telling literally anyone, because nothing on the Internet stays secret. At some point you’ll notify some FOSS project that uses an open development mailing list or ticket server, and the whole problem will leak out into the open.

Disclosing bugs that affect PGP is particularly fraught. That’s because there’s no such thing as “PGP”. What we have instead is a large and distributed community that revolves around the OpenPGP protocol. The pillar of this community is the GnuPG project, which maintains the core GnuPG tool and libraries that many clients rely on. Then there are a variety of niche GUI-based clients and email plugin projects. Finally, there are commercial vendors like Apple and Microsoft. (Who are mostly involved in the S/MIME side of things, and may reluctantly allow PGP plugins.)

Then, of course there are thousands of end-users, who will generally fail to update their software unless something really bad and newsworthy happens.

The obvious solution to the disclosure problem is to use a staged disclosure. You notify the big commercial vendors first, since that’s where most of the affected users are. Then you work your way down the “long tail” of open source projects, knowing that inevitably the embargo could break and everyone will have to patch in a hurry. And you keep in mind that no matter what happens, everyone will blame you for screwing up the disclosure.

For the PGP issues in Efail, the big client vendors are Mozilla (Thunderbird), Microsoft (Outlook) and maybe Apple (Mail). The very next obvious choice would be to patch the GnuPG tool so that it no longer spits out unauthenticated plaintext, which is the root of many of the problems in Efail.

The Efail team appears to have pursued exactly this approach for the client-side vulnerabilities. Sadly, the GnuPG team made the decision that it’s not their job to pre-emptively address problems that they view as ‘clients misusing the GnuPG API’ (my paraphrase), even when that misuse appears to be rampant across many of the clients that use their tool. And so the most obvious fix for one part of the problem was not available.

This is probably the most unfortunate part of the Efail story, because in this case GnuPG is very much at fault. Their API does something that directly violates cryptographic best practices — namely, releasing unauthenticated plaintext prior to producing an error message. And while this might have seemed a reasonable API design decision at the time, continuing to support it even as clients routinely misuse it has now led to flaws across the ecosystem. The refusal of GnuPG to take a leadership role in preemptively addressing these vulnerabilities both increases the difficulty of disclosing such flaws, and increases the probability of future issues.

So what went wrong with the Efail disclosure?

Despite what you may have heard, given the complexity of this disclosure, very little went wrong. The main issues people have raised seem to have to do with the contents of an EFF post. And with some really bad communications from Robert J. Hansen at the Enigmail (and GnuPG) project.

The EFF post. The Efail researchers chose to use the Electronic Frontier Foundation as their main source for announcing the existence of the vulnerability to the privacy community. This hardly seems unreasonable, because the EFF is generally considered a trusted broker, and speaks to the right community (at least here in the US).

The EFF post doesn’t give many details, nor does it give a list of affected (or patched) clients. It does give two pretty mild recommendations:

  1. Temporarily disable or uninstall your existing clients until you’ve checked that they’re patched.
  2. Maybe consider using a more modern cryptosystem like Signal, at least until you know that your PGP client is safe again.

This naturally led to a huge freakout by many in the PGP community. Some folks, including vendors, have misrepresented the EFF post as essentially pushing people to “permanently” uninstall PGP, which will “put lives at risk” because presumably these users (whose lives are at risk, remember) will immediately fall back to sending incriminating information via plaintext emails — rather than temporarily switching their communications to one of several modern, well-studied secure messengers, or just not emailing for a few hours.

In case you think I’m exaggerating about this, here’s one reaction from ProtonMail:

[Screenshot of ProtonMail’s tweet]

The most reasonable criticism I’ve heard of the EFF post is that it doesn’t give many details about which clients are patched, and which are vulnerable. This could presumably give someone the impression that this vulnerability is still present in their email client, and thus would cause them to feel less than secure in using it.

I have to be honest that to me that sounds like a really good outcome. The problem with Efail is that it doesn’t matter if your client is secure. The Efail vulnerability could affect you if even a single one of your communication partners is using an insecure client.

So needless to say I’m not very sympathetic to the reaction around the EFF post. If you can’t be sure whether your client is secure, you probably should feel insecure.

Bad communications from GnuPG and Enigmail. On the date of the disclosure, anyone looking for accurate information about security from two major projects — GnuPG and Enigmail — would not have been able to find it.

They wouldn’t have found it because developers from both Enigmail and GnuPG were on mailing lists and Twitter claiming that they had never heard of Efail, and hadn’t been notified by the researchers. Needless to say, these allegations took off around the Internet, sometimes in place of real information that could have helped users (like, whether either project had patched.)

It goes without saying that neither allegation was actually true. In fact, both project members soon checked with their fellow developers (and their memories) and found out that they’d both been given months of notice by the researchers, and that Enigmail had even developed a patch. (However, it turned out that even this patch may not perfectly address the issue, and the community is still working to figure out exactly what still needs to be done.)

This is an understandable mistake, perhaps. But it sure is a bad one.

PGP is bad technology and it’s making a bad community

Now that I’ve made it clear that neither the researchers nor the EFF is out to get the PGP community, let me put on my mask and horns and tell you why someone should be.

I’ve written extensively about PGP on this blog, but in the past I’ve written mostly from a technical point of view about the problems with PGP. But what’s really problematic about PGP is not just the cryptography; it’s the story it tells about path dependence and how software communities work.

The fact of the matter is that OpenPGP is not really a cryptography project. That is, it’s not held together by cryptography. It’s held together by backwards-compatibility and (increasingly) a kind of obsession with the idea of PGP as an end in and of itself, rather than as a means to actually make end-users more secure.

Let’s face it, as a protocol, PGP/OpenPGP is just not what we’d develop if we started over today. It was formed over the years out of mostly experimental parts, which were in turn replaced, bandaged and repaired — and then worked into numerous implementations, which all had to be insanely flexible and yet compatible with one another. The result is bad, and most of the software implementing it is worse. It’s the equivalent of a beloved antique sports car, where the electrical system is totally shot, but it still drives. You know, the kind of car where the owner has to install a hand-switch so he can turn the reverse lights on manually whenever he wants to pull out of a parking space.

If PGP went away, I estimate it would take the security community less than a year to entirely replace (the key bits of) the standard with something much better and more modern. It would have modern crypto and authentication, and maybe even extensions for future post-quantum security. It would be simple. Many bright new people would get involved to help write the inevitable Rust, Go and Javascript clients and libraries.

Unfortunately for us all, (Open)PGP does exist. And that means that even fancy greenfield email projects feel like they need to support OpenPGP, or at least some subset of it. This in turn perpetuates the PGP myth, and causes other clients to use it. And as a direct result, even if some clients re-implement OpenPGP from scratch, other clients will end up using tools like GnuPG which will support unauthenticated encryption with bad APIs. And the cycle will go round and around, like a spaceship stuck near the event horizon of a black hole.

And as the standard perpetuates itself, largely for the sake of being a standard, it will fail to attract new security people. It will turn away exactly the type of people who should be working on these tools. Those people will go off and build encryption systems in a totally different area, or they’ll get into cryptocurrency. And — with some exceptions — the people who work in the community will increasingly work in that community because they’re supporting PGP, and not because they’re trying to seek out the best security technologies for their users. And the serious (email) users of PGP will be using it because they like the idea of using PGP better than they like using an actual, secure email standard.

And as things get worse, and fail to develop, people who work on it will become more dogmatic about its importance, because it’s something threatened and not a real security protocol that anyone’s using. To me that’s where PGP is going today, and that is why the community has such a hard time motivating itself to take these vulnerabilities seriously, and instead reacts defensively.

Maybe that’s a random, depressing way to end a post. But that’s the story I see in OpenPGP. And it makes me really sad.