A note on the NSA, the future and fixing mistakes

Readers of this blog will know this has been an interesting couple of days for me. I have very mixed feelings about all this. On the one hand, it’s brought this blog a handful of new readers who might not have discovered it otherwise. On the other hand, it’s made me a part of the story in a way I don’t deserve to be.

After speaking with my colleagues and (most importantly) with my wife, I thought I might use the last few seconds of my inadvertent notoriety to make a few highly non-technical points about the recent NSA revelations and my decision to blog about them.

I believe my first point should be self-evident: the NSA has made a number of terrible mistakes. These range from policy decisions to technical direction, to matters of their own internal security. There may have been a time when these mistakes could have been mitigated or avoided, but that time has passed. Personally I believe it passed even before Edward Snowden made his first contact with the press. But the disclosures of classified documents have set those decisions in stone.

Given these mistakes, we’re now faced with the job of cleaning up the mess. To that end there are two sets of questions: public policy questions — who should the NSA be spying on and how far should they be allowed to go in pursuit of that goal? And a second set of more technical questions: how do we repair the technological blowback from these decisions?

There are many bright people — quite a few in Congress — who are tending to the first debate. While I have my opinions about this, they’re (mostly) not the subject of this blog. Even if they were, I would probably be the wrong person to discuss them.

So my concern is the technical question. And I stress that while I label this ‘technical‘, it isn’t a question of equations and logic gates. The tech sector is one of the fastest growing and most innovative areas of the US economy. I believe the NSA’s actions have caused long-term damage to our credibility, in a manner that threatens our economic viability as well as, ironically, our national security.

The interesting question to me — as an American and as someone who cares about the integrity of speech — is how we restore faith in our technology. I don’t have the answers to this question right now. Unfortunately this is a long-term problem that will consume the output of researchers and technologists far more talented than I. I only hope to be involved in the process.

I know there are people at the NSA who must be cursing Edward Snowden’s name and wishing we’d all stop talking about this. Too late. I hope that they understand the game we’re playing now. Their interests as well as mine now depend on repairing the damage. Downplaying the extent of the damage, or trying to restrict access to (formerly) classified documents, does nobody any good.

It’s time to start fixing things.

On the NSA

Let me tell you the story of my tiny brush with the biggest crypto story of the year.

A few weeks ago I received a call from a reporter at ProPublica, asking me background questions about encryption. Right off the bat I knew this was going to be an odd conversation, since this gentleman seemed convinced that the NSA had vast capabilities to defeat encryption. And not in a ‘hey, d’ya think the NSA has vast capabilities to defeat encryption?’ kind of way. No, he’d already established the defeating. We were just haggling over the details.

Oddness aside it was a fun (if brief) set of conversations, mostly involving hypotheticals. If the NSA could do this, how might they do it? What would the impact be? I admit that at this point one of my biggest concerns was to avoid coming off like a crank. After all, if I got quoted sounding too much like an NSA conspiracy nut, my colleagues would laugh at me. Then I might not get invited to the cool security parties.

All of this is a long way of saying that I was totally unprepared for today’s bombshell revelations describing the NSA’s efforts to defeat encryption. Not only does the worst possible hypothetical I discussed appear to be true, but it’s true on a scale I couldn’t even imagine. I’m no longer the crank. I wasn’t even close to cranky enough.

And since I never got a chance to see the documents that sourced the NYT/ProPublica story — and I would give my right arm to see them — I’m determined to make up for this deficit with sheer speculation. Which is exactly what this blog post will be.

‘Bullrun’ and ‘Cheesy Name’ 

If you haven’t read the ProPublica/NYT or Guardian stories, you probably should. The TL;DR is that the NSA has been doing some very bad things. At a combined cost of $250 million per year, they include:

  1. Tampering with national standards (NIST is specifically mentioned) to promote weak or otherwise vulnerable cryptography.
  2. Influencing standards committees to weaken protocols.
  3. Working with hardware and software vendors to weaken encryption and random number generators.
  4. Attacking the encryption used by ‘the next generation of 4G phones‘.
  5. Obtaining cleartext access to ‘a major internet peer-to-peer voice and text communications system’ (Skype?)
  6. Identifying and cracking vulnerable keys.
  7. Establishing a Human Intelligence division to infiltrate the global telecommunications industry.
  8. And worst of all (to me): somehow decrypting SSL connections.

All of these programs go by different code names, but the NSA’s decryption program goes by the name ‘Bullrun’ so that’s what I’ll use here.

How to break a cryptographic system

There’s almost too much here for a short blog post, so I’m going to start with a few general thoughts. Readers of this blog should know that there are basically three ways to break a cryptographic system. In no particular order, they are:

  1. Attack the cryptography. This is difficult and unlikely to work against the standard algorithms we use (though there are exceptions like RC4.) However there are many complex protocols in cryptography, and sometimes they are vulnerable.
  2. Go after the implementation. Cryptography is almost always implemented in software — and software is a disaster. Hardware isn’t that much better. Unfortunately active software exploits only work if you have a target in mind. If your goal is mass surveillance, you need to build insecurity in from the start. That means working with vendors to add backdoors.
  3. Access the human side. Why hack someone’s computer if you can get them to give you the key?

Bruce Schneier, who has seen the documents, says that ‘math is good’, but that ‘code has been subverted’. He also says that the NSA is ‘cheating‘. Which, assuming we can trust these documents, is a huge sigh of relief. But it also means we’re seeing a lot of (2) and (3) here.

So which code should we be concerned about? Which hardware?

SSL Servers by OS type. Source: Netcraft.

This is probably the most relevant question. If we’re talking about commercial encryption code, the lion’s share of it uses one of a small number of libraries. The most common of these are probably the Microsoft CryptoAPI (and Microsoft SChannel) along with the OpenSSL library.

Of the libraries above, Microsoft is probably due for the most scrutiny. While Microsoft employs good (and paranoid!) people to vet their algorithms, their ecosystem is obviously deeply closed-source. You can view Microsoft’s code (if you sign enough licensing agreements) but you’ll never build it yourself. Moreover they have the market share. If any commercial vendor is weakening encryption systems, Microsoft is probably the most likely suspect.

And this is a problem because Microsoft IIS powers around 20% of the web servers on the Internet — and nearly forty percent of the SSL servers! Moreover, even third-party encryption programs running on Windows often depend on CAPI components, including the random number generator. That makes these programs somewhat dependent on Microsoft’s honesty.

Probably the second most likely candidate is OpenSSL. I know it seems like heresy to imply that OpenSSL — an open source and widely-developed library — might be vulnerable. But at the same time it powers an enormous amount of secure traffic on the Internet, thanks not only to the dominance of Apache SSL, but also due to the fact that OpenSSL is used everywhere. You only have to glance at the FIPS CMVP validation lists to realize that many ‘commercial’ encryption products are just thin wrappers around OpenSSL.

Unfortunately while OpenSSL is open source, it periodically coughs up vulnerabilities. Part of this is due to the fact that it’s a patchwork nightmare originally developed by a programmer who thought it would be a fun way to learn Bignum division.* Part of it is because crypto is unbelievably complicated. Either way, there are very few people who really understand the whole codebase.

On the hardware side (and while we’re throwing out baseless accusations) it would be awfully nice to take another look at the Intel Secure Key integrated random number generators that most Intel processors will be getting shortly. Even if there’s no problem, it’s going to be an awfully hard job selling these internationally after today’s news.

Which standards?

From my point of view this is probably the most interesting and worrying part of today’s leak. Software is almost always broken, but standards — in theory — get read by everyone. It should be extremely difficult to weaken a standard without someone noticing. And yet the Guardian and NYT stories are extremely specific in their allegations about the NSA weakening standards.

The Guardian specifically calls out the National Institute of Standards and Technology (NIST) for a standard they published in 2006. Cryptographers have always had complicated feelings about NIST, and that’s mostly because NIST has a complicated relationship with the NSA.

Here’s the problem: the NSA ostensibly has both a defensive and an offensive mission. The defensive mission is pretty simple: it’s to make sure US information systems don’t get pwned. A substantial portion of that mission is accomplished through fruitful collaboration with NIST, which helps to promote data security standards such as the Federal Information Processing Standards (FIPS) and NIST Special Publications.

I said cryptographers have complicated feelings about NIST, and that’s because we all know that the NSA has the power to use NIST for good as well as evil. Up until today there’s been no real evidence of malice, despite some occasional glitches — and compelling evidence that at least one NIST cryptographic standard could have contained a backdoor. But now maybe we’ll have to re-evaluate that relationship. As utterly crazy as it may seem.

Unfortunately, we’re highly dependent on NIST standards, ranging from pseudo-random number generators to hash functions and ciphers, all the way to the specific elliptic curves we use in SSL/TLS. While the possibility of a backdoor in any of these components does seem remote, trust has been violated. It’s going to be an absolute nightmare ruling it out.

Which people?

Probably the biggest concern in all this is the evidence of collaboration between the NSA and unspecified ‘telecom providers’. We already know that the major US (and international) telecom carriers routinely assist the NSA in collecting data from fiber-optic cables. But all this data is no good if it’s encrypted.

While software compromises and weak standards can help the NSA deal with some of this, by far the easiest way to access encrypted data is to simply ask for — or steal — the keys. This goes for something as simple as cellular encryption (protected by a single key database at each carrier) all the way to SSL/TLS which is (most commonly) protected with a few relatively short RSA keys.
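To see why stolen keys are so valuable, consider how a non-forward-secret TLS session works: the client picks a premaster secret and encrypts it under the server’s long-term RSA public key, and every session key is derived from that secret. Anyone who records the handshake and later obtains the private key can unwrap it at leisure. Here is a minimal sketch of that key-transport step, a toy model using the Python ‘cryptography’ package rather than the real TLS handshake:

```python
# Toy model of TLS RSA key transport (not the real handshake): whoever holds
# the server's long-term private key can decrypt recorded sessions later.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Client side: pick a premaster secret and encrypt it to the server's public
# key (TLS uses PKCS#1 v1.5 here). A passive eavesdropper records this blob.
premaster = os.urandom(48)
recorded = server_key.public_key().encrypt(premaster, padding.PKCS1v15())

# Years later, with the stolen or subpoenaed private key, the eavesdropper
# recovers the premaster secret -- and every session key derived from it.
assert server_key.decrypt(recorded, padding.PKCS1v15()) == premaster
```

This is exactly the scenario that forward-secret cipher suites (see the note on Google below) are designed to rule out.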

The good and bad thing is that as the nation hosting the largest number of popular digital online services (like Google, Facebook and Yahoo) many of those critical keys are located right here on US soil. Simultaneously, the people communicating with those services — i.e., the ‘targets’ — may be foreigners. Or they may be US citizens. Or you may not know who they are until you scoop up and decrypt all of their traffic and run it for keywords.

Which means there’s a circumstantial case that the NSA and GCHQ are either directly accessing Certificate Authority keys** or else actively stealing keys from US providers, possibly (or probably) without executives’ knowledge. This only requires a small number of people with physical or electronic access to servers, so it’s quite feasible.*** The one reason I would have ruled it out a few days ago is because it seems so obviously immoral if not illegal, and moreover a huge threat to the checks and balances that the NSA allegedly has to satisfy in order to access specific users’ data via programs such as PRISM.

To me, the existence of this program is probably the least unexpected piece of all the news today. Somehow it’s also the most upsetting.

So what does it all mean?

I honestly wish I knew. Part of me worries that the whole security industry will talk about this for a few days, then we’ll all go back to our normal lives without giving it a second thought. I hope we don’t, though. Right now there are too many unanswered questions to just let things lie.

The most likely short-term effect is that there’s going to be a lot less trust in the security industry. And a whole lot less trust for the US and its software exports. Maybe this is a good thing. We’ve been saying for years that you can’t trust closed code and unsupported standards: now people will have to verify.

Even better, these revelations may also help to spur a whole burst of new research and re-designs of cryptographic software. We’ve also been saying that even open code like OpenSSL needs more expert eyes. Unfortunately there’s been little interest in this, since the clever researchers in our field view these problems as ‘solved’ and thus somewhat uninteresting.

What we learned today is that they’re solved all right. Just not the way we thought.

Notes:

* The original version of this post repeated a story I heard recently (from a credible source!) about Eric Young writing OpenSSL as a way to learn C. In fact he wrote it as a way to learn Bignum division, which is way cooler. Apologies Eric!

** I had omitted the Certificate Authority route from the original post due to an oversight — thanks to Kenny Paterson for pointing this out — but I still think this is a less viable attack for passive eavesdropping (that does not involve actively running a man in the middle attack). And it seems that much of the interesting eavesdropping here is passive.

*** The major exception here is Google, which deploys Perfect Forward Secrecy for many of its connections, so key theft would not work here. To deal with this the NSA would have to subvert the software or break the encryption in some other way.

Is the cryptopocalypse nigh?

I’ve been traveling a bit over the past couple of weeks, so I haven’t had much of a chance to keep up on blogging. One consequence is that I completely missed my chance to say something about, well, anything that happened at BlackHat or Def Con.

Which is too bad, since a surprising amount of crypto stuff did go down! One thing that I wish I’d had a chance to talk about was a presentation at BlackHat called ‘The Factoring Dead‘ by Tom Ritter, Javed Samuel and Alex Stamos. (Thomas Ptacek was also involved, but he insists that he did nothing and they only included him out of pity.)

Although I don’t quite agree with the premise of this presentation, talks like it are fun — in the way that zombie movies are fun — because you get to abandon reality and think about some non-serious things for a while.

Specifically, the presentation addresses some new results on the discrete logarithm problem published by Antoine Joux and his co-authors Razvan Barbulescu, Pierrick Gaudry and Emmanuel Thomé — developments the presenters cite as a very serious reason for people to get worried. And we’re not talking about the usual ‘you’re going to use crypto wrong’ kind of worry, but a more earthshaking kind: namely that RSA and Diffie-Hellman and DSA are soon going to be broken altogether.

Now let me be clear that Ritter, Samuel and Stamos and even lame, non-contributing Ptacek (henceforth RSSp) are all smart guys. In fact, I would venture to say they’re probably much more familiar with the Joux et al. work than I am since they seem interested in the details. Me, I like my hardness assumptions the way I like my hamburgers: neatly ground up and without the mooing. I could live my whole life without voluntarily digging into the efficiency of discrete logarithm solvers in fields of small characteristic.

Moreover, it’s hard to really argue with the content of RSSp’s presentation, since the bulk of what they do is to simply present facts. There really have been some major recent advances in solving discrete logarithms over certain special fields. There have been major attacks in the more distant past that took us by surprise. And yes, it would be really awesome if people stopped fiddling with 1024-bit RSA keys and moved to elliptic curve crypto.

What’s concerning is the conclusions they (and other sources) have reached: namely, that factoring-based cryptosystems could be dead in just a few years. This kind of thing could incite panic! (I mean, it could if people actually cared about cryptography. Which unfortunately they mostly don’t.)

So let’s spend some time examining this.

Razvan Barbulescu, Emmanuel Thomé and Antoine Joux hungrily eye a defenseless discrete logarithm instance. (Source: Steven Galbraith)

The background

The jumping off point for RSSp’s slides is a set of recent advances made by Joux and subsequently by Barbulescu, Gaudry, Joux and Thomé. The discrete logarithm problem (which you can learn about in this highly informative video) is noteworthy for two reasons. First, it’s believed that in many settings, the discrete logarithm problem is difficult to solve. Second: that assumption is critical to the security of many of the cryptosystems we know and love — for example, Diffie-Hellman and DSA.
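For the non-cryptographers: the problem itself is easy to state. Working in, say, the multiplicative group of integers modulo a large prime $p$, it asks:

$$\text{given } g \text{ and } h = g^x \bmod p, \quad \text{find } x.$$

Computing $h$ from $x$ is fast; recovering $x$ from $h$ is, as far as we know, hard in well-chosen groups, and that gap is exactly what Diffie-Hellman and DSA are built on.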

Now the Joux and Barbulescu et al. results are important work, and really do deserve attention from cryptographers and non-cryptographers alike. What they show is that there exist relatively efficient algorithms for solving discrete logarithms in certain very specific types of field. Even more amazingly, the new algorithms are efficient enough to actually implement and run — against parameters that were previously thought to have cryptographic security!

Indeed this has already had some (limited) impact on practitioners in the research community. For example, many of the pairing-based cryptography libraries I work with ship with parameters that are now deemed to be too risky thanks to these new attacks. However — and this is key — these are research libraries. To my knowledge, none of these fields is actually being used in deployed cryptography, let alone in any standard.

In other words, this is the kind of result that should receive (and has received!) lots of attention from cryptographers. But not necessarily from people who use cryptography. And here’s why.

You see, while the Joux and Barbulescu et al. algorithms really are efficient, they only work in fields with very specific properties. Namely, the fields must have small characteristic. Indeed, this feature of the field is critical to certain key steps of the algorithm. Take this property away and you still get some advances over the previous state of the art, but the advances are decidedly more theoretical.

Which brings us to the payoff: all of the fields we use to implement most cryptography — things like (non-elliptic-curve) DSA, Diffie-Hellman, and even the fields we use to implement NIST standard elliptic curves — are prime fields, and hence don’t have the properties needed to make the Joux results meaningful. So these attacks don’t seem to apply. Moreover there’s really no good reason to believe that they will anytime soon.
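To put the distinction in symbols: the ‘characteristic’ is the prime sitting underneath the field. The new algorithms shine in extension fields $\mathbb{F}_{p^n}$ where $p$ is tiny (say 2 or 3) and $n$ is large. The fields behind DSA, Diffie-Hellman and the NIST prime curves are $\mathbb{F}_p$ for a single enormous prime $p$ (the characteristic is $p$ itself), so the small-characteristic tricks don’t carry over.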

The BlackHat presentation

Which brings us to the RSSp BlackHat presentation. The overall premise of RSSp’s presentation is that advances happen rapidly. It’s not unprecedented for theoretical attacks in the literature to rapidly morph into real things that keep security engineers up at night. They also point out that attacks on the DLP have closely tracked attacks on factoring, both in the classical and the quantum world. (Ok, they don’t say the last part but it’s also true.)

RSSp also correctly imply that we should be switching away from cryptosystems that rely on the hardness of the (field-based) discrete logarithm problem, and should instead be moving to cryptosystems based on the elliptic curve discrete logarithm problem (ECDLP).* This is because none of the efficient attacks on DLP — including Joux’s algorithms — seem to apply in the (standardized) EC setting.

Lastly, they correctly point out that cryptosystems based on factoring and (field-based) Discrete Logarithms are already being deprecated by organizations like NIST for a variety of good — though not panic-related — reasons. Mostly this is because our current pre-Joux algorithms against those settings have made it too costly to get long-term (256-bit) security; you just need enormous keys. This was the case before Joux came along, and it’s still the case now.
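For a sense of what ‘enormous’ means, NIST’s own guidance (SP 800-57) puts the roughly equivalent key sizes at:

  1. 128-bit security: 3072-bit RSA/DH, or a 256-bit elliptic curve.
  2. 192-bit security: 7680-bit RSA/DH, or a 384-bit elliptic curve.
  3. 256-bit security: 15360-bit RSA/DH, or a 512-bit elliptic curve.

A 15360-bit RSA key is not something you want in every TLS handshake, which is the practical argument for elliptic curves quite apart from any new attacks.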

The last point RSSp make is also absolutely correct: we should be switching to elliptic curve cryptography (ECC) as soon as possible, in part just so people can start using high-security cryptosystems without paying through the nose in performance and bandwidth for the privilege. This isn’t totally academic, since — as Thomas Ptacek reminds me — we’re getting close to the death of 1024-bit keys. If your adversary is the NSA, anyway.

(It also doesn’t hurt that getting more people on this bandwagon will reduce the number of people rolling their own RSA implementation.)

So should we stock up on canned goods and move to Vermont?

Vermont is lovely. But you shouldn’t move there because of this presentation.

In fact this is hardly the first time we’ve seen a major breakthrough against an important cryptographic problem. In the 90s it was fast number field sieving against factoring-based systems and slightly later, things like the MOV attack on the ECDLP. In both cases, there was a small degree of panic, but ultimately a pretty mild result: cryptographers carefully examined the attack and chose new parameters that made it impractical. Then everyone went back to business.

In this case it looks like we’ve already got a set of parameters that keep us safe, so it’s even more unlikely that — except for a few researchers doing what researchers do — any of us will have to think about this in three years, or five, or even ten.

And by the way, you should not believe this because I say so — that would be foolish. You should believe it because the people who work in this area also don’t seem to think it’s an issue. If you doubt this, go to CRYPTO this week and look for people running around with their hair on fire. The number should be barely higher than usual.

What would we do if there was a real cryptopocalypse?

Right! If we’re going to gin up a cryptopocalypse let’s have a real one. What if in 2015, Joux and his co-authors publish a new algorithm that efficiently solves the DLP in prime fields and at realistic key sizes, and moreover has a factoring analog that breaks RSA? Well, this would certainly be very bad for everything we’ve encrypted in the past, but at least we’d have an immediate solution: a rapid transition to elliptic curve crypto. Whew!

But this is not much fun: like watching an alien invasion movie where the aliens are allergic to water.

So let’s go way out on a limb and imagine that in 2017, after everyone has piled into ECC, Joux et al. and a team from the University of Waterloo team up to publish a revolutionary new attack that reduces ECDLP to roughly the hardness of field-based DLP. What would happen then?

Well, this would be really bad.

Let me reiterate that there’s a reason we like our current batch of public-key cryptosystems — EC, field-based and factoring-based systems. They’re relatively easy to understand, and they’ve all been studied quite a bit. But most importantly: they’re really efficient.

Once you leave this domain you enter a region that the maps label with ‘here be dragons‘. Not because this space is empty. It’s just that there are relatively few efficient schemes that have received anywhere near the level of study that our beloved ones have.

Probably the oldest and most familiar of the alternative encryption schemes is the McEliece cryptosystem, which was developed way back in 1978 (that’s one year after RSA, in case you’re keeping score at home). McEliece and its modern variants are based on problems in algebraic coding theory: they depend for security on the hardness of decoding general codes, as well as some assumptions about the specific code used.

McEliece is surprisingly fast and (so far as we know) quite secure. There’s only one major problem: the public keys are big. According to a 2008 analysis by Bernstein, Lange and Peters, achieving security equivalent to a 3072-bit RSA key (aka the ‘128 bit’ symmetric-equivalent security level) requires a stunning 187 kilobyte McEliece public key. Moving up to 256-bit security — notably hard even for RSA — takes this to nearly 1MB. Recent improvements may cut that down a bit, but they’re still relatively unstudied.

Another possibility is to use Lattice-based cryptosystems. While there are several in the research literature, one of the most studied is the NTRU cryptosystem. I won’t confess to caring much about NTRU, except to note that it’s relatively well-studied by the standards of such alternative schemes and even shows up in some standards. Unfortunately that doesn’t mean everyone loves it. The inventors also hold a patent on it.

Lastly, for signatures at least we can always fall back on old standbys such as hash based signatures, which should hold us as long as Joan Daemen’s team can think up new hash functions.
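To see how little machinery these schemes need, here’s a toy sketch of the oldest one, Lamport’s one-time signature, where security rests on nothing but the hash function. One key pair signs exactly one message; practical designs (Merkle trees, XMSS and friends) layer on top of this idea to sign many.

```python
# Toy Lamport one-time signature. The only cryptographic assumption is that
# the hash function is hard to invert. One key pair signs ONE message.
import os, hashlib

def H(x):
    return hashlib.sha256(x).digest()

def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]  # 256 pairs of secrets
    pk = [(H(a), H(b)) for a, b in sk]                           # their hashes are public
    return sk, pk

def msg_bits(message):
    digest = H(message)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, message):
    # For each bit of H(message), reveal one of the two secrets in that pair.
    return [sk[i][b] for i, b in enumerate(msg_bits(message))]

def verify(pk, message, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(message)))

sk, pk = keygen()
assert verify(pk, b"hello", sign(sk, b"hello"))
```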

Conclusion

We live in frightening times and yes, it’s always tempting to peek under the bed and imagine scary monsters. In practice, the reality is probably a bit more mundane.

As much as we love to read stories of solitary mathematicians making revolutionary leaps in their unlit apartment, this is rarely how things go. Even the most significant advances are usually telegraphed via a series of public, widely-read research papers.

In other words: when RSA and DSA really are in danger you’ll know about it. Just look for a bunch of cryptographers running around with their hair on fire.

Notes:

* By ‘based on’ I don’t mean that these cryptosystems necessarily reduce to the ECDLP, but rather that their security depends upon the hardness of the ECDLP.

TweetNaCl

About a year ago I got into a discussion on Twitter with a couple of other cryptographers. The subject: why do so many software developers use lazy cryptography?

The instigation for this discussion was actually a piece of malware – a popular, widespread botnet that forgot to use digital signatures to sign its control messages. Though I know it’s wrong to complain about broken malware, at the same time: these idiots had money on the line! If they couldn’t be bothered to properly secure their software, how can we possibly expect, say, Evernote to get it right?

And indeed, the more you look at software, the more you realize this problem cuts across all types of software development. People suddenly find themselves in a position where they could really use strong cryptography (be it ciphers, signatures, collision-resistant hash functions) but instead of buckling down and doing it right, the developers inevitably opt for something terrible.

Terrible can mean a lot of things in this context. Mostly it means using no crypto at all. Sometimes it’s grabbing an RC4 implementation off of Wikipedia. Or copying something from Applied Cryptography. Or in a few borderline cases, grabbing the MS Crypto API and thus getting stuck with a basket of legacy algorithms thanks to ancient Windows XP compatibility issues.

Anyway, the problem — it seemed to us at the time — wasn’t just a lack of knowledge, or even  an absence of good crypto libraries. Rather, the problem was that you had to use libraries. If your developer has hit the point where s/he’s willing to copy and paste RC4 from Wikipedia, you’re already in a kind of Fifth Dimension of laziness. Nobody’s going to pull in NaCl or OpenSSL just to encrypt one little blob of text.

The output of all this thinking was a grand vision. If we couldn’t bring the developers to the crypto, we would bring the crypto to the developers. Specifically, we’d develop an easy to use crypto library small enough that you could copy and paste it right into your code. Since we were already using Twitter, we had a metric for complexity right there: the entire library would have to fit in a small number of Tweets. Jean-Philippe Aumasson (developer of the SHA3 finalist BLAKE) even laid the cornerstone by developing TweetCipher, an entirely new authenticated cipher that fits in about six Tweets.

It was an exciting time. We dreamed big! We made big plans! Then – this being Twitter – we got distracted by something shiny and forgot all about it.

Fortunately others have risen to carry on our great work. Tonight marks the publication of TweetNaCl, a re-implementation of the NaCl crypto library that fits in exactly 100 Tweets. TweetNaCl is brought to you by the same people who wrote NaCl, which means that it’s probably quite good. More to the point, it’s actually compatible with NaCl, which means it uses standard and well-vetted algorithms. (Here’s the code in indented, non-Tweet form.)

Would I recommend you use TweetNaCl? Well, no. I’d recommend you use the real NaCl. But if you do find yourself with a crypto problem, and it looks like you’re just about to copy code from Wikipedia… then maybe TweetNaCl could be just the thing.
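For a taste of why the NaCl interface is so hard to misuse, here’s roughly what encrypting a message looks like through the PyNaCl bindings (a sketch; the C TweetNaCl API is lower-level, but it exposes the same Curve25519/XSalsa20/Poly1305 construction):

```python
# Sketch of the NaCl 'Box' API via the PyNaCl bindings (pip install pynacl).
# Key generation, nonces and authentication are all handled for you.
from nacl.public import PrivateKey, Box

alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob using her secret key and his public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"attack at dawn")

# Bob decrypts with his secret key and Alice's public key.
assert Box(bob_sk, alice_sk.public_key).decrypt(ciphertext) == b"attack at dawn"
```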

Can Apple read your iMessages?

About a year ago I wrote a short post urging Apple to publish the technical details of iMessage encryption. I’d love to tell you that Apple saw my influential crypto blogging and fell all over themselves to produce a spec, but, no. iMessage is the same black box it’s always been.

What’s changed is that suddenly people seem to care. Some of this interest is due to Apple’s (alleged) friendly relationship with the NSA. Some comes from their not-so-friendly relationship with the DEA. Whatever the reason, people want to know which of our data Apple has and who they’re sharing it with.

And that brings us back to iMessage encryption. Apple runs one of the most popular encrypted communications services on Earth, moving over two billion iMessages every day. Each one is loaded with personal information the NSA/DEA would just love to get their hands on. And yet Apple claims they can’t. In fact, even Apple can’t read them:

There are certain categories of information which we do not provide to law enforcement or any other group because we choose not to retain it.

For example, conversations which take place over iMessage and FaceTime are protected by end-to-end encryption so no one but the sender and receiver can see or read them. Apple cannot decrypt that data.

This seems almost too good to be true, which in my experience means it probably is. My view is inspired by something I like to call “Green’s law of applied cryptography”, which holds that applied cryptography mostly sucks. Crypto never offers the unconditional guarantees you want it to, and when it does your users suffer terribly.

And that’s the problem with iMessage: users don’t suffer enough. The service is almost magically easy to use, which means Apple has made tradeoffs — or more accurately, they’ve chosen a particular balance between usability and security. And while there’s nothing wrong with tradeoffs, the particulars of their choices make a big difference when it comes to your privacy. By withholding these details, Apple is preventing its users from taking steps to protect themselves.

The details of this tradeoff are what I’m going to talk about in this post. A post which I swear will be the last post I ever write on iMessage. From here on out it’ll be ciphers and zero knowledge proofs all the way.

Apple backs up iMessages to iCloud

That’s the super-secret NSA spying chip.

The biggest problem with Apple’s position is that it just plain isn’t true. If you use the iCloud backup service to back up your iDevice, there’s a very good chance that Apple can access the last few days of your iMessage history.

For those who aren’t in the Apple ecosystem: iCloud is an optional backup service that Apple provides for free. Backups are great, but if iMessages are backed up we need to ask how they’re protected. Taking Apple at their word — that they really can’t get your iMessages — leaves us with two possibilities:
  1. iMessage backups are encrypted under a key that ‘never leaves the device‘.
  2. iMessage backups are encrypted using your password as a key.

Unfortunately neither of these choices really works — and it’s easy to prove it. All you need to do is run the following simple experiment: First, lose your iPhone. Now change your password using Apple’s iForgot service (this requires you to answer some simple security questions or provide a recovery email). Now go to an Apple store and shell out a fortune buying a new phone.

If you can recover your recent iMessages onto a new iPhone — as I was able to do in an Apple store this afternoon — then Apple isn’t protecting your iMessages with your password or with a device key. Too bad. (Update 6/27: Ashkan Soltani also has some much nicer screenshots from a similar test.)

The sad thing is there’s really no crypto to understand here. The simple and obvious point is this: if I could do this experiment, then someone at Apple could have done it too. Possibly at the request of law enforcement. All they need are your iForgot security questions, something that Apple almost certainly does keep.* 

Apple distributes iMessage encryption keys

But maybe you don’t use backups. In this case the above won’t apply to you, and Apple clearly says that their messages are end-to-end encrypted. The question you should be asking now is: encrypted to whom?

The problem here is that encryption only works if I have your encryption key. And that means before I can talk to you I need to get hold of it. Apple has a simple solution to this: they operate a directory lookup service that iMessage can use to look up the public key associated with any email address or phone number. This is great, but represents yet another tradeoff: you’re now fundamentally dependent on Apple giving you the right key.

HTTPS request/response containing a “message identity key” associated with an iPhone phone number (modified). These keys are sent over SSL.

The concern here is that Apple – or a hacker who compromises Apple’s directory server – might instead deliver their own key. Since you won’t know the difference, you’ll be encrypting to that person rather than to your friend.**

Moreover, iMessage lets you associate multiple public keys with the same account — for example, you can add a device (such as a Mac) to receive copies of messages sent to your phone. From what I can tell, the iMessage app gives the sender no indication of how many keys have been associated with a given iMessage recipient, nor does it warn them if the recipient suddenly develops new keys.

The practical upshot is that the integrity of iMessage depends on Apple honestly handing out keys. If they cease to be honest (or if somebody compromises the iMessage servers) it may be possible to run a man-in-the-middle attack and silently intercept iMessage data.
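For contrast, here’s a sketch (mine, not Apple’s) of the kind of fingerprint check that protocols like OTR expose and iMessage doesn’t: hash each public key into a short string that users can compare out of band, or that the client pins on first use and complains about loudly if it ever changes.

```python
# Hypothetical fingerprint / trust-on-first-use check. Nothing here is
# Apple's actual protocol; it illustrates the missing safeguard.
import hashlib

known_fingerprints = {}   # address -> fingerprint recorded on first contact

def fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

def key_looks_ok(address, public_key_bytes):
    fp = fingerprint(public_key_bytes)
    if address not in known_fingerprints:
        known_fingerprints[address] = fp        # trust on first use
        return True
    return known_fingerprints[address] == fp    # warn the user if the key changed
```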

Now to some people this is obvious, and to others it’s no big deal. All of which is fine. But people should at least understand the strengths and weaknesses of the particular design that Apple has chosen. Armed with that knowledge they can make up their minds how much they want to trust Apple.

Apple can retain metadata

While Apple may encrypt the contents of your communication, their statement doesn’t exactly rule out the possibility that they store who you’re talking to. This is the famous metadata the NSA already sweeps up and (as I’ve said before) it’s almost impossible not to at least collect this information, especially since Apple actually delivers your messages through their servers.

This metadata can be as valuable as the data itself. And while Apple doesn’t retain the content of your messages, their statement says nothing about all that metadata.

Apple doesn’t use Certificate Pinning

As a last – and fairly minor point – iMessage client applications (for iPhone and Mac) communicate with Apple’s directory service using the HTTPS protocol. (Note that this applies to directory lookup messages: the actual iMessages are encrypted separately and travel over Apple’s push network protocol.)

Using HTTPS is a good thing, and in general it provides strong protections against interception. But it doesn’t protect against all attacks. There’s still a very real possibility that a capable attacker could obtain a forged certificate (possibly by compromising a Certificate Authority) and thus intercept or modify communications with Apple.

This kind of thing isn’t as crazy as it sounds. It happened to hundreds of thousands of Iranian Gmail users, and it’s likely to happen again in the future. The standard solution to this problem is called ‘certificate pinning‘ — this essentially tells the application not to trust unknown certificates. Many apps, such as Twitter, do this. However, based on the testing I did while writing this post, Apple doesn’t.
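For the curious, pinning is not much code. Here’s a sketch of the idea using Python’s standard library; note that real apps often pin the public key or an intermediate CA rather than the leaf certificate, and the digest below is a placeholder.

```python
# Sketch of certificate pinning: accept the TLS connection only if the
# server's certificate hashes to a value baked into the client.
import hashlib, socket, ssl

PINNED_SHA256 = "0" * 64   # placeholder for the expected certificate digest

def pin_ok(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)   # DER-encoded leaf cert
            return hashlib.sha256(der_cert).hexdigest() == PINNED_SHA256
```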

Conclusion

I don’t write any of this stuff because I dislike Apple. In fact I love their products and would bathe with them if it didn’t (unfortunately) violate the warranty.

But the flipside of my admiration is simple: I rely on these devices and want to know how secure they are. I see absolutely no downside to Apple presenting at least a high-level explanation to experts, even if they keep the low-level details to themselves. This would include the type and nature of the encryption algorithms used, the details of the directory service and the key agreement protocol.

Apple may Think Different, but security rules apply to them too. Sooner or later someone will compromise or just plain reverse-engineer the iMessage system. And then it’ll all come out anyway.

Notes:

* Of course it’s possible that Apple is using your security questions to derive an encryption key. However this seems unlikely. First because it’s likely that Apple has your question/answers on file. But even if they don’t, it’s unlikely that many security answers contain enough entropy to use for encryption. There are only so many makes/models of cars and so many birthdays. Apple’s 2-step authentication may improve things if you use it — but if so Apple isn’t saying.

** In practice it’s not clear if Apple devices encrypt to this key directly or if they engage in an OTR-like key exchange protocol. What is clear is that iMessage does not include a ‘key fingerprint’ or any means for users to verify key authenticity, which means fundamentally you have to trust Apple to guarantee the authenticity of your keys. Moreover iMessage allows you to send messages to offline users. It’s not clear how this would work with OTR.

How to ‘backdoor’ an encryption app

Over the past week or so there’s been a huge burst of interest in encryption software. Applications like Silent Circle and RedPhone have seen a major uptick in new installs. CryptoCat alone has seen a zillion new installs, prompting several infosec researchers to nearly die of irritation.

From my perspective this is a fantastic glass of lemonade, if one made from particularly bitter lemons. It seems all we ever needed to get encryption into the mainstream was… ubiquitous NSA surveillance. Who knew?

Since I’ve written about encryption software before on this blog, I received several calls this week from reporters who want to know what these apps do. Sooner or later each interview runs into the same question: what happens when somebody plans a crime using one of these? Shouldn’t law enforcement have some way to listen in?

This is not a theoretical matter. The FBI has been floating a very real proposal that will either mandate wiretap backdoors in these systems, or alternatively will impose fines on providers that fail to cough up user data. This legislation goes by the name ‘CALEA II‘, after the CALEA act which governs traditional (POTS) phone wiretapping.

Personally I’m strongly against these measures, particularly the ones that target client software. Mandating wiretap capability jeopardizes users’ legitimate privacy needs and will seriously hinder technical progress in this area. Such ‘backdoors’ may be compromised by the very same criminals we’re trying to stop. Moreover, smart/serious criminals will easily bypass them.

To me, a more interesting question is how such ‘backdoors’ would even work. This isn’t something you can really discuss in an interview, which is why I decided to blog about them. The answers range from ‘dead stupid‘ to ‘diabolically technical‘, with the best answers sitting somewhere in the middle. Even if many of these are pretty obvious from a technical perspective, we can’t really have a debate until they’ve all been spelled out.

And so: in the rest of this post I’m going to discuss five of the most likely ways to add backdoors to end-to-end encryption systems.

1. Don’t use end-to-end encryption in the first place (just say you do)

There’s no need to kick down the door when you already have the keys. Similarly there’s no reason to add a ‘backdoor’ when you already have the plaintext. Unfortunately this is the case for a shocking number of popular chat systems — ranging from Google Talk (er, ‘Hangouts’) to your typical commercial Voice-over-IP system. The same statement also applies to at least some components of more robust systems: for example, Skype text messages.

Many of these systems use encryption at some level, but typically only to protect communications from the end user to the company’s servers. Once there, the data is available to capture or log to your heart’s content.

2. Own the directory service (or be the Certificate Authority)

Fortunately an increasing number of applications really do encrypt voice and text messages end-to-end — meaning that the data is encrypted all the way from sender directly to the recipient. This cuts the service out of the equation (mostly), which is nice. But unfortunately it’s only half the story.

The problem here is that encrypting things is generally the easy bit. The hard part is distributing the keys (key signing parties anyone?) Many ‘end-to-end’ systems — notably Skype*, Apple’s iMessage and Wickr — try to make your life easier by providing a convenient ‘key lookup service’, or else by acting as trusted certificate authorities to sign your keys. Some will even store your secret keys.**

This certainly does make life easier, both for you and the company, should it decide to eavesdrop on you. Since the service controls the key, it can just as easily send you its own public key — or a public key belonging to the FBI. This approach makes it ridiculously easy for providers to run a Man-in-the-Middle attack (MITM) and intercept any data they want.

This is always the best way to distinguish serious encryption systems from their lesser cousins. When a company tells you they’re encrypting end-to-end, just ask them: how are you distributing keys? If they can’t answer — or worse, they blabber about ‘military grade encryption’ — you might want to find another service.

3. Metadata is the new data

The best encryption systems push key distribution offline, or even better, perform a true end-to-end key exchange that only involves the parties to the communication. The latter applies to several protocols — notably OTR and ZRTP — used by apps like Silent Circle, RedPhone and CryptoCat.

You still have to worry about the possibility that an attacker might substitute her own key material in the connection (an MITM attack). So the best of these systems add a verification phase in which the parties check a key fingerprint — preferably in person, but possibly by reading it over a voice connection (you know what your friend’s voice sounds like, don’t you?) Some programs will even convert the fingerprint into a short ‘authentication string‘ that you can read to your friend.

From a cryptographic perspective the design of these systems is quite good. But you don’t need to attack the software to get useful information out of them. That’s because while encryption may hide what you say, it doesn’t necessarily hide who you’re talking to.

The problem here is that someone needs to move your (encrypted) data from point A to point B. Typically this work is done by a server operated by the company that wrote the app. While the server may not be able to eavesdrop on you, it can easily log the details (including IP addresses) of each call. This is essentially the same data the NSA collects from phone carriers.

Particularly when it comes to VoIP (where anonymity services like Tor just aren’t very effective), this is a big problem. Some companies are out ahead of it: Silent Circle (a company whose founders have threatened to chew off their own limbs rather than comply with surveillance orders) doesn’t log any IP addresses. One hopes the other services are as careful.

But even this isn’t perfect: just because you choose not to collect doesn’t mean you can’t. If the government shows up with a National Security Letter compelling your compliance — or just hacks your servers — that information will be obtained.

4. Escrow your keys

If you want to add real eavesdropping backdoors to a properly-designed encryption protocol you have to take things to a whole different level. Generally this requires that you modify the encryption software itself.

If you’re doing this above board you’d refer to it as ‘key escrow‘. A simple technique is just to add an extra field to the wire protocol. Each time your clients agree on a session key, you have one of the parties encrypt that key under the public key of a third party (say, the encryption service, or a law enforcement agency). The encrypted key gets shipped along with the rest of the handshake data. PGP used to provide this as an optional feature, and the US government unsuccessfully tried to mandate an escrow-capable system called Clipper.***
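In case that sounds abstract, here’s roughly what the escrow field amounts to in code: a sketch, with the Python ‘cryptography’ package standing in for whatever the real protocol would use.

```python
# Sketch of a key-escrow field: the session key is also encrypted to a third
# party's public key and shipped with the handshake. Whoever holds the escrow
# private key can later open any recorded session.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

session_key = os.urandom(32)                    # agreed on by the two clients
escrow_field = escrow_key.public_key().encrypt(session_key, oaep)
# ... ship escrow_field alongside the normal handshake messages ...

# Later, with a warrant (or a break-in), the escrow holder opens the session:
assert escrow_key.decrypt(escrow_field, oaep) == session_key
```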

In theory key escrow features don’t weaken the system. In practice this is debatable. The security of every connection now depends on the security of your master ‘escrow’ secret key. And experience tells us that wiretapping systems are surprisingly vulnerable. In 2009, for example, a group of Chinese hackers were able to breach the servers used to manage Google’s law enforcement surveillance infrastructure — giving them access to confidential data on every target the US government was surveilling.

One hopes that law enforcement escrow keys would be better secured. But they probably won’t be.

5. Compromise, Update, Exfiltrate

But what if your software doesn’t have escrow functionality? Then it’s time to change the software.

The simplest way to add an eavesdropping function is just to issue a software update. Ship a trustworthy client, ask your users to enable automatic updates, then deliver a new version when you need to. This gets even easier now that some operating systems are adding automatic background app updates.

If updates aren’t an option, there are always software vulnerabilities. If you’re the one developing the software you have some extra capabilities here. All you need to do is keep track of a few minor vulnerabilities in your server-client communication protocol — which may be secured by SSL and thus protected from third party exploits. These can be weaknesses as minor as an uninitialized memory structure or a ‘wild read’ that can be used to scan key material.

Or better yet, put your vulnerabilities in at the level of the crypto implementation itself. It’s terrifyingly easy to break crypto code — for example, the difference between a working random number generator and a badly broken one can be a single line of code, or even a couple of instructions. Re-use some counters in your AES implementation, or (better yet) implement ECDSA without a proper random nonce. You can even exfiltrate your keys using a subliminal channel.
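The ECDSA example deserves one equation, because it shows just how thin the margin is. If an implementation ever reuses a nonce $k$ across two signatures $(r, s_1)$ and $(r, s_2)$ on message hashes $z_1$ and $z_2$ (famously, this is roughly how the PS3 signing key leaked), anyone can recover first the nonce and then the long-term private key $d$:

$$k = \frac{z_1 - z_2}{s_1 - s_2} \bmod n, \qquad d = \frac{s_1 k - z_1}{r} \bmod n.$$

A backdoor like that survives a casual code review, because the code looks almost exactly like a correct implementation.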

Or just write a simple exploit like the normal kids do.

Unfortunately there’s very little we can do about things like this. Probably the best defense is to use open source code, disable software updates until others have reviewed them, and then pray you’re never the target of a National Security Letter. Because if you are — none of this crap is going to save you.

Conclusion

I hope nobody comes away with the wrong idea about any of this. I wouldn’t seriously recommend that anyone add backdoors to a piece of encryption software. In fact, this is just about the worst idea in the world.

That said, encryption software is likely to be a victim of its own success. Either we’ll stay in the technical ghetto, with only a few boring nerds adopting the technology. Or the world will catch on. And then the pressure will come. At that point the authors of these applications are going to face some tough choices. I don’t envy them one bit.

Notes:

* See this wildly out of date security analysis (still available on Skype’s site) for a description of how this system worked circa 2005.

** A few systems (notably Hushmail back in the 90s) will store your secret keys encrypted under a password. This shouldn’t inspire a lot of confidence, since passwords are notoriously easy to crack. Moreover, if the system has a ‘password recovery’ service (such as Apple’s iForgot) you can more or less guarantee that even this kind of encryption isn’t happening.

*** The story of Clipper (and how it failed) is a wonderful one. Go read Matt Blaze’s paper.

On cellular encryption

If you’re interested in technology/privacy issues then you probably heard last week’s big news out of the Boston Marathon case. It comes by way of former FBI agent Tim Clemente, who insists that our government routinely records all domestic phone calls.

Clemente’s claim generated lots of healthy skepticism. This isn’t because the project is technically infeasible (the numbers mostly add up), or because there’s no precedent for warrantless wiretapping. To me the most convincing objection was simple: it’d be hard to keep secret.* Mostly for boring phone company reasons.

But this led to another interesting discussion. What if we forget local phone eavesdropping and focus on an ‘easier’ problem: tapping only cellular phone calls?

Cellular eavesdropping seems a lot more tractable, if only because mobile calls are conducted on a broadcast channel. That means you can wiretap with almost no carrier involvement. In fact there’s circumstantial evidence that this is already happening — just by different parties than you’d think. According to a new book by reporters Marc Ambinder and Dave Brown:

The FBI has quietly removed from several Washington, D.C.–area cell phone towers, transmitters that fed all data to wire rooms at foreign embassies.

This raises a few questions: once you’ve tapped someone’s cellular signals, what do you do with the data? Isn’t it all encrypted? And how easy would it be for the US or some foreign government to lay their hands on the plaintext?

All of which serves as a wonderful excuse to noodle about the state of modern cellular encryption. Be warned that this is not going to be a short post! For those who don’t like long articles, here’s the TL;DR: cellular encryption is a whole lot worse than you think.

GSM

GSM is the granddaddy of all digital cellular protocols, and it remains one of the most popular protocols in the world. One thing that makes GSM special is its call encryption capability: the protocol is designed to encrypt all calls between the handset and the local tower.

Call encryption is facilitated by a long-term secret key (call it K) that’s stored within the tamper-resistant SIM card in your GSM phone. Your carrier also has a copy of this key. When your GSM phone connects to a tower, the parties execute the following authentication and key agreement protocol:

GSM authentication and key agreement protocol (source). MS represents the ‘mobile station’ (phone) and HLR is the ‘home location register’, a central database. The MS and HLR combine a long-term secret Ki with a random nonce RAND to create the shared communication key Kc. A3 and A8 are typically implemented using the COMP128 function.

The interaction above serves two purposes: first, both the phone and carrier (HLR) derive a pair of values that will be used for authentication and key agreement. This includes a session key Kc as well as a short authentication token SRES that the phone will use to authenticate itself to the tower. Derivation is performed using two functions (A3 and A8) that accept the long-term secret K and a random nonce RAND.
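Or, compressed into two lines:

SRES = A3(K, RAND)        Kc = A8(K, RAND)

The phone sends SRES back to prove it knows K, and Kc becomes the session key for the A5 call cipher. Note that nothing in this exchange tells the phone anything about the tower, which is exactly the first weakness below.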
There are a handful of well-known problems with the GSM security protocols. In no particular order, they are:
  1. Lack of tower authentication. GSM phones authenticate to the tower, but the tower doesn’t authenticate back. This means that anyone can create a ‘fake’ tower that your phone will connect to. The major problem here is that in GSM, the tower gets to pick the encryption algorithm! That means your attacker can simply turn encryption off (by setting encryption ‘algorithm’ A5/0) and just route the cleartext data itself. In theory your phone is supposed to alert you to this kind of attack, but the SIM chip contains a bit that can deactivate the warning. And (as researcher Chris Paget discovered) carriers often set this bit.
  2. Bad key derivation algorithms. The GSM ciphers were developed using the ‘make stuff up and pray nobody sees it’ school of cryptographic algorithm design. This is a bad approach, since it’s incredibly hard to keep algorithms secret — and when they do leak, they tend to break badly. This was the case for the original A3/A8 algorithms, which are both implemented using a single function called COMP128-1. Unfortunately COMP128 turns out to be seriously broken — to the point where you can clone a user’s SIM key in as few as 8 queries.
  3. Bad encryption algorithms. Fortunately it’s easy to replace COMP128-1 by swapping out your SIM card. Unfortunately the situation is much worse for GSM’s A5/1 call encryption cipher, which is embedded in the hardware of most handsets and tower equipment. A5/1 was leaked around the same time as COMP128 and rapidly succumbed to a series of increasingly powerful attacks. Today it’s possible to conduct an efficient known-plaintext attack on A5/1 using a series of rainbow tables you can obtain from BitTorrent. The upshot is that most A5/1 calls can now be decrypted on a high-end PC.
  4. Terrible encryption algorithms. But it’s actually worse than that. GSM phones support an ‘export weakened’ variant called A5/2, which is so weak you can break it in real time. The worst part is that A5/2 uses the same key as A5/1 — which means an active attacker (see #1 above) can briefly activate the A5/2 mode, crack it to recover the encryption key, then switch back to A5/1 with a known key. This is much faster than attacking A5/1 directly, and allows eavesdroppers to intercept incoming phone calls from a legitimate (A5/1 supporting) tower.
Alleged ‘Stingray’ tracking device mounted on an SUV (source).

Another unfortunate aspect of the GSM protocol is that you don’t need to attack the crypto to do useful things. For example, if all you want to do is determine which devices are in an area, you simply present yourself as a valid tower — and see which phones connect to you (by sending their IMSI values). This is the approach taken by IMSI-catchers like Stingray.

Now the ‘good news’ here is that attacks (1), (2) and (4) require active involvement by the attacker. This means they have to be after you specifically and — at least in principle — they’re detectable if you know what to look for. (You don’t.) However, the weaknesses in A5/1 are a whole different kettle of fish. They permit decryption of even passively recorded A5/1 GSM calls (in real time, or after the fact) even by an attacker with modest resources.

3G/4G/LTE

A valid response to the points above is to note that GSM is nearly 30 years old. You probably wouldn’t blame today’s Ford execs for the crash performance of a 1982 Ford Escort, and similarly you shouldn’t hold the GSM designers responsible for a 1980s protocol — even if billions of people still rely on it.

Overview of the 3G AKA protocol. Look familiar?
The great news is that modern phones often support the improved ‘3G’ (e.g., UMTS) or LTE standards. These offer a number of enhancements that substantially improve security over the original GSM. They can be summed up as follows:
  1. Mutual authentication. The 3G protocols use a new ‘Authentication and Key Agreement‘ (AKA) protocol, which adds mutual authentication to the tower connection. To validate that the phone is speaking to a legitimate tower, the carrier now computes a MAC that the phone can verify before initiating a connection. This prevents many of the uglier protocol attacks that plagued GSM.
  2. Better authentication algorithms. The session keys and authentication tags are still computed using proprietary algorithms — now called f1-f5, f5* — but the algorithms are purportedly much stronger. Since their design is carrier-specific it’s not easy to say exactly how they work. However this 3GPP recommendation indicates that they might be based on a block cipher like AES.
  3. Better encryption. Call encryption in 3G uses a proprietary block cipher called KASUMI. KASUMI is based on a Mitsubishi proposal called MISTY1, which was heavily customized to make it faster in cellular hardware.

The biggest source of concern for 3G/LTE is that you may not be using it. Most phones are programmed to gracefully ‘fail over’ to GSM when a 3G/4G connection seems unavailable. Active attackers exploit this feature to implement a rollback attack — jamming 3G/4G connections, and thus re-activating all of the GSM attacks described above.

A more subtle concern is the weakness of the KASUMI cipher. Unfortunately KASUMI seems much weaker than the original MISTY1 algorithm — so much weaker that in 2010 Dunkelman, Keller and Shamir were able to implement a related-key attack that recovered a full 128 bit call key in just under two hours!

Now before you panic, you should know that executing this attack requires a substantial amount of data, all of which must be encrypted under highly unrealistic attack conditions. Still, it’s interesting to note that the same attack fails completely when applied to the original MISTY1 design.

Top: generation of keys (CK, IK) and authentication tags AUTN, XRES using the functions f1-f5. Bottom: one proposed implementation of those functions using a 128-bit block cipher Ek. This could be AES (source).
And yes, it looks complicated to me, too.
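For the curious, here is a toy rendering of the figure’s idea in Python: derive the authentication tag and session keys by running RAND through a 128-bit block cipher keyed with K. The constants, the OP_C mixing, and the field layouts of the real 3GPP (MILENAGE-style) construction are deliberately omitted, and the `cryptography` package is an assumed dependency; this only illustrates the shape of the computation, not the standard itself.

```python
# Toy rendering of the figure: derive tag and keys from K and RAND using a
# 128-bit block cipher (AES here). This is NOT the real 3GPP construction --
# constants and field layouts are simplified to keep the shape visible.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def E(k: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def derive(K: bytes, RAND: bytes):
    temp = E(K, RAND)
    MAC  = E(K, xor(temp, b"\x01" * 16))   # roughly f1: tower-auth tag (in AUTN)
    XRES = E(K, xor(temp, b"\x02" * 16))   # roughly f2: expected phone response
    CK   = E(K, xor(temp, b"\x03" * 16))   # roughly f3: confidentiality key
    IK   = E(K, xor(temp, b"\x04" * 16))   # roughly f4: integrity key
    return MAC, XRES, CK, IK

K, RAND = os.urandom(16), os.urandom(16)
MAC, XRES, CK, IK = derive(K, RAND)
```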

What if you already have the keys?

 The encryption flaws above seem pretty significant, and they really are — if you’re a private eavesdropper or a foreign government. For US national security organizations they’re probably beside the point. It seems unlikely that the NSA would have to ‘break’ any crypto at all.

This is because both the GSM and AKA protocols lack an important property known as forward secrecy. What this means is that if I can record an encrypted call, and later obtain the long-term key K for that phone, then I can still reliably decrypt the whole communication — even months or years later. (Protocols such as Diffie-Hellman or ECMQV prevent this.) Worse, for cellular conversations I can do it even if I only have one half (the tower side) of the communication channel.
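A toy example makes the point concrete. Everything below is a stand-in (SHA-256 in place of the real key derivation and call cipher), but the structural problem is faithful: the nonce travels in the clear, so the session key is recomputable from K alone, whenever you happen to get K.

```python
# Why the lack of forward secrecy matters: the session key is a deterministic
# function of the long-term key K and a nonce (RAND) sent in the clear, so a
# recorded call plus a later-obtained K is enough to decrypt. Toy stand-ins
# throughout -- a real call uses A5/x or KASUMI, not a SHA-256 keystream.
import hashlib, os

def session_key(K, RAND):
    return hashlib.sha256(K + RAND).digest()

def keystream_xor(key, data):
    out, counter = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

K, RAND = os.urandom(16), os.urandom(16)            # RAND is visible on the air
recorded = (RAND, keystream_xor(session_key(K, RAND), b"attack at dawn"))

# Years later: the eavesdropper obtains K (say, from a carrier key database)
# and decrypts the old recording. Nothing ephemeral was ever mixed in.
RAND, ciphertext = recorded
print(keystream_xor(session_key(K, RAND), ciphertext))   # b'attack at dawn'
```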

Unfortunately I don’t think you have to be a conspiracy theorist to suppose that the carriers’ key databases are probably stored somewhere in Ft. Meade. Thus to the truly paranoid: stop talking on cellphones.

In conclusion

Sometimes conspiracy theories can be fun. Sometimes they’re downright creepy.

For me this discussion veers towards the ‘creepy’. Not so much because I think the NSA really is tapping all our cellphones (I suspect they just read our Facebook). Rather, the creepiness is in seeing just how vulnerable our privacy infrastructure is, even to people who are far less capable than the NSA.

As a result, your data isn’t just available to nation states: it’s also potentially available to that goofball neighbor who bought an IMSI-catcher off the Internet. Or to your business competitor. Or even to that one girl who finally got GnuRadio to compile.

We’re rapidly approaching a technological crossroads, one where we need to decide if we’re going to keep trusting others to protect our data — or if we’re going to take charge of it and treat carriers as the dumb and insecure pipes that they are. I suspect that we’re well on our way towards this world — and sadly, so does the FBI. It’ll be interesting to see how things play out.

Notes:

The argument is that recording all local calls in real time is a bigger job than just splitting major trunks or re-routing a few specific targets. It would seem to require significant changes to the US phone infrastructure. This is particularly true at the local office, where you’d need to upgrade and provision thousands of devices to record all calls passing through them. Building this capability is possible, but would require a lot of effort — not to mention the complicity of hundreds (or thousands) of collaborators at the local telcos. And that means a whole lot of people keeping a very big secret. People aren’t so good at that.

Zerocoin: making Bitcoin anonymous

Wow, what the heck is going on with Bitcoin?

When I started this post, the value of a single bitcoin had surged upwards of $250. It’s corrected a bit since then (down $100 or so), but it’s pretty clear that we live in a very different world than we did two weeks ago.

And I’m not sure I really like this world. It’s a world where I have to listen to CNBC reporters try to understand Bitcoin. Ouch. I think we can all agree that we were better off before this happened.

The explosion of interest in Bitcoin is both wonderful and terrible. It’s wonderful because Bitcoin is an amazing technical innovation — the first decentralized electronic currency to actually make something of itself. It’s terrible because Bitcoin has some technical rough edges that really need to be filed off before we start using it for anything.

The rough edge that particularly interests me is user privacy. Or rather, Bitcoin’s troubling lack of it.

In this post I’m going to describe a new piece of research out of my lab at Johns Hopkins that provides one potential solution to this problem. This is joint work led by my hardworking students Ian Miers and Christina Garman, along with my colleague Avi Rubin. Our proposal is called Zerocoin, and we’ll be presenting it at this year’s IEEE S&P.

For those who just want the TL;DR, here it is:

Zerocoin is a new cryptographic extension to Bitcoin that (if adopted) would bring true cryptographic anonymity to Bitcoin. It works at the protocol level and doesn’t require new trusted parties or services. With some engineering, it might (someday) turn Bitcoin into a completely untraceable, anonymous electronic currency.

In the rest of the post I’m going to explain Zerocoin, what it can do for Bitcoin, and how far away that ‘someday‘ might be. This is going to be a long, wonky post, so I won’t be offended if you stop here and take my word that it’s all true.

For everyone else, strap in. I need to start with some background.

Bitcoin in 300 words

Before I get to Zerocoin I need to give the world’s shortest explanation of how Bitcoin works. (See here for a slightly less terrible explanation.)

At its heart, Bitcoin is a transaction network with a distributed public ledger. Transactions are files that contain messages like “User X transfers 3 bitcoins to user Y” and “User Y transfers 2.5 of those bitcoins to user Z”. Users aren’t identified by name. Instead, their identities are public keys for a digital signature scheme.* This allows users to sign their transactions, and makes it very difficult to forge them.

Now none of this stuff is really new. What makes Bitcoin special is the way it maintains the transaction ledger. Rather than storing the whole thing on a single computer, the ledger — called a block chain — is massively replicated and updated by a swarm of mutually distrustful parties running in a peer-to-peer network.

To make this work, nodes pull transactions off of a peer-to-peer broadcast network, then compete for the opportunity to tack them onto the end of the chain. To keep one party from dominating this process (and posting bad transactions), competition is enforced by making the parties solve hard mathematical problems called ‘proofs of work‘. The integrity of the block chain is enforced using hash chaining, which makes it very difficult to change history.
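If you want to see those two mechanisms in miniature, here is a toy sketch: a hash-chained list of blocks where appending a block requires a proof of work. None of the field names or difficulty rules match Bitcoin’s actual block format; it’s only meant to show why tampering with history is hard.

```python
# Toy version of the two mechanisms just described: a hash-chained ledger and
# a 'proof of work' requirement on each new block.
import hashlib, json

DIFFICULTY = 16   # require 16 leading zero bits; Bitcoin adjusts this dynamically

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, transactions: list) -> dict:
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        h = block_hash(block)
        if int(h, 16) >> (256 - DIFFICULTY) == 0:   # enough leading zero bits?
            return block
        nonce += 1

chain = [mine("0" * 64, ["genesis"])]
chain.append(mine(block_hash(chain[-1]), ["X pays Y 3 BTC"]))

# Tampering with an old transaction changes that block's hash, which breaks
# the 'prev' pointer of every later block -- that's the hash chaining at work.
chain[0]["txs"] = ["genesis", "X pays EVIL 1000 BTC"]
assert chain[1]["prev"] != block_hash(chain[0])
```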

Now the block chain is fascinating, and if you’re interested in the gory details you should by all means see here. This post mostly isn’t going to get into it. For now all you need to know is that the block chain works like a global ledger. It’s easy to add (valid) transactions at the end, but it’s astonishingly difficult to tamper with the transactions that are already there.

So what’s the problem?

The block chain is Bitcoin’s greatest strength. Unfortunately from a privacy perspective, it’s also the currency’s greatest weakness.

This is because the block chain contains a record of every single Bitcoin transaction that’s ever been conducted. Due to the way Bitcoin works, this information can’t be limited to just a few trustworthy parties, since there are no trusted parties. This means all of your transactions are conducted in public.

Illustration of a Bitcoin block chain. Each transaction is tied to the one that precedes it.
The transaction at far left is almost certainly a drug deal.

In a sense this makes Bitcoin less private than cash, and even worse than credit cards. If you choose to engage in sensitive transactions on Bitcoin, you should be aware that a record will be preserved for all eternity. Spend with care.

Now some will say this is unfair, since Bitcoin users are not identified by name — the only identifier associated with your transactions is your public key. Moreover, you can make as many public keys as you’d like. In other words, Bitcoin offers privacy through pseudonymity, which some argue is almost as good as the real thing.

But don’t get too comfortable. Already several academic works have succeeded in de-anonymizing Bitcoin transactions. And this work is just getting started. You see, there’s an entire subfield of computer science that can roughly be described as ‘pulling information out of things that look exactly like the Bitcoin transaction graph’. These researchers haven’t done much to Bitcoin yet, but that’s only because they’re still fighting over the grant money. We will see more.

If you’re a Bitcoin user who values your privacy, this should worry you. The worst part is that right now your options are somewhat limited. Roughly speaking, they are:

Just be careful. Generate lots of public keys and make sure your client software is extremely careful not to use them in ways that could tie one to another (e.g., getting ‘change’ from one key sent to another). This seems to be the major privacy thrust of the current Bitcoin development effort, and we’re all waiting to see how it pans out.

Use a laundry. For the more paranoid, there are services called ‘laundries’ that take in bitcoins from a whole bunch of users, mix them up and shuffle them back out. In theory this makes it hard to track your money. Unfortunately, laundries suffer from a few problems. First, they only work well if lots of people are using them, and today’s laundries have relatively low volume. More importantly, you’re entirely dependent on the honesty and goodwill of the laundry itself. A dishonest (or hacked) laundry can steal your coins, or even trace its inputs and outputs — which could completely undermine your privacy.

Use a Chaumian e-cash system. On the wonkier side, there have been attempts to implement real anonymous cryptographic e-Cash for Bitcoin. I’ve written about these systems before and while I think they’re neat, the existing schemes (from Chaum on forward) have one critical flaw: they all rely on a central ‘bank’ to issue and redeem e-Cash tokens. The need for this bank has been a major stumbling block in getting these systems up and running, and it’s almost unworkable for Bitcoin — since trusted parties are antithetical to Bitcoin’s decentralized nature.

In short, the current solutions aren’t perfect. It would be awfully nice if we had something better. Something with the power of cryptographic e-Cash, but without the need to change Bitcoin’s network model. And this is where Zerocoin comes in.

Zerocoin

Zerocoin is not intended as a replacement for Bitcoin. It’s actually a separate anonymous currency that’s designed to live side-by-side with Bitcoin on the same block chain. Zerocoins are fully exchangeable on a one-to-one basis with bitcoins, which means (in principle) you can use them with existing merchants.

Zerocoins themselves can be thought of literally as coins. They’re issued in a fixed denomination (for example, 1 BTC), and any user can purchase a zerocoin in exchange for the correct quantity of bitcoin. This purchase is done by placing a special new ‘Zerocoin Mint’ transaction onto the block chain.

Once a Mint transaction has been accepted by the Bitcoin peers, the same user can later redeem her zerocoin back into bitcoins. She simply embeds a (preferably new) destination Bitcoin address into a ‘Zerocoin Spend’ transaction, then sends it into the network. If the transaction checks out, the Bitcoin peers will treat it just like a normal Bitcoin transfer — meaning that she’ll receive the full bitcoin value of the coin (minus transaction fees) at the destination address.

Now you’re probably wondering what any of this has to do with privacy. To explain that, I need to give you one more piece of information:

Aside from educated guesswork, there’s no way to link a Zerocoin Mint transaction to the Zerocoin Spend transaction that redeems it.

Redeeming a zerocoin gives you a completely different set of bitcoins than the ones you used to purchase it. In fact, you can think of Zerocoin like the world’s biggest laundry — one that can handle millions of users, has no trusted party, and can’t be compromised. Once a user converts her bitcoins into zerocoins, it’s very hard to determine where she took them back out. Her funds are mixed up with those of all the other users who also created zerocoins. And that’s a pretty powerful guarantee.

Illustration of a Bitcoin/Zerocoin block chain. A user transforms bitcoins into a zerocoin,
then (at some unspecified later point) ‘Spends’ it to redeem the bitcoins. The linkage between Mint
and Spend (dotted line) cannot be determined from the block chain data.

The key to the whole process is to make it all work at the protocol level — meaning, without adding new trusted parties. And doing that is the goal of Zerocoin.

How does it work?

Zerocoin uses a combination of digital commitments, one-way accumulators and zero-knowledge proofs, along with some extensions to the existing Bitcoin protocol. It also shares some similarities with earlier work by Sander and Ta-Shma. For the details, you can see our paper. Here I’m going to try to give a very high-level intuition that avoids the muck.

The key idea in Zerocoin is that each coin commits to (read: encrypts) a random serial number. These coins are easy to create — all you need to do is pick the serial number and run a fast commitment algorithm to wrap this up in a coin. The commitment works like encryption, in that the resulting coin completely hides the serial number. At the same time, the coin ‘binds’ you to the number you’ve chosen. The serial number is secret, and it stays with you.
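As an illustration of that commit step, here is a toy Pedersen-style commitment with absurdly small, insecure parameters (the real scheme uses large, carefully generated ones). The point is just the interface: the published coin hides the serial number, yet pins you to it.

```python
# Toy Pedersen-style commitment: the 'commit to a serial number' step above.
# Parameters are laughably small and purely illustrative.
import secrets

p = 467                  # tiny safe prime: p = 2q + 1 with q = 233
q = 233
g = pow(2, 2, p)         # generators of the order-q subgroup
h = pow(3, 2, p)         # (their discrete-log relation must be unknown)

def commit(serial: int, r: int) -> int:
    """Hide 'serial' inside a coin; 'r' is randomness you keep secret."""
    return (pow(g, serial % q, p) * pow(h, r % q, p)) % p

serial = secrets.randbelow(q)     # the coin's secret serial number
r      = secrets.randbelow(q)
coin   = commit(serial, r)        # this value goes onto the block chain

# The coin reveals nothing about 'serial' (hiding), yet you can't later claim
# it commits to a different serial without solving a discrete log (binding).
assert coin == commit(serial, r)
```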

To ‘Mint’ the new coin you post it to the network along with a standard Bitcoin transaction containing enough (normal) bitcoins to ‘pay for’ it. The Mint transaction adds some new messages to the Bitcoin protocol, but fundamentally there’s no magic here. The Bitcoin network will accept the transaction into the block chain as long as the input bitcoins check out.**

The Zerocoin ‘Spend’ transaction is a little bit more complicated. To redeem your zerocoin, you first create a new transaction that contains the coin’s serial number (remember that you kept it secret after you made the coin). You also attach a zero-knowledge proof of the following two statements:

  1. You previously posted a valid zerocoin on the block chain.
  2. This particular zerocoin contained the serial number you put in your transaction.
The key to making this all work is that zero-knowledge proof. What you need to know about these proofs is that anyone can verify one, and they’ll be absolutely convinced that you’re telling the truth about both statements. At the same time, the proof reveals absolutely no other information (hence the ‘zero’ knowledge).
This means anyone who sees your Spend transaction will be convinced that you really did previously Mint a zerocoin, and that it contained the serial number you just revealed. They can then check the block chain to make sure that particular serial number has never been Spent before. At the same time, the zero-knowledge property ensures that they have absolutely no idea which zerocoin you’re actually spending. The number of such coins could easily run into the millions.
All of this leads us to one final question: where do your bitcoins go after you Mint a zerocoin, and how do you get them back when you Spend?
The simple answer is that they don’t go anywhere at all. The bitcoins used in a ‘Mint’ transaction just sit there on the block chain. The Zerocoin protocol semantics require that nobody can access those coins again except by publishing a valid Zerocoin ‘Spend’. When you publish a Spend, the protocol allows you to ‘claim’ any of the previously-committed bitcoins — regardless of who posted them. In other words, you Mint with one set of bitcoins, and you leave with someone else’s.
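Putting the pieces together, here is a heavily simplified sketch of the bookkeeping a Zerocoin-aware node might do, with the zero-knowledge check hidden behind a hypothetical verify_proof() placeholder. In the real protocol the set of minted coins is represented compactly by a one-way accumulator rather than a literal set, and the function names here are mine, not the protocol’s.

```python
# Heavily simplified sketch of Zerocoin-aware validation. The zero-knowledge
# machinery is hidden behind a hypothetical verify_proof() placeholder.
minted_coins  = set()   # commitments seen in valid Mint transactions
spent_serials = set()   # serial numbers revealed in valid Spend transactions

def verify_proof(proof, serial, coin_set) -> bool:
    """Placeholder for the ZK check: 'some coin in coin_set commits to serial,
    and the prover knows its randomness'. Not implemented in this sketch."""
    raise NotImplementedError

def accept_mint(coin_commitment, escrowed_btc_ok: bool) -> bool:
    if not escrowed_btc_ok:          # the backing bitcoins must check out
        return False
    minted_coins.add(coin_commitment)
    return True

def accept_spend(serial, proof, destination_address) -> bool:
    if serial in spent_serials:      # no double spends
        return False
    if not verify_proof(proof, serial, minted_coins):
        return False
    spent_serials.add(serial)
    # ...credit destination_address with one coin's worth of escrowed bitcoin
    return True
```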

When will Zerocoin be available?

For those looking to use Zerocoin tomorrow, I would advise patience. We’ve written a proof-of-concept implementation that extends the C++ bitcoind client to support Zerocoin, and we’ll be releasing a cleaned up version of our code when we present the paper in May. (Update: available here.)

But before you get excited, I need to point out some pretty serious caveats.

First of all, Zerocoin is not cheap. Our current zero-knowledge proof averages around 40KB, and takes nearly two seconds to verify. By the standards of advanced crypto primitives this is fantastic. At the same time, it poses some pretty serious engineering challenges — not least of which is: where do you store all these proofs?

This probably isn’t the end of the world. For one thing, it seems likely that we’ll be able to reduce the size and cost of verifying the proof, and we think that even the current proof could be made to work with some careful engineering. Still, Zerocoin as currently construed is probably not going to go online anytime soon. But some version of Zerocoin might be ready in the near future.

Another problem with Zerocoin is the difficulty of incrementally deploying it. Supporting the new Mint and Spend functionality requires changes to every Bitcoin client. That’s a big deal, and it’s unlikely that the Bitcoin folks are going to accept a unilateral protocol change without some serious pushback. But even this isn’t a dealbreaker: it should be possible to start Zerocoin off using some training wheels — using a trusted central party to assist with the process, until enough Bitcoin clients trust it and are willing to support it natively.

In fact, one of the biggest barriers to adoption is human beings themselves. As complicated as Bitcoin is, you can explain the crypto even to non-experts. This makes people happy. Unfortunately Zerocoin is a different animal. It will take time to convince people that these new techniques are safe. We hope to be there when it happens.

In conclusion

I realize that this blog post has run slightly longer than usual. Thanks for sticking with me!

As regular readers of this blog know, I have a passion for anything that gets interesting crypto out into the world. Bitcoin is a great example of this. It would be wonderful if this gave us an opportunity to do even more interesting things. Perfectly untraceable e-Cash would definitely fit that bill.

But even if we don’t get there, the fact that reputable computer science conferences are accepting papers about ‘that crazy Bitcoin thing’ tells you a lot about how much it’s grown up. In the long run, this is good news for everyone.

Notes:

* There are a lot of simplifications in here. Identities are the hash of your public key. The block chain is really computed using a Merkle tree for efficiency reasons. The peer-to-peer network isn’t quite a broadcast network. Did I mention you should read the paper?

** Note that for those who ‘know’ their Bitcoin, you can think of the Zerocoin as a piece of extra data that gets added to a Bitcoin transaction. The inputs are still standard bitcoins. There’s just no output. Instead, the transaction contains the coin data.

The Ideal Cipher Model (wonky)

A friend who’s learning cryptography writes with a few questions about block ciphers:

(1) Let’s say we’re using AES-128 — 128 bit keys, 128 bit blocks.

  • For a given 128 bit block of plaintext “P” – if I was to iterate through all 2**128 key permutations and encrypt the same plaintext P with each key, would the outputs all be unique, or would there be collisions?
  • For a given 128 bit key “K”, if I was to use K to encrypt every possible (2**128) plaintext, would the outputs all be unique, or would there be collisions?
These are all reasonable questions with simple answers. But I’m not going to give them. Why bother with two simple answers when we can give one really complicated intuition?

What’s Claude Shannon got to do with it?

Back in the late 1940s, when people were still working out this whole information theory business, a gentleman named Claude Shannon came up with a model for what a block cipher should do. His idea was — and yes, I’m embellishing a lot here — that an ‘ideal’ block cipher would work kind of like a magical elf.*

You could ask the elf to encipher and decipher things for you. But instead of using a mathematical algorithm, it would just keep a blackboard with a big table. The table would have three columns (Key, Plaintext, Ciphertext), and it would start off empty.

When you asked the elf to encipher a plaintext P under a key K, it would do the following:
  1. Check to see if (K, P) were already recorded in some row of the table. If they were, it would read off the corresponding ciphertext value C from the same row, and return C.
  2. If no matching row was found, it would generate a perfectly random ciphertext C.
  3. It would then check for an existing table entry containing the same key and ciphertext (K, C). If this entry was found in the table, it would throw C away and repeat step (2).
  4. Otherwise it would add (K, P, C) to the table and return C to the caller.

Now I realize this looks complicated, but if you think about this for a little while you’ll realize that this little thought experiment does ‘work’ a lot like a real block cipher.
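In fact, the elf’s bookkeeping translates almost line-for-line into code. Here’s a small Python rendering: a lazily sampled random permutation per key, with a table that remembers every answer it has ever given.

```python
# A literal rendering of the elf's bookkeeping: lazily sample an independent
# random permutation for each key, remembering every answer.
import os

class IdealCipher:
    def __init__(self, block_bytes: int = 16):
        self.block_bytes = block_bytes
        self.enc = {}   # (key, plaintext)  -> ciphertext
        self.dec = {}   # (key, ciphertext) -> plaintext

    def encipher(self, key: bytes, pt: bytes) -> bytes:
        if (key, pt) in self.enc:                    # step 1: reuse old answers
            return self.enc[(key, pt)]
        while True:
            ct = os.urandom(self.block_bytes)        # step 2: fresh random block
            if (key, ct) not in self.dec:            # step 3: stay a permutation
                break
        self.enc[(key, pt)] = ct                     # step 4: record and return
        self.dec[(key, ct)] = pt
        return ct

    def decipher(self, key: bytes, ct: bytes) -> bytes:
        if (key, ct) in self.dec:
            return self.dec[(key, ct)]
        while True:
            pt = os.urandom(self.block_bytes)
            if (key, pt) not in self.enc:
                break
        self.enc[(key, pt)] = ct
        self.dec[(key, ct)] = pt
        return pt

elf = IdealCipher()
k, p = os.urandom(16), b"attack at dawn!!"           # exactly one 16-byte block
assert elf.decipher(k, elf.encipher(k, p)) == p      # consistent, like a cipher
```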

Just like a block cipher, it will always give the same output when you pass it a given key and plaintext. Furthermore — for a given key K, no two plaintexts will ever encipher to the same ciphertext. This models the fact that a block cipher is a permutation.

Lastly, the output of the encipherment is very ‘random’ — in the sense that it’s intuitively not linked to the input. Calling the cipher on different inputs should produce values that are very much unrelated.

Of course, you also need the elf to decipher stuff as well. Decipherment works mostly like the process above. When you ask to decipher (K, C), it checks to see whether the given key and ciphertext are already in the table (i.e., they were previously enciphered). If so it looks up and returns the corresponding plaintext. Otherwise it generates a new random plaintext, makes sure it hasn’t previously appeared in the table with that key, and returns the result.

Under the assumption that AES works like our elf, we can now answer my friend’s questions. Clearly if I encrypt the same plaintext with many different keys, I will get many different ciphertexts. They’ll each be random and 128 bits long, which means the probability of any two ciphertexts being the same (colliding) is quite small. But there’s no rule preventing it, and over a large enough set it’s actually quite likely.

Similarly, the second question is answered by the fact that the cipher is a permutation, something that’s captured in our elf’s rule (3).

This is an awful lot of work to answer a few simple questions…

Yes, it certainly is. But the ‘Elf’ model is useful for answering much more interesting questions — like: is a given encryption scheme secure?

For example, take the CBC mode of operation, which is a way to turn a block cipher into an encryption scheme:

CBC encryption takes in a random key (K) and a random ‘Initialization Vector’ (IV), both of which are chosen by the encryptor. Now let’s see what happens if we CBC-encrypt an N-block message using our elf.

  1. The encryptor XORs the first plaintext block with the (random) Initialization Vector (IV), which should give a randomly-distributed value. We’ll call this P’.
  2. He then sends (K, P’) to be enciphered by the elf. Since the elf has never seen (K, P’) — meaning it’s not in the table — it will generate a random value (C1), which will become the first ciphertext block.
  3. The encryptor now XORs the next plaintext block with C1, which again should yield a randomly-distributed value which we’ll call P”.
  4. He sends (K, P”) to be enciphered by the elf. Since P” is also random (and, say, 128 bits long), the probability that it’s already in the table is astronomically low. Thus with high probability the elf will again generate a random value (C2), which becomes the second ciphertext block.
  5. He then repeats steps (3) and (4) for all the remaining blocks.
Note that with overwhelming probability — and unless you encrypt really long ciphertexts** — the elf will generate a random value for each ciphertext block. Hence the entire ciphertext will consist of a random string, which means it really shouldn’t leak anything about the message (except for the length).
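Here’s that CBC argument as a runnable toy, with a bare-bones elf standing in for the block cipher (I’ve dropped the elf’s permutation-collision check for brevity; it only matters at astronomical message lengths). Identical plaintext blocks still come out as unrelated-looking ciphertext blocks.

```python
# CBC walked through with the elf standing in for the block cipher. Because
# each block fed to the elf is (with overwhelming probability) new, every
# ciphertext block comes back as an independent random string.
import os

table = {}                           # the elf's (key, plaintext) -> ciphertext board
def elf_encipher(key, block):
    if (key, block) not in table:
        table[(key, block)] = os.urandom(16)   # fresh random answer, recorded forever
    return table[(key, block)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key, iv, blocks):
    prev, out = iv, []
    for p in blocks:
        prev = elf_encipher(key, xor(p, prev))  # steps 1-4 from the list above
        out.append(prev)
    return out

key, iv = os.urandom(16), os.urandom(16)
message = [b"yellow submarine", b"yellow submarine"]   # identical plaintext blocks...
c1, c2 = cbc_encrypt(key, iv, message)
print(c1 != c2)   # ...still encrypt to unrelated-looking ciphertext blocks
```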

Of course, an attacker might also talk to the elf to try to learn something about the message. But note that even this attack isn’t useful unless he can guess the (random) key K, since the elf will give him random results unless he asks for a value that includes K. Since K is a randomly-chosen secret key, the attacker should not be able to guess it.

Do ideal ciphers exist?

Now all of this is well and good, but it leaves us with an important question: is AES actually as good as an ideal cipher? Unfortunately, the resounding answer to this question is no.

The problem here is that ideal ciphers are totally unworkable. Obviously we can’t have an actual elf randomly filling in a bulletin board as we encrypt things. I want to carry a copy of the block cipher around with me (in software or hardware). I also want my copy of the cipher to be consistent with your copy, so that I can send you messages and you can decrypt them.

To make the elf idea work, we would each need to carry around a copy of the elf’s table that’s completely filled in from the start — meaning it would already contain entries for every possible plaintext (resp. ciphertext) and key I might ever need to encipher/decipher. When you consider that there would be 2^128 rows just for a single key, you realize that, no, this is not a question of running out to Best Buy and picking up some more SD cards. It’s fundamentally hard.

So carrying ideal ciphers around is not possible. The question then is: is there something that’s as good as an ideal cipher, without requiring us to carry around an exponentially-sized table?

The answer to that question is a mixed bag. The bad news is that nothing really works as well as an ideal cipher. Worse yet, there exist schemes that would be provably secure with an ideal cipher, but would fail catastrophically if you implemented them with any real block cipher. So that sucks.

On the other hand, those are theoretical results. Unless you’re doing some very specific things, the ideal cipher model is a moderately helpful tool for understanding what block ciphers are capable of. It’s also a good jumping off point for understanding the real proofs we actually use for modes like CBC. These proofs use more ‘realistic’ assumptions, e.g., that the block cipher is a pseudorandom permutation.

But those proofs will have to wait for another time. I’ve reached my wonkery limit for today.

Notes:

* For the record, at no point did Claude Shannon ever refer to an ‘elf’. At least not in writing.

** The probability of a ‘collision’ (i.e., asking the elf to encipher a value that’s already been enciphered) goes up as you encipher more blocks. At a certain point (for AES, close to 2^64 blocks) it becomes quite high. Not coincidentally, this roughly coincides with the number of blocks NIST says you should encrypt with CBC mode.

Attack of the week: RC4 is kind of broken in TLS

Update: I’ve added a link to a page at Royal Holloway describing the new attack. 

Listen, if you’re using RC4 as your primary ciphersuite in SSL/TLS, now would be a great time to stop. Ok, thanks, are we all on the same page?

No?

I guess we need to talk about this a bit more. You see, these slides have been making the rounds since this morning. Unfortunately, they contain a long presentation aimed at cryptographers, and nobody much seems to understand the real result that’s buried around slide 306 (!). I’d like to help.

Here’s the short summary:

According to AlFardan, Bernstein, Paterson, Poettering and Schuldt (a team from Royal Holloway, Eindhoven and UIC) the RC4 ciphersuite used in SSL/TLS is broken. If you choose to use it — as do a ridiculous number of major sites, including Google — then it may be possible for a dedicated attacker to recover your authentication cookies. The current attack is just on the edge of feasibility, and could probably be improved for specific applications.

This is bad and needs to change soon.

In the rest of this post I’ll cover the details in the usual ‘fun’ question-and-answer format I save for these kinds of attacks.

What is RC4 and why should I care?

RC4 is a fast stream cipher invented in 1987 by Ron Rivest. If you like details, you can see this old post of mine for a longwinded discussion of RC4’s history and flaws. Or you can take my word: RC4 is old and crummy.

Despite its age and crumminess, RC4 has become an extremely popular ciphersuite for SSL/TLS connections — to the point where it accounts for something absurd like half of all TLS traffic. There are essentially two reasons for this:

  1. RC4 does not require padding or IVs, which means it’s immune to recent TLS attacks like BEAST and Lucky13. Many admins have recommended it as the solution to these threats.
  2. RC4 is pretty fast. Faster encryption means less computation and therefore lower hardware requirements for big service providers like Google.

So what’s wrong with RC4?

Like all stream ciphers, RC4 takes a short (e.g., 128-bit) key and stretches it into a long string of pseudo-random bytes. These bytes are XORed with the message you want to encrypt, resulting in what should be a pretty opaque (and random-looking) ciphertext.

The problem with RC4 is that the above statement is not quite true. The bytes that come out of RC4 aren’t quite random-looking — they have small biases. A few of these biases have been known for years, but weren’t considered useful. However, the recent academic work has systematically uncovered many small but significant additional biases running throughout the first 256 bytes of RC4 output.

At first blush this doesn’t seem like a big deal. Cryptographically, however, it’s a disaster. If I encrypt the same message (plaintext) with many different RC4 keys, then I should get a bunch of totally random looking ciphertexts. But these tiny biases mean that they won’t be random — a statistical analysis of different positions of the ciphertext will show that some values occur more often.

By getting many different encryptions of the same message — under different keys — I can tally up these small deviations from random and thus figure out what was encrypted.
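Here’s a self-contained demo of that tallying idea, using just the single best-known bias (the second output byte of RC4 is zero about twice as often as it should be, an observation due to Mantin and Shamir). The ‘cookie’ and the 50,000-connection count are made up for the demo; the real attack combines many weaker biases across many positions and needs far more ciphertexts.

```python
# A from-scratch demo of the 'broadcast' attack idea: across many encryptions
# of the same plaintext under different keys, the most frequent ciphertext
# byte at a biased position IS the plaintext byte at that position.
# (Pure Python; the loop below takes a few seconds to run.)
import os
from collections import Counter

def rc4_keystream(key: bytes, n: int) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                     # key scheduling (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                       # keystream generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

secret = b"Cookie: SID=xyz"                  # same 'cookie' sent on every connection
counts = Counter()
for _ in range(50_000):                      # 50k connections, each with a fresh key
    ks = rc4_keystream(os.urandom(16), 2)
    counts[secret[1] ^ ks[1]] += 1           # ciphertext byte at the biased position

guess, _ = counts.most_common(1)[0]
print(guess == secret[1])                    # almost always True
```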

If you like analogies, think of it like peeking at a picture with your hands over your eyes. By peeking through your fingers you might only see a tiny sliver of the scene you’re looking at. But by repeating the observation many times, you may be able to combine those slivers and figure out what you’re looking at.

Why would someone encrypt the same message many times?

The problem here (as usual) is your browser.

You see, there are certain common elements that your browser tends to send at the beginning of every HTTP(S) connection. One of these values is a cookie — typically a fixed string that identifies you to a website. These cookies are what let you log into Gmail without typing your password every time.

If you use HTTPS (which is enforced on many sites by default), then your cookies should be safe. After all, they’ll always be sent over an encrypted connection to the website. Unfortunately, if your connection is encrypted using RC4 (as is the case with Gmail), then each time you make a fresh connection to the Gmail site, you’re sending a new encrypted copy of the same cookie. If the session is renegotiated (i.e., uses a different key) between those connections, then the attacker can build up the list of ciphertexts he needs.

To make this happen quickly, an attacker can send you a piece of Javascript that your browser will run — possibly on a non-HTTPS tab. This Javascript can then send many HTTPS requests to Google, ensuring that an eavesdropper will quickly build up thousands (or millions) of requests to analyze.

Probability of recovering a plaintext byte (y axis) at a particular byte position in the RC4-encrypted stream (x axis), assuming 2^24 ciphertexts. Increasing the number of ciphertexts to 2^32 gives almost guaranteed plaintext recovery at all positions. (source)

Millions of ciphertexts? Pah. This is just crypto-wankery.

It’s true that the current attack requires an enormous number of TLS connections — in the tens to hundreds of millions — which means that it’s not likely to bother you. Today. On the other hand, this is the naive version of the attack, without any special optimizations.

For example, the results given by Bernstein assume no prior plaintext knowledge. But in practice we often know a lot about the structure of an interesting plaintext like a session cookie. For one thing, they’re typically Base64 encoded — reducing the number of possibilities for each byte — and they may have some recognizable structure which can be teased out using advanced techniques.

It’s not clear what impact these specific optimizations will have, but it tends to reinforce the old maxim: attacks only get better, they never get worse. And they can get a lot better while you’re arguing about them.

So what do we do about it?

Treat this as a wakeup call. Sites (like Google) need to stop using RC4 — before these attacks become practical. History tells us that nation-state attackers are already motivated to penetrate Gmail connections. The longer we stick with RC4, the more chances we’re giving them.

In the short term we have an ugly choice: stick with RC4 and hope for the best, or move back to CBC mode ciphersuites — which many sites have avoided due to the BEAST and Lucky13 attacks.

We could probably cover ourselves a bit by changing the operation of browsers, so as to detect and block Javascript that seems clearly designed to exercise these attacks. We could also put tighter limits on the duration and lifespan of session cookies. In theory we could also drop the first few hundred bytes of each RC4 connection — something that’s a bit of a hack, and difficult to do without breaking the TLS spec.

In the longer term, we need to move towards authenticated encryption modes like the new AEAD TLS ciphersuites, which should put an end to most of these problems. The problem is that we’ll need browsers to support these modes too, and that could take ages.

In summary

I once made fun of Threatpost for sounding alarmist about SSL/TLS. After all, it’s not like we really have an alternative. But the truth is that TLS (in its design, deployment and implementation) needs some active help. We’re seeing too many attacks, and they’re far too close to practical. Something needs to give.

We live in a world where NIST is happy to give us a new hash function every few years. Maybe it’s time we put this level of effort and funding into the protocols that use these primitives? They certainly seem to need it.