How do we build encryption backdoors?


They say that history repeats itself, first as tragedy, then as farce. Never has this principle been more apparent than in this new piece by Washington Post reporters Ellen Nakashima and Barton Gellman: ‘As encryption spreads, U.S. grapples with clash between privacy, security’.

The subject of the piece is a renewed effort by U.S. intelligence and law enforcement agencies to mandate ‘backdoors’ in modern encryption systems. This is ostensibly a reaction to the mass adoption of strong encryption in smartphones, and a general fear that police are about to lose wiretapping capability they’ve come to depend on.

This is not the first time we’ve been here. Back in the 1990s the Federal government went as far as to propose a national standard for ‘escrowed’ telephone encryption called the ‘Clipper’ chip. That effort failed in large part because the technology was terrible, but also because — at least at the time — the idea of ordinary citizens adopting end-to-end encryption was basically science fiction.

Thanks to the advent of smartphones and ‘on-by-default’ encryption in popular systems like Apple’s iMessage and WhatsApp, Americans are finally using end-to-end encryption at large scale and seem to like it. This is scaring the bejesus out of the powers that be.

Hence crypto backdoors.

As you might guess, I have serious philosophical objections to the idea of adding backdoors to any encryption system — for excellent reasons I could spend thousands of words on. But I’m not going to do that. What I’d like to do in this post is put aside my own value judgements and try to take these government proposals at face value.

Thus the question I’m going to consider in this post:

Let’s pretend that encryption backdoors are a great idea. From a purely technical point of view, what do we need to do to implement them, and how achievable is it?

First some background.

End-to-end encryption 101

Modern encryption schemes break down into several categories. For the purposes of this discussion we’ll consider two: systems where the provider holds the decryption keys, and systems where it doesn’t.

We’re not terribly interested in the first type of encryption, which includes protocols like SSL/TLS and Google Hangouts, since those only protect data at the link layer, i.e., until it reaches your provider’s servers. I think it’s fairly well established that if Facebook, Apple, Google or Yahoo can access your data, then the government can access it as well — simply by subpoenaing or compelling those companies. We’ve even seen how this can work.

The encryption systems we’re interested in all belong to the second class — protocols where even the provider can’t decrypt your information. Roughly speaking, this includes:

  1. End-to-end encrypted messaging systems such as Apple’s iMessage, WhatsApp, TextSecure/Signal and Telegram*.
  2. Full-device encryption, such as the storage encryption that’s now on by default on recent iPhones.
  3. End-to-end tools like PGP and OTR, used by a smaller but dedicated population.

This seems like a relatively short list, but in practice we’re talking about an awful lot of data. The iMessage and WhatsApp systems alone process billions of instant messages every day, and Apple’s device encryption is on by default for everyone with a recent(ly updated) iPhone.

How to defeat end-to-end encryption

If you’ve decided to go after end-to-end encryption through legal means, there are a relatively small number of ways to proceed.

By far the simplest is to simply ban end-to-end crypto altogether, or to mandate weak encryption. There’s some precedent for this: throughout the 1990s, the NSA forced U.S. companies to ship ‘export’ grade encryption that was billed as being good enough for commercial use, but weak enough for governments to attack. The problem with this strategy is that attacks only get better — but legacy crypto never dies.

Fortunately for this discussion, we have some parameters to work with. One of these is that Washington seems to genuinely want to avoid dictating technological designs to Silicon Valley. More importantly, President Obama himself has stated that “there’s no scenario in which we don’t want really strong encryption”. Taking these statements at face value should mean that we can exclude outright crypto bans, mandated designs, and any modification that has the effect of fundamentally weakening encryption against outside attackers.

If we mix this all together, we’re left with only two real options:

  1. Attacks on key distribution. In systems that depend on centralized, provider-operated key servers, such as WhatsApp, Facetime, Signal and iMessage,** governments can force providers to distribute illegitimate public keys, or register shadow devices to a user’s account. This is essentially a man-in-the-middle attack on encrypted communication systems.
  2. Key escrow. Just about any encryption scheme can be modified to encrypt a copy of a decryption (or session) key such that a ‘master keyholder’ (e.g., Apple, or the U.S. government) can still decrypt. A major advantage is that this works even for device encryption systems, which have no key servers to suborn.

Each approach requires some modifications to clients, servers or other components of the system.

Attacking key distribution

[Figure: Key lookup request for Apple iMessage. The phone number is shown at top right, and the response at bottom left.]

Many end-to-end encrypted messaging systems, including WhatsApp and iMessage, generate a long-term public and secret keypair for every device you own. The public portion of this keypair is distributed to anyone who might want to send you messages. The secret key never leaves the device.

Before you can initiate a connection with your intended recipient, you first have to obtain a copy of the recipient’s public key. This is commonly handled using a key server that’s operated by the provider. The key server may hand back one, or multiple public keys (depending on how many devices you’ve registered). As long as those keys all legitimately belong to your intended recipient, everything works fine.

Intercepting messages is possible, however, if the provider is willing to substitute its own public keys — keys for which it (or the government) actually knows the secret half. In theory this is relatively simple — in practice it can be something of a bear, due to the high complexity of protocols such as iMessage.
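
To make this concrete, here’s a minimal sketch of the substitution attack in Python. The RSA-OAEP encryption (via the ‘cryptography’ package) is just a stand-in for whatever the real messaging protocol uses, and the key server, user names and API are all invented for illustration — nothing here reflects how iMessage or WhatsApp actually work internally.

```python
# Toy illustration of a key-substitution (man-in-the-middle) attack on a
# provider-operated key server. RSA-OAEP stands in for the real protocol.
# Requires the 'cryptography' package. All names here are hypothetical.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def new_keypair():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Honest world: Bob registers his public key with the provider's key server.
bob_key = new_keypair()
key_server = {"bob": bob_key.public_key()}

# Compelled world: the provider substitutes a key the eavesdropper controls.
wiretap_key = new_keypair()
key_server["bob"] = wiretap_key.public_key()

# Alice looks up "bob" and encrypts to whatever key the server returns.
message = b"see you at the usual place"
ciphertext = key_server["bob"].encrypt(message, OAEP)

# The eavesdropper reads the message...
intercepted = wiretap_key.decrypt(ciphertext, OAEP)
print(intercepted)

# ...then re-encrypts under Bob's *real* key and forwards it, so Bob
# notices nothing unless he compares key fingerprints out of band.
forwarded = bob_key.public_key().encrypt(intercepted, OAEP)
print(bob_key.decrypt(forwarded, OAEP))
```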

[Figure: Key fingerprints.]

The main problem with key distribution attacks is that — unlike a traditional wiretap — substitute keys are at least in theory detectable by the target. Some communication systems, like Signal, allow users to compare key fingerprints in order to verify that each party received the right public key. Others, like iMessage and WhatsApp, don’t offer this technology — but could easily be modified to do so (even using third party clients). Systems like CONIKS may even automate this process in the future — allowing applications to monitor changes to their own keys in real time as they’re distributed by a server.
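
For the curious, here’s one hypothetical way a client might compute such a fingerprint — hash the encoded public key and present a short digest two humans can read to each other. Real applications differ in the exact encoding, hash and truncation they use, so treat this purely as a sketch.

```python
# Hypothetical key-fingerprint computation: hash the DER-encoded public key
# and display a short, human-comparable digest. Real protocols vary in the
# exact encoding, hash and truncation. Requires the 'cryptography' package.
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def fingerprint(public_key):
    der = public_key.public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo)
    digest = hashlib.sha256(der).hexdigest()
    # group into short blocks so two people can read the value aloud
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

print(fingerprint(key.public_key()))
# If a key server substituted a different key, this value would change --
# which is exactly what a careful user (or a tool like CONIKS) can detect.
```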

A final, and salient, feature of the key distribution approach is that it allows only prospective eavesdropping — that is, law enforcement must first target a particular user, and only then can they eavesdrop on her connections. There’s no way to look backwards in time. I see this as a generally good thing. Others may disagree.

Key Escrow 


[Figure: Structure of the Clipper ‘LEAF’.]

The techniques above don’t help much for systems without public key servers. Moreover, they do nothing for systems that don’t use public keys at all, the prime example being device encryption. In this case, the only real alternative is to mandate some sort of key escrow.

Abstractly, the purpose of an escrow system is to place decryption keys on file (‘escrow’ them) with some trusted authority, who can break them out when the need arises. In practice it’s usually a bit more complex.

The first wrinkle is that modern encryption systems often feature many decryption keys, some of which can be derived on-the-fly while the system operates. (Systems such as TextSecure/WhatsApp actually derive new encryption keys for virtually every message you send.) Users with encrypted devices may change their password from time to time.

To deal with this issue, a preferred approach is to wrap these session keys up (encrypt them) under some master public key generated by the escrow authority — and to store/send the resulting ciphertexts along with the rest of the encrypted data. In the 1990s Clipper specification these ciphertexts were referred to as Law Enforcement Access Fields, or LEAFs.***

With added LEAFs in your protocol, wiretapping becomes relatively straightforward. Law enforcement simply intercepts the encrypted data — or obtains it from your confiscated device — extracts the LEAFs, and requests that the escrow authority decrypt them. You can find variants of this design dating back to the PGP era. In fact, the whole concept is deceptively simple — provided you don’t go farther than the whiteboard.

[Figure: Conceptual view of some encrypted data (left) and a LEAF (right).]
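
Here’s what the whiteboard version of that design might look like in Python: encrypt the data under a fresh session key, then wrap a copy of that session key under the escrow authority’s public key and attach the result as a LEAF-like field. The escrow keypair, field layout and function names are all assumptions for illustration, not any real proposal.

```python
# Whiteboard sketch of a LEAF-style escrow field: the message is encrypted
# under a fresh session key, and a copy of that session key is wrapped under
# the escrow authority's public key. Names and layout are hypothetical.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The escrow authority's master keypair (Apple, the government, whoever).
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def encrypt_with_leaf(plaintext: bytes, escrow_public):
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    leaf = escrow_public.encrypt(session_key, OAEP)   # the escrowed copy
    return nonce, ciphertext, leaf

def escrow_decrypt(nonce, ciphertext, leaf, escrow_private):
    session_key = escrow_private.decrypt(leaf, OAEP)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

nonce, ct, leaf = encrypt_with_leaf(b"wiretap me", escrow_key.public_key())
# Law enforcement intercepts (nonce, ct, leaf) and asks the authority to
# unwrap the LEAF; no cooperation from the endpoints is required.
print(escrow_decrypt(nonce, ct, leaf, escrow_key))
```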

It’s only when you get into the details of actually implementing key escrow that things get hairy. These schemes require you to alter every protocol in your encryption system, at a pretty fundamental level — in the process creating the mother of all security vulnerabilities — but, more significantly, they force you to think very seriously about who you trust to hold those escrow keys.

Who does hold the keys?

This is the million dollar question for any escrow platform. The Post story devotes much energy to exploring various proposals for doing this.

Escrow key management is make-or-break, since the key server represents a universal vulnerability in any escrowed communication system. In the present debate there appear to be two solutions on the table. The first is to simply dump the problem onto individual providers, who will be responsible for managing their escrow keys — using whatever technological means they deem appropriate. A few companies may get this right. Unfortunately, most companies suck at cryptography, so it seems reasonable to believe that the resulting systems will be quite fragile.

The second approach is for the government to hold the keys themselves. Since the escrow key is too valuable to entrust to one organization, one or more trustworthy U.S. departments would hold ‘shares’ of the master key, and would cooperate to provide decryption on a case-by-case basis. This was, in fact, the approach proposed for the Clipper chip.

The main problem with this proposal is that it’s non-trivial to implement. If you’re going to split keys across multiple agencies, you have to consider how you’re going to store those keys, and how you’re going to recover them when you need to access someone’s data. The obvious approach — bring the key shares back together at some centralized location — seems quite risky, since the combined master key would be vulnerable in that moment.

A second approach is to use a threshold cryptosystem. Threshold crypto refers to a set of techniques for storing secret keys across multiple locations so that decryption can be done in place without recombining the key shares. This seems like an ideal solution, with only one problem: nobody has deployed threshold cryptosystems at this kind of scale before. In fact, many of the protocols we know of in this area have never even been implemented outside of the research literature. Moreover, it will require governments to precisely specify a set of protocols for tech companies to implement — this seems incompatible with the original goal of letting technologists design their own systems.
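
To give a flavor of what ‘decryption in place’ means, here’s a toy t-of-n threshold ElGamal sketch: the escrow key is Shamir-shared across agencies, each agency returns only a partial decryption, and the partials are combined without the key shares ever being reassembled. The parameters are deliberately tiny and the whole thing is illustrative — a real deployment would need verifiable sharing, distributed key generation and much more.

```python
# Toy t-of-n threshold ElGamal decryption: the escrow key is split with
# Shamir sharing; each shareholder returns only a partial decryption and
# the key is never reassembled. Toy-sized parameters, illustration only.
import secrets

p, q, g = 467, 233, 4            # p = 2q + 1; g generates the order-q subgroup
n, t = 5, 3                      # 5 agencies, any 3 can decrypt

# Shamir-share the escrow secret x over Z_q: shares[i] = f(i), with f(0) = x.
x = secrets.randbelow(q - 1) + 1
coeffs = [x] + [secrets.randbelow(q) for _ in range(t - 1)]
shares = {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q
          for i in range(1, n + 1)}
h = pow(g, x, p)                 # the escrow authority's public key

# ElGamal-encrypt a message m (an element of the subgroup) to the escrow key.
m = pow(g, 42, p)
k = secrets.randbelow(q - 1) + 1
c1, c2 = pow(g, k, p), (m * pow(h, k, p)) % p

# Each agency computes a partial decryption from its own share only.
subset = [1, 3, 5]
partials = {i: pow(c1, shares[i], p) for i in subset}

# Combine partials with Lagrange coefficients (evaluated at 0, mod q).
def lagrange(i, subset):
    lam = 1
    for j in subset:
        if j != i:
            lam = lam * j * pow((j - i) % q, -1, q) % q
    return lam

c1_x = 1
for i in subset:
    c1_x = c1_x * pow(partials[i], lagrange(i, subset), p) % p

recovered = c2 * pow(c1_x, -1, p) % p
assert recovered == m            # decryption works, yet x never left its shares
print(recovered, m)
```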

Software implementations

A final issue to keep in mind is the complexity of the software we’ll need to make all of this happen. Our encryption software is already so complex that it’s literally at the breaking point. (If you don’t believe me, take a look at OpenSSL’s security advisories for the last year.) While adding escrow mechanisms seems relatively straightforward, it will actually require quite a bit of careful coding, something we’re just not good at.

Even if we do go forward with this plan, there are many unanswered questions. How widely can these software implementations be deployed? Will every application maker be forced to use escrow? Will we be required to offer a new set of system APIs in iOS, Windows and Android that we can use to get this right? Answering each of these questions will result in dramatic changes throughout the OS software stack. I don’t envy the poor developers who will have to answer them.

How do we force people to use key escrow?

Leaving aside the technical questions, the real question is: how do you force anyone to do this stuff? Escrow requires breaking changes to most encryption protocols; it’s costly as hell; and it introduces many new security concerns. Moreover, laws outlawing end-to-end encryption software seem destined to run afoul of the First Amendment.

I’m not a lawyer, so don’t take my speculation too seriously — but it seems intuitive to me that any potential legislation will be targeted at service providers, not software vendors or OSS developers. Thus the real leverage for mandating key escrow will apply to the Facebooks and Apples of the world. Your third-party PGP and OTR clients would be left alone, for the tiny percentage of the population who uses these tools.

Unfortunately, even small app developers are increasingly running their own back-end servers these days (e.g., Whisper Systems and Silent Circle) so this is less reassuring than it sounds. Probably the big takeaway for encryption app developers is that it might be good to think about how you’ll function in a world where it’s no longer possible to run your own back-end data transport service — and where other commercial services may not be too friendly to moving your data for you.

In conclusion

If this post has been more questions than answers, that’s because there really are no answers right now. A serious debate is happening in an environment that’s almost devoid of technical input, at least from technical people who aren’t part of the intelligence establishment.

And maybe that by itself is reason enough to be skeptical.

Notes:

* Not an endorsement. I have many thoughts on Telegram’s encryption protocols, but they’re beyond the scope of this post.

** Telegram is missing from this list because their protocol doesn’t handle long term keys at all. Every single connection must be validated in person using a graphical key fingerprint, which is, quite frankly, terrible.

*** The Clipper chip used a symmetric encryption algorithm to encrypt the LEAF, which meant that the LEAF decryption key had to be present inside of every consumer device. This was completely nuts, and definitely a bullet dodged. It also meant that every single Clipper had to be implemented in hardware using tamper resistant chip manufacturing technology. It was a truly awful design.

How to paint yourself into a corner (Lenovo edition)

The information security news today is all about Lenovo’s default installation of a piece of adware called “Superfish” on a number of laptops shipped before February 2015. The Superfish system is essentially a tiny TLS/SSL “man in the middle” proxy that attacks secure connections by making them insecure — so that the proxy can insert ads in order to, oh, I don’t know, let’s just let Lenovo tell it:

“To be clear, Superfish comes with Lenovo consumer products only and is a technology that helps users find and discover products visually,” the representative continued. “The technology instantly analyses images on the web and presents identical and similar product offers that may have lower prices, helping users search for images without knowing exactly what an item is called or how to describe it in a typical text-based search engine.”

Whatever.

The problem here is not just that this is a lousy idea. It’s that Lenovo used the same certificate on every single laptop it shipped with Superfish. And since the proxy software also requires the corresponding private key to decrypt and modify your web sessions, that private key was also shipped on every laptop. It took all of a day for a number of researchers to find that key and turn themselves into Lenovo-eating interception proxies. This sucks for Lenovo users.

If you’re a Lenovo owner in the affected time period, go to this site to find out if you’re vulnerable and (hopefully) what to do about it. But this isn’t what I want to talk about in this post.

Instead, what I’d like to discuss is some of the options for large-scale automated fixes to this kind of vulnerability. It’s quite possible that Lenovo will do this by themselves — pushing an automated patch to all of their customers to remove the product — but I’m not holding my breath. If Lenovo does not do this, there are roughly three options:

  1. Lenovo users live with this and/or manually patch. If the patch requires manual effort, I’d estimate it’ll be applied to about 30% of Lenovo laptops. Beware: the current uninstall package does not remove the certificate from the root store!
  2. Microsoft drops the bomb. Microsoft has a nuclear option of its own when it comes to cleaning up nasty software — it can use the Windows Update mechanism or (less universally) the Windows Defender tool to remove spyware/adware. Unfortunately not everyone uses Defender, and Microsoft is probably loath to push out updates like this without massive testing and a lot of advice from the lawyers.
  3. Google and Mozilla fix internally. This seems like a more promising option. Google Chrome in particular is well known for quickly pushing out security updates that revoke keys, add public key pins, and generally make your browsing experience more secure.

It seems unlikely that #1 and #2 will happen anytime soon, so the final option looks initially like the most promising. Unfortunately it’s not that easy. To understand why, I’m going to sum up some reasoning given to me (on Twitter) by a couple of members of the Chrome security team.

The obvious solution to fixing things at the Browser level is to have Chrome and/or Mozilla push out an update to their browsers that simply revokes the Superfish certificate. There’s plenty of precedent for that, and since the private key is now out in the world, anyone can use it to build their own interception proxy. Sadly, this won’t work! If Google does this, they’ll instantly break every Lenovo laptop with Superfish still installed and running. That’s not nice, or smart business for Google.

A more promising option is to have Chrome at least throw up a warning whenever a vulnerable Lenovo user visits a page that’s obviously been compromised by a Superfish certificate. This would include most (secure) sites any Superfish-enabled Lenovo user visits — which would be annoying — and just a few pages for those users who have uninstalled Superfish but still have the certificate in their list of trusted roots.
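
A rough sketch of the detection side: fetch the certificate a site presents to your machine and flag it if the issuer matches a blocklist of known-bad CAs. The issuer name below is my assumption about what the Superfish CA looks like, not a tested value, and a real browser fix would examine the full verified chain and pin exact fingerprints rather than rely on this quick heuristic.

```python
# Quick heuristic: grab the certificate a site presents to *this* machine and
# flag it if the issuer matches a blocklist of known-bad CAs. The issuer name
# below is an assumption for illustration; a real fix would check the full
# chain and pin exact fingerprints. Requires the 'cryptography' package.
import ssl
from cryptography import x509
from cryptography.x509.oid import NameOID

BAD_ISSUER_NAMES = {"Superfish, Inc."}   # hypothetical blocklist entry

def looks_intercepted(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    issuer_cn = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)
    return any(attr.value in BAD_ISSUER_NAMES for attr in issuer_cn)

print(looks_intercepted("example.com"))  # False on a clean machine; True if a
                                         # blocklisted CA signed the leaf cert
```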

This seems much nicer, but runs into two problems. First, someone has to write this code — and in a hurry, because attacks may begin happening immediately. Second, what action item are these warnings going to give people? Manually uninstalling certificates is hard, and until a very nice tool becomes available a warning will just be an irritation for most users.

One option for Google is to find a way to deal with these issues systemically — that is, provide an option for their browser to tunnel traffic through some alternative (secure) protocol to a proxy, where it can then go securely to its location without being molested by Superfish attackers of any flavor. This would obviously require consent by the user — nobody wants their traffic being routed through Google otherwise. But it’s at least technically feasible.

Google even has an extension for Android/iOS that works something like this: it’s a compressing proxy extension that you can install in Chrome. It will shrink your traffic down and send it to a proxy (presumably at Google). Unfortunately this proxy won’t work even if it were available for Windows machines — because Superfish would likely just intercept its connections too 😦

So that’s out too, and with it the last obvious idea I have for dealing with this in a clean, automated way. Hopefully the Google team will keep going until they find a better solution.

The moral of this story, if you choose to take one, is that you should never compromise security for the sake of a few bucks — because security is so terribly, awfully difficult to get back.

Hopefully the last post I’ll ever write on Dual EC DRBG

I’ve been working on some other blog posts, including a conclusion of (or at least an installment in) this exciting series on zero knowledge proofs. That’s coming soon, but first I wanted to take a minute to, well, rant.

The subject of my rant is this fascinating letter authored by NSA cryptologist Michael Wertheimer in February’s Notices of the American Mathematical Society. Dr. Wertheimer is currently the Director of Research at NSA, and formerly held the position of Assistant Deputy Director and CTO of the Office of the Director of National Intelligence for Analysis.

In other words, this is a guy who should know what he’s talking about.

The subject of Dr. Wertheimer’s letter is near and dear to my heart: the alleged subversion of NIST’s standards for random number generation — a subversion that was long suspected and apparently confirmed by classified documents leaked by Edward Snowden. The specific algorithm in question is called Dual EC DRBG, and it very likely contains an NSA backdoor. Those who’ve read this blog should know that I think it’s as suspicious as a three dollar bill.

Reading Dr. Wertheimer’s letter, you might wonder what I’m so upset about. On the face of it, the letter appears to express regret. To quote (with my emphasis):

With hindsight, NSA should have ceased supporting the Dual_EC_DRBG algorithm immediately after security researchers discovered the potential for a trapdoor. In truth, I can think of no better way to describe our failure to drop support for the Dual_EC_DRBG algorithm as anything other than regrettable. The costs to the Defense Department to deploy a new algorithm were not an adequate reason to sustain our support for a questionable algorithm. Indeed, we support NIST’s April 2014 decision to remove the algorithm. Furthermore, we realize that our advocacy for the Dual_EC_DRBG casts suspicion on the broader body of work NSA has done to promote secure standards. 

I agree with all that. The trouble is that on closer examination, the letter doesn’t express regret for the inclusion of Dual EC DRBG in national standards. The transgression Dr. Wertheimer identifies is merely that NSA continued to support the algorithm after major questions were raised. That’s bizarre.

Even worse, Dr. Wertheimer reserves a substantial section of his letter for a defense of the decision to deploy Dual EC. It’s those points that I’d like to address in this post.

Let’s take them one at a time.

1: The Dual_EC_DRBG was one of four random number generators in the NIST standard; it is neither required nor the default.

It’s absolutely true that Dual EC was only one of four generators in the NIST standard. It was not required for implementers to use it, and in fact they’d be nuts to use it — given that overall it’s at least two orders of magnitude slower than the other proposed generators.

The bizarre thing is that people did indeed adopt Dual EC in major commercial software packages. Specifically, RSA Security included it as the default generator in their popular BSAFE software library. Much worse, there’s evidence that RSA was asked to do this by NSA, and were compensated for their compliance.

This is the danger with standards. Once NIST puts its seal on an algorithm, it’s considered “safe”. If the NSA came to a company and asked it to use some strange, non-standard algorithm, the request would be considered deeply suspicious by company and customers alike. But how can you refuse to use a standard if your biggest client asks you to? Apparently RSA couldn’t.

2: The NSA-generated elliptic curve points were necessary for accreditation of the Dual_EC_DRBG but only had to be implemented for actual use in certain DoD applications.

This is a somewhat misleading statement, one that really needs to be unpacked.

First, the original NSA proposal of Dual EC DRBG contained no option for alternate curve points. This is an important point, since it’s the selection of curve points that gives Dual EC its potential for a “back door”. By generating two default points (P, Q) in a specific way, the NSA may have been able to create a master key that would allow them to very efficiently decrypt SSL/TLS connections.
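
To see why the point selection matters, here’s a toy analogue of the trapdoor written in the multiplicative group of integers mod p rather than on an elliptic curve, and ignoring the output truncation of the real generator. This is emphatically not the actual Dual EC arithmetic — it just shows the algebra: whoever knows the discrete log d relating P and Q can turn a single observed output into the generator’s next internal state.

```python
# Toy analogue of the Dual EC trapdoor in the multiplicative group mod p.
# Real Dual EC uses elliptic-curve points and truncates its output; this
# sketch skips both, keeping only the algebraic relationship between P and Q.
import secrets

p = 2**127 - 1                      # a Mersenne prime; toy group is Z_p^*
Q = 3
d = secrets.randbelow(p - 2) + 2    # the designer's secret relating P and Q
P = pow(Q, d, p)                    # the published constant: P = Q^d mod p

def dual_ec_step(state):
    state = pow(P, state, p)        # state update (analogue of s = x(s*P))
    return state, pow(Q, state, p)  # output       (analogue of r = x(s*Q))

# Honest user: seed the generator, emit two "random" outputs.
seed = secrets.randbelow(p - 2) + 2
state, out1 = dual_ec_step(seed)
state, out2 = dual_ec_step(state)

# Designer: out1^d = (Q^state)^d = (Q^d)^state = P^state, which is exactly the
# value the next state update will produce. One output predicts all the rest.
next_state = pow(out1, d, p)
predicted_out2 = pow(Q, next_state, p)
assert predicted_out2 == out2
print("predicted future output correctly:", predicted_out2 == out2)
```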

If you like conspiracy theories, here’s what NIST’s John Kelsey was told when he asked how the NSA’s points were generated:

In 2004-2005, several participants on the ANSI X9 tools committee pointed out the potential danger of this backdoor. One of them even went so far as to file a patent on using the idea to implement key escrow for SSL/TLS connections. (It doesn’t get more passive aggressive than that.)

In response to the discovery of such an obvious flaw, the ANSI X9 committee immediately stopped recommending the NSA’s points — and relegated them to be simply an option, one to be used by the niche set of government users who required them.

I’m only kidding! Actually the committee did no such thing.

Instead, at the NSA’s urging, the ANSI committee retained the original NSA points as the recommended parameters for the standard. It then added an optional procedure for generating alternative points. When NIST later adopted the generator in its SP800-90A standard, it mirrored the ANSI decision. But even worse, NIST didn’t even bother to publish the alternative point generation algorithm. To actually implement it, you’d need to go buy the (expensive) non-public-domain ANSI standard and figure out how to implement it yourself.

This is, to paraphrase Douglas Adams, the standards committee equivalent of putting the details in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.

To the best of our knowledge, nobody has ever used ANSI’s alternative generation procedure in a single one of the many implementations of Dual EC DRBG in commercial software.  It’s not even clear how you could have used that procedure in a FIPS-certified product, since the FIPS evaluation process (conducted by CMVP) still requires you to test against the NSA-generated points.

3: The trapdoor concerns were openly studied by ANSI X9F1, NIST, and by the public in 2007.

This statement has the benefit of being literally true, while also being pretty damned misleading.

It is true that in 2007 — after Dual EC had been standardized — two Microsoft researchers, Dan Shumow and Niels Ferguson, openly raised the alarm about Dual EC. The problem here is that the flaws in Dual EC were not first discovered in 2007. They were discovered much earlier in the standardization process, and nobody ever heard about them.

As I noted above, the ANSI X9 committee detected the flaws in Dual EC as early as 2004, and in close consultation with NSA agreed to address them — in a manner that was highly beneficial to the NSA. But perhaps that’s understandable, given that the committee was anything but ‘open’.

In fact, this is an important aspect of the controversy that even NIST has criticized. The standardization of these algorithms was conducted through ANSI. And the closed ANSI committee consisted of representatives from a few select companies, NIST and the NSA. No public notice was given of the potential vulnerabilities discovered in the RNG. Moreover, a patent application that might have shone light on the backdoor was mired in NSA pre-publication review for over two years.

This timeline issue might seem academic, but bear this in mind: we now know that RSA Security began using the Dual EC DRBG random number generator in BSAFE — as the default, I remind you — way back in 2004. That means for three years this generator was widely deployed, yet serious concerns were not communicated to the public.

To state that the trapdoor concerns were ‘openly’ studied in 2007 is absolutely true. It’s just completely irrelevant.

In conclusion

I’m not a mathematician, but like anyone who works in a mathematical area, I find there are aspects of the discipline that I love. For me it’s the precision of mathematical statements, and the fact that the truth or falsity of a statement can — ideally — be evaluated from the statement itself, without resorting to differing opinions or understandings of the context.

While Dr. Wertheimer’s letter is hardly a mathematical work, it troubles me to see such confusing statements in a publication of the AMS. As a record of history, Dr. Wertheimer’s letter leaves much to be desired, and could easily lead people to the wrong understanding.

Given the stakes, we deserve a more exact accounting of what happened with Dual EC DRBG. I hope someday we’ll see that.

On the new Snowden documents

If you don’t follow NSA news obsessively, you might have missed yesterday’s massive Snowden document dump from Der Spiegel. The documents provide a great deal of insight into how the NSA breaks our cryptographic systems. I was very lightly involved in looking at some of this material, so I’m glad to see that it’s been published.

Unfortunately with so much material, it can be a bit hard to separate the signal from the noise. In this post I’m going to try to do that a little bit — point out the bits that I think are interesting, the parts that are old news, and the things we should keep an eye on.

Background

Those who read this blog will know that I’ve been wondering for a long time how NSA works its way around our encryption. This isn’t an academic matter, since it affects just about everyone who uses technology today.

What we’ve learned since 2013 is that NSA and its partners hoover up vast amounts of Internet traffic from fiber links around the world. Most of this data is plaintext and therefore easy to intercept. But at least some of it is encrypted — typically protected by protocols such as SSL/TLS or IPsec.

Conventional wisdom pre-Snowden told us that the increasing use of encryption ought to have shut the agencies out of this data trove. Yet the documents we’ve seen so far indicate just the opposite. Instead, the NSA and GCHQ have somehow been harvesting massive amounts of SSL/TLS and IPSEC traffic, and appear to be making inroads into other technologies such as Tor as well.

How are they doing this? To repeat an old observation, there are basically three ways to crack an encrypted connection:

  1. Go after the mathematics. This is expensive and unlikely to work well against modern encryption algorithms (with a few exceptions). The leaked documents give very little evidence of such mathematical breaks — though a bit more on this below.
  2. Go after the implementation. The new documents confirm a previously-reported and aggressive effort to undermine commercial cryptographic implementations. They also provide context for how important this type of sabotage is to the NSA.
  3. Steal the keys. Of course, the easiest way to attack any cryptosystem is simply to steal the keys. Yesterday we received a bit more evidence that this is happening.
I can’t possibly spend time on everything that’s covered by these documents — you should go read them yourself — so below I’m just going to focus on the highlights.

Not so Good Will Hunting

First, the disappointing part. The NSA may be the largest employer of cryptologic mathematicians in the United States, but — if the new story is any indication — those guys really aren’t pulling their weight.

In fact, the only significant piece of cryptanalytic news in the entire stack is a 2008 undergraduate research project looking at AES. Sadly, this is about as unexciting as it sounds — in fact it appears to be nothing more than a summer project by a visiting student. More interesting is the context it gives around the NSA’s efforts to break block ciphers such as AES, including the NSA’s view of the difficulty of such cryptanalysis, and confirmation that NSA has some ‘in-house techniques’.

Additionally, the documents include significant evidence that NSA has difficulty decrypting certain types of traffic, including Truecrypt, PGP/GPG, Tor and ZRTP from implementations such as RedPhone. Since these protocols share many of the same underlying cryptographic algorithms — RSA, Diffie-Hellman, ECDH and AES — some are presenting this as evidence that those primitives are cryptographically strong.

As with the AES note above, this ‘good news’ should also be taken with a grain of salt. With a small number of exceptions, it seems increasingly obvious that the Snowden documents are geared towards NSA’s analysts and operations staff. In fact, many of the systems actually seem aimed at protecting knowledge of NSA’s cryptanalytic capabilities from NSA’s own operational staff (and other Five Eyes partners). As an analyst, it’s quite possible you’ll never learn why a given intercept was successfully decrypted.

 To put this a bit more succinctly: the lack of cryptanalytic red meat in these documents may not truly be representative of the NSA’s capabilities. It may simply be an artifact of Edward Snowden’s clearances at the time he left the NSA.

Tor

One of the most surprising aspects of the Snowden documents — to those of us in the security research community anyway — is the NSA’s relative ineptitude when it comes to de-anonymizing users of the Tor anonymous communications network.

The reason for our surprise is twofold. First, Tor was never really designed to stand up against a global passive adversary — that is, an attacker who taps a huge number of communications links. If there’s one thing we’ve learned from the Snowden leaks, it’s that the NSA (plus GCHQ) is the very definition of the term. In theory at least, Tor should be a relatively easy target for the agency.

The real surprise, though, is that despite this huge signals intelligence advantage, the NSA has barely even tested their ability to de-anonymize users. In fact, this leak provides the first concrete evidence that NSA is experimenting with traffic confirmation attacks to find the source of Tor connections. Even more surprising, their techniques are relatively naive, even when compared to what’s going on in the ‘research’ community.

This doesn’t mean you should view Tor as secure against the NSA. It seems very obvious that the agency has identified Tor as a high-profile target, and we know they have the resources to make much more headway against the network. The real surprise is that they haven’t tried harder. Maybe they’re trying now.

SSL/TLS and IPSEC

A few months ago I wrote a long post speculating about how the NSA breaks SSL/TLS, mainly because it’s increasingly clear that the NSA does break these protocols, and at relatively large scale.

The new documents don’t tell us much we didn’t already know, but they do confirm the basic outlines of the attack. The first portion requires endpoints around the world that are capable of performing the raw decryption of SSL/TLS sessions provided they know the session keys. The second is a separate infrastructure located on US soil that can recover those session keys when needed.

All of the real magic happens within the key recovery infrastructure. These documents provide the first evidence that a major attack strategy for NSA/GCHQ involves key databases containing the private keys for major sites. For the RSA key exchange ciphersuites of TLS, a single private key is sufficient to recover vast amounts of session traffic — in real time or even after the fact.
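
For the RSA key-exchange suites the reason one key unlocks so much is simple: the client encrypts the premaster secret directly under the server’s long-term RSA key, so anyone who records the handshake and later obtains that key can recover the session. Here’s a stripped-down sketch of just that step — the real handshake derives the session keys from the premaster plus nonces, which I’m omitting.

```python
# Stripped-down view of why one stolen RSA key unlocks recorded TLS-RSA
# sessions: the client sends the premaster secret encrypted under the
# server's long-term key (PKCS#1 v1.5), so a passive recording plus the
# private key is enough to recover it later. Requires 'cryptography'.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for the server's long-term certificate key.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Client side of an RSA key exchange: pick a premaster secret and encrypt
# it to the server's public key. This is what a passive wiretap records.
premaster = b"\x03\x03" + os.urandom(46)          # 48-byte premaster, TLS-style
recorded = server_key.public_key().encrypt(premaster, padding.PKCS1v15())

# Later, an agency that has obtained the private key (by theft, compulsion,
# or a bug like Heartbleed) decrypts the recorded handshake at leisure:
recovered = server_key.decrypt(recorded, padding.PKCS1v15())
assert recovered == premaster
# The session keys -- and the plaintext -- then follow deterministically
# from values that were sent in the clear. No forward secrecy here.
```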

The interesting question is how the NSA gets those private keys. The easiest answer may be the least technical. A different Snowden leak gives some reason to believe that the NSA may have relationships with employees at specific named U.S. entities, and may even operate personnel “under cover”. This would certainly be one way to build a key database.

 

But even without the James Bond aspect of this, there’s every reason to believe that NSA has other means to exfiltrate RSA keys from operators. During the period in question, we know of at least one vulnerability (Heartbleed) that could have been used to extract private keys from software TLS implementations. There are still other, unreported vulnerabilities that could be used today.

 Pretty much everything I said about SSL/TLS also applies to VPN protocols, with the additional detail that many VPNs use broken protocols and relatively poorly-secured pre-shared secrets that can in some cases be brute-forced. The NSA seems positively gleeful about this.

Open Source packages: Redphone, Truecrypt, PGP and OTR

The documents provide at least circumstantial evidence that some open source encryption technologies may thwart NSA surveillance. These include Truecrypt, ZRTP implementations such as RedPhone, PGP implementations, and Off the Record messaging. These packages have a few commonalities:

  1. They’re all open source, and relatively well studied by researchers.
  2. They’re not used at terribly wide scale (as compared to, e.g., SSL or VPNs).
  3. They all work on an end-to-end basis and don’t involve service providers, software distributors, or other infrastructure that could be corrupted or attacked.

What’s at least as interesting is which packages are not included on this list. Major corporate encryption protocols such as iMessage make no appearance in these documents, despite the fact that they ostensibly provide end-to-end encryption. This may be nothing. But given all we know about NSA’s access to providers, this is definitely worrying.

A note on the ethics of the leak

Before I finish, it’s worth addressing one major issue with this reporting: are we, as citizens, entitled to this information? Would we be safer keeping it all under wraps? And is this all ‘activist nonsense’?

This story, more than some others, skates close to a line. I think it’s worth talking about why this information is important.

To sum up a complicated issue, we live in a world where targeted surveillance is probably necessary and inevitable. The evidence so far indicates that NSA is very good at this kind of work, despite some notable failures in actually executing on the intelligence it produces.

Unfortunately, the documents released so far also show that a great deal of NSA/GCHQ surveillance is not targeted at all. Vast amounts of data are scooped up indiscriminately, in the hope that some of it will someday prove useful. Worse, the NSA has decided that bulk surveillance justifies its efforts to undermine many of the security technologies that protect our own information systems. The President’s own hand-picked review council has strongly recommended this practice be stopped, but their advice has — to all appearances — been disregarded. These are matters that are worthy of debate, but this debate hasn’t happened.

Unfortunately, if we can’t enact changes to fix these problems, technology is probably about all that’s left. Over the next few years encryption technologies are going to be widely deployed, not only by individuals but also by corporations desperately trying to reassure overseas customers who doubt the integrity of US technology.

In that world, it’s important to know what works and doesn’t work. Insofar as this story tells us that, it makes us all better off.