## Friday, September 20, 2013

### RSA warns developers not to use RSA products

In today's news of the weird, RSA (a division of EMC) has recommended that developers desist from using the (allegedly) 'backdoored' Dual_EC_DRBG random number generator -- which happens to be the default in RSA's BSafe cryptographic toolkit. Youch.

In case you're missing the story here, Dual_EC_DRBG (which I wrote about yesterday) is the random number generator voted most likely to be backdoored by the NSA. The punchline is that -- despite many valid concerns about this generator -- RSA went ahead and made it the default generator used for all cryptography in its flagship cryptography library. The implications for RSA and RSA-based products are staggering. In a modestly bad -- but by no means worst -- case, the NSA may be able to intercept SSL/TLS connections made by products implemented with BSafe.

So why would RSA pick Dual_EC as the default? You got me. Not only is Dual_EC hilariously slow -- which has real performance implications -- it was shown to be a just plain bad random number generator all the way back in 2006. By 2007, when Shumow and Ferguson raised the possibility of a backdoor in the specification, no sensible cryptographer would go near the thing.

And the killer is that RSA employs a number of highly distinguished cryptographers! It's unlikely that they'd all miss the news about Dual_EC.

We can only speculate about the past. But here in the present we get to watch RSA's CTO Sam Curry publicly defend RSA's choices. I sort of feel bad for the guy. But let's make fun of him anyway.

I'll take his statement line by line (Sam is the boldface):

> **"Plenty of other crypto functions (PBKDF2, bcrypt, scrypt) will iterate a hash 1000 times specifically to make it slower."**

Password hash functions are built deliberately slow to frustrate dictionary attacks. Making a random number generator slow is just dumb.

> **"At the time, elliptic curves were in vogue"**

Say what?

> **"and hash-based RNG was under scrutiny."**

Nonsense. A single obsolete hash-based generator (FIPS 186) was under scrutiny -- and fixed. The NIST SP800-90 draft in which Dual_EC appeared ALSO provided three perfectly nice non-backdoored generators: two based on hash functions and one based on AES. BSafe even implements some of them. Sam, this statement is just plain misleading.

> **"The hope was that elliptic curve techniques -- based as they are on number theory -- would not suffer many of the same weaknesses as other techniques (like the FIPS 186 SHA-1 generator) that were seen as negative"**

Dual-EC suffers exactly the same sort of weaknesses as FIPS 186. Unlike the alternative generators in NIST SP800-90, it has a significant bias and really should not be used in production systems. RSA certainly had access to this information after the analyses were published in 2006.

> **"and Dual_EC_DRBG was an accepted and publicly scrutinized standard."**

And every bit of public scrutiny said the same thing: this thing is broken! Grab your children and run away!

> **"SP800-90 (which defines Dual EC DRBG) requires new features like continuous testing of the output, mandatory re-seeding,"**

The exact same can be said for the hash-based and AES-based alternative generators you DIDN'T choose from SP800-90.

> **"optional prediction resistance and the ability to configure for different strengths."**

So did you take advantage of any of these options as part of the BSafe defaults? Why not? How about the very simple mitigations that NIST added to SP800-90A as a means to remove concerns that the generator might have a backdoor? Anyone?
There's not too much else to say here. I guess the best way to put it is: this is all part of the process. First you find the disease. Then you see if you can cure it.

## Wednesday, September 18, 2013

### The Many Flaws of Dual_EC_DRBG

*The Dual_EC_DRBG generator from NIST SP800-90A.*
Update 9/19: RSA warns developers not to use the default Dual_EC_DRBG generator in BSAFE. Oh lord.

As a technical follow up to my previous post about the NSA's war on crypto, I wanted to make a few specific points about standards. In particular I wanted to address the allegation that NSA inserted a backdoor into the Dual-EC pseudorandom number generator.

For those not following the story, Dual-EC is a pseudorandom number generator proposed by NIST for international use back in 2006. Just a few months later, Shumow and Ferguson made cryptographic history by pointing out that there might be an NSA backdoor in the algorithm. This possibility -- fairly remarkable for an algorithm of this type -- looked bad and smelled worse. If true, it spelled almost certain doom for anyone relying on Dual-EC to keep their system safe from spying eyes.

Now I should point out that much of this is ancient history. What is news today is the recent leak of classified documents that points a very emphatic finger towards Dual_EC, or rather, to an unnamed '2006 NIST standard'. The evidence that Dual-EC is this standard has now become so hard to ignore that NIST recently took the unprecedented step of warning implementers to avoid it altogether.

Better late than never.

In this post I'm going to try to explain the curious story of Dual-EC. While I'll do my best to keep this discussion at a high and non-mathematical level, be forewarned that I'm probably going to fail at least at a couple of points. If you're not in the mood for all that, here's a short summary:
• In 2005-2006 NIST and NSA released a pseudorandom number generator based on elliptic curve cryptography. They released this standard -- with very little explanation -- both in the US and abroad.
• This RNG has some serious problems simply functioning as a good RNG. The presence of such obvious bugs was mysterious to cryptographers.
• In 2007 a pair of Microsoft researchers pointed out that these vulnerabilities combined to produce a perfect storm, which -- together with some knowledge that only NIST/NSA might have -- opened a perfect backdoor into the random number generator itself.
• This backdoor may allow the NSA to break nearly any cryptographic system that uses it.
If you're still with me, strap in. Here goes the long version.

Dual-EC

For a good summary on the history of Dual-EC-DRBG, see this 2007 post by Bruce Schneier. Here I'll just give the highlights.

Back in 2004-5, NIST decided to address a longstanding weakness of the FIPS standards, namely, the limited number of approved pseudorandom bit generator algorithms (PRGs, or 'DRBGs' in NIST parlance) available to implementers. This was actually a bit of an issue for FIPS developers, since the existing random number generators had some known design weaknesses.*

NIST's answer to this problem was Special Publication 800-90, parts of which were later wrapped up into the international standard ISO 18031. The NIST pub added four new generators to the FIPS canon. None of these algorithms is a true random number generator in the sense that they collect physical entropy. Instead, what they do is process the (short) output of a true random number generator -- like the one in Linux -- conditioning and stretching this 'seed' into a large number of random-looking bits you can use to get things done.** This is particularly important for FIPS-certified cryptographic modules, since the FIPS 140-2 standards typically require you to use a DRBG as a kind of 'post-processing' -- even when you have a decent hardware generator.
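The 'conditioning and stretching' job of a DRBG can be sketched in a few lines of Python. To be clear, this is a toy illustration of the concept only -- it is not any of the actual SP800-90 constructions, and the function name is mine:

```python
import hashlib

def toy_drbg(seed: bytes, n_bytes: int) -> bytes:
    """Toy sketch: stretch a short high-entropy seed into a long
    pseudorandom stream by hashing seed || counter. Real SP800-90
    DRBGs add reseeding, backtracking resistance and health tests."""
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n_bytes]

# 16 bytes from a 'true' entropy source, stretched to 1 KB of output
stream = toy_drbg(b"0123456789abcdef", 1024)
```

The point is just that a small seed goes in and an arbitrarily long random-looking stream comes out -- which is why the quality of the stretching algorithm matters so much.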

The first three SP800-90 proposals used standard symmetric components like hash functions and block ciphers. Dual_EC_DRBG was the odd one out, since it employed mathematics of the sort typically used to construct public-key cryptosystems. This had some immediate consequences for the generator: Dual-EC is slow in a way that its cousins aren't. Up to a thousand times slower.

Now before you panic about this, the inefficiency of Dual_EC is not necessarily one of its flaws! Indeed, the inclusion of an algebraic generator actually makes a certain amount of sense. The academic literature includes a distinguished history of provably secure PRGs based on number theoretic assumptions, and it certainly didn't hurt to consider one such construction for standardization. Most developers would probably use the faster symmetric alternatives, but perhaps a small number would prefer the added confidence of a provably-secure construction.

Unfortunately, here is where NIST ran into their first problem with Dual_EC.
Flaw #1: Dual-EC has no security proof.
Let me spell this out as clearly as I can. In the course of proposing this complex and slow new PRG where the only damn reason you'd ever use the thing is for its security reduction, NIST forgot to provide one. This is like selling someone a Mercedes and forgetting to attach the hood ornament.

I'd like to say this fact alone should have damned Dual_EC, but sadly this is par for the course for NIST -- which treats security proofs like those cool Japanese cookies you can never find. In other words, a fancy, exotic luxury. Indeed, NIST has a nasty habit of dumping proof-writing work onto outside academics, often after the standard has been finalized and implemented in products everywhere.

So when NIST put forward its first draft of SP800-90 in 2005, academic cryptographers were left to analyze it from scratch. Which, to their great credit, they were quite successful at.

The first thing reviewers noticed is that Dual-EC follows a known design paradigm -- it's a weird variant of an elliptic curve linear congruential generator. However they also noticed that NIST had made some odd rookie mistakes.

Now here we will have to get slightly wonky -- though I will keep mathematics to a minimum. (I promise it will all come together in the end!) Constructions like Dual-EC have basically two stages:
1. A stage that generates a series of pseudorandom elliptic curve points. Just like on the graph at right, an elliptic curve point is a pair (x, y) that satisfies an elliptic curve equation. In general, both x and y are elements of a finite field, which for our purposes means they're just large integers.***

The main operation of the PRNG is to apply mathematical operations to points on the elliptic curve, in order to generate new points that are pseudorandom -- i.e., are indistinguishable from random points in some subgroup.

And the good news is that Dual-EC seems to do this first part beautifully! In fact Brown and Gjøsteen even proved that this part of the generator is sound provided that the Decisional Diffie-Hellman problem is hard in the specific elliptic curve subgroup. This is a well studied hardness assumption so we can probably feel pretty confident in this proof.

2. Extract pseudorandom bits from the generated EC points. While the points generated by Dual-EC may be pseudorandom, that doesn't mean the specific (x, y) integer pairs are random bitstrings. For one thing, 'x' and 'y' are not really bitstrings at all, they're integers less than some prime number. Most pairs don't satisfy the curve equation or are not in the right subgroup. Hence you can't just output the raw x or y values and expect them to make good pseudorandom bits.

Thus the second phase of the generator is to 'extract' some (but not all) of the bits from the EC points. Traditional literature designs do all sorts of things here -- including hashing the point or dropping up to half of the bits of the x-coordinate. Dual-EC does something much simpler: it grabs the x coordinate, throws away the most significant 16-18 bits, and outputs the rest.
In 2006, first Gjøsteen and later Schoenmakers and Sidorenko took a close look at Dual-EC and independently came up with a surprising result:
Flaw #2: Dual-EC outputs too many bits.
Unlike those previous EC PRGs which output anywhere from 2/3 to half of the bits from the x-coordinate, Dual-EC outputs nearly the entire thing.

This is good for efficiency, but unfortunately it also gives Dual-EC a bias. Due to some quirks in the mathematics of the field operations, an attacker can now predict the next bits of Dual-EC output with a fairly small -- but non-trivial -- success probability, in the range of 0.1%. While this number may seem small to non-cryptographers, it's basically a hanging offense for a cryptographic random number generator, where the probability of predicting a future bit should be many orders of magnitude lower.

What's just plain baffling is that this flaw ever saw the light of day. After all, the specification was developed by bright people at NIST -- in collaboration with NSA. Either of those groups should easily have discovered a bug like this, especially since this issue had been previously studied. Indeed, within a few months of public release, two separate groups of academic cryptographers found it, and were able to implement an attack using standard PC equipment.

So in summary, the bias is mysterious and it seems to be very much an 'own-goal' on the NSA's part. Why in the world would they release so much information from each EC point? It's hard to say, but a bit more investigation reveals some interesting consequences:
Flaw #3: You can guess the original EC point from looking at the output bits.
By itself this isn't really a flaw, but will turn out to be interesting in just a minute.

Since Dual-EC outputs so many bits from the x-coordinate of each point -- all but the most significant 16 bits -- it's relatively easy to guess the original source point by simply brute-forcing the missing 16 bits and solving the elliptic curve equation for y. (This is all high-school algebra, I swear!)

While this process probably won't uniquely identify the original (x, y), it'll give you a modestly sized list of candidates. Moreover with only 16 missing bits the search can be done quickly even on a desktop computer. Had Dual_EC thrown away more bits of the x-coordinate, this search would not have been feasible at all.
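Here is a toy sketch of that candidate search. Everything below uses a tiny textbook curve (y² = x³ + x + 1 mod 23) and drops 2 bits rather than 16 -- the parameters are mine, not the real Dual-EC ones -- but the brute-force has exactly the same shape:

```python
# Toy parameters: NOT the real Dual-EC curve or truncation amount.
P_MOD, A, B = 23, 1, 1
DROP = 2                                 # bits discarded from x
LOW = P_MOD.bit_length() - DROP          # surviving low bits (here 3)

def candidates(output: int):
    """Brute-force the missing high bits of x and solve for y."""
    found = []
    for hi in range(1 << DROP):
        x = (hi << LOW) | output
        if x >= P_MOD:
            continue
        rhs = (x**3 + A * x + B) % P_MOD
        y = pow(rhs, (P_MOD + 1) // 4, P_MOD)   # sqrt, valid since p % 4 == 3
        if (y * y) % P_MOD == rhs:              # x really is on the curve
            found.append((x, y))
            if y:
                found.append((x, P_MOD - y))
    return found

# the point (9, 7) is on the curve; its truncated output is 9 & 0b111 = 1
cands = candidates(1)
```

Just as in the real attack, the search returns a modestly sized list of candidate points containing the true one -- and shrinking the truncation (a larger DROP in this sketch, more dropped bits in the standard) is what would have made this search infeasible.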

So what does this mean? In general, recovering the EC point shouldn't actually be a huge problem. In theory it could lead to a weakness -- say predicting future outputs -- but in a proper design you would still have to solve a discrete logarithm instance for each and every point in order to predict the next bytes output by the generator.

And here is where things get interesting.
Flaw #4: If you know a certain property about the Dual_EC parameters, and can recover an output point, you can predict all subsequent outputs of the generator.
Did I tell you this would get interesting in a minute? I totally did.

The next piece of our puzzle was discovered by Microsoft researchers Dan Shumow and Niels Ferguson, and announced at the CRYPTO 2007 rump session. I think this result can best be described via the totally intuitive diagram below. (Don't worry, I'll explain it!)
*Annotated diagram from Shumow-Ferguson presentation (CRYPTO 2007). Colorful elements were added by yours truly. Thick green arrows mean 'this part is easy to reverse'. Thick red arrows should mean the opposite. Unless you're the NSA.*
The Dual-EC generator consists of two stages: a portion that generates the output bits (right) and a part that updates the internal state (left).

Starting from the "r_i" value (circled, center) and heading right, the bit generation part first computes the output point using the function "r_i * Q" -- where Q is an elliptic curve point defined in the parameters -- then truncates 16 bits off of its x-coordinate to get the raw generator output. The "*" operator here describes elliptic curve point multiplication, which is a complex operation that should be relatively hard to invert.

Note that everything after the point multiplication should be easy to invert and recover from the output, as we discussed in the previous section.

Every time the generator produces one block of output bits, it also updates its internal state. This is designed to prevent attacks where someone compromises the internal values of a working generator, then uses this value to wind the generator backwards and guess past outputs. Starting again from the circled "r_i" value, the generator now heads upwards and computes the point "r_i * P" where P is a different elliptic curve point also described in the parameters. It then does some other stuff.

The theory here is that P and Q should be random points, and thus it should be difficult to find "r_i * P" used for state update even if you know the output point "r_i * Q" -- which I stress you do know, because it's easy to find. Going from one point to the other requires you to know a relationship between P and Q, which you shouldn't actually know since they're supposed to be random values. The difficulty of this is indicated by the thick red arrow.
*Looks totally kosher to me. (Source: NIST SP800-90A)*
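The two stages can be modeled end to end on a toy curve. Everything here is illustrative -- a tiny prime, points of my choosing, 2 dropped bits instead of 16 -- the real generator runs over NIST P-256 with the Appendix A constants, but the structure mirrors the diagram: output from s*Q on the right, state update from s*P on the left.

```python
P_MOD, A = 23, 1            # toy curve y^2 = x^3 + x + 1 (mod 23)

def inv(v):
    return pow(v, P_MOD - 2, P_MOD)      # modular inverse via Fermat

def ec_add(p1, p2):
    """Affine point addition; None is the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

P = (0, 1)                  # toy stand-ins for the Appendix A constants
Q = (1, 7)

def dual_ec_step(s, drop=2):
    """One iteration: new state from x(s*P), output bits from x(s*Q)."""
    s_next = ec_mul(s, P)[0]                  # state update (left side)
    r = ec_mul(s, Q)[0]                       # output point (right side)
    out = r & ((1 << (P_MOD.bit_length() - drop)) - 1)   # truncate
    return s_next, out

state, bits = dual_ec_step(6)
```

Note that the secret state r_i is used twice per iteration -- once against P and once against Q -- which is precisely the hinge the next flaw swings on.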

There is, however, one tiny little exception to this rule. What if P and Q aren't entirely random values? What if you chose them yourself specifically so you'd know the mathematical relationship between the two points?

In this case it turns out you can easily compute the next PRG state after recovering a single output point (from 32 bytes of RNG output). This means you can follow the equations through and predict the next output. And the next output after that. And on forever and forever.****
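Here is the whole trapdoor as a self-contained toy, again on a tiny textbook curve with parameters of my choosing (the real thing runs over P-256 with the Appendix A points, but the algebra is identical). The 'designer' picks Q = d*P and keeps d secret; the trapdoor value e = d⁻¹ mod n then turns one recovered output point into the generator's next internal state:

```python
P_MOD, A = 23, 1            # toy curve y^2 = x^3 + x + 1 (mod 23)

def inv(v):
    return pow(v, P_MOD - 2, P_MOD)

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                     # point at infinity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

P = (0, 1)

# order of P: smallest n with n*P = infinity (fine to brute force here)
n, acc = 1, P
while acc is not None:
    acc = ec_add(acc, P)
    n += 1

d = 5                        # the designer's secret: Q = d*P
Q = ec_mul(d, P)
e = pow(d, -1, n)            # the trapdoor: e*Q = P

s = 6                        # the generator's secret internal state
honest_next_state = ec_mul(s, P)[0]

# The attacker recovers the output point R = s*Q from the output bits
# (Flaw #3), then applies the trapdoor: e*R = e*s*Q = s*(e*Q) = s*P.
R = ec_mul(s, Q)
predicted_state = ec_mul(e, R)[0]
```

With the real parameters, d (if it exists at all) is known only to whoever generated Q -- which is exactly why nobody outside the NSA can prove or disprove the backdoor.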

This is a huge deal in the case of SSL/TLS, for example. If I use the Dual-EC PRG to generate the "Client Random" nonce transmitted in the beginning of an SSL connection, then the NSA will be able to predict the "Pre-Master" secret that I'm going to generate during the RSA handshake. Given this information the connection is now a cleartext read. This is not good.

So now you should all be asking the most important question of all: how the hell did the NSA generate the specific P and Q values recommended in Appendix A of Dual-EC-DRBG? And do they know the relationship that allows them to run this attack? All of which brings us to:
Flaw #5: Nobody knows where the recommended parameters came from.
And if you think that's problematic, welcome to the club.

But why? And where is Dual-EC used?

The ten million dollar question of Dual-EC is why the NSA would stick such an obviously backdoored algorithm into an important standard. Keep in mind that cryptographers found the major (bias) vulnerabilities almost immediately after Dual-EC shipped. The possibility of a 'backdoor' was announced in summer 2007. Who would still use it?

A few people have gone through the list of CMVP-evaluated products and found that the answer is: quite a few people would. Most certify Dual-EC simply because it's implemented in OpenSSL-FIPS, and they happen to use that library. But at least one provider certifies it exclusively. Yuck.

*Hardcoded constants from the OpenSSL-FIPS implementation of Dual_EC_DRBG. Recognize 'em?*
It's worth keeping in mind that NIST standards carry a lot of weight -- even those that might have a backdoor. Folks who aren't keeping up on the latest crypto results could still innocently use the thing, either by accident or (less innocently) because the government asked them to. Even if they don't use it, they might include the code in their product -- say through the inclusion of OpenSSL-FIPS or MS Crypto API -- which means it's just a function call away from being surreptitiously activated.

Which is why people need to stop including Dual-EC immediately. We have no idea what it's for, but it needs to go away. Now.

The last point I want to make is that the vulnerabilities in Dual-EC have precisely nothing to do with the specifics of the NIST standard elliptic curves themselves. The 'back door' in Dual-EC comes exclusively from the relationship between P and Q -- the latter of which is published only in the Dual-EC specification. The attack can work even if you don't use the NIST pseudorandom curves.

However, the revelations about NIST and the NSA certainly make it worth our time to ask whether these curves themselves are somehow weak. The best answer to that question is: we don't know. Others have observed that NIST's process for generating the curves leaves a lot to be desired. But including some kind of hypothetical backdoor would be a horrible, horrific idea -- one that would almost certainly blow back at us.

You'd think people with common sense would realize this. Unfortunately we can't count on that anymore.

Thanks to Tanja Lange for her assistance proofing this post. Any errors in the text are entirely mine.

Notes:

* My recollection of this period is hazy, but prior to SP800-90 the two most common FIPS DRBGs in production were (1) the SHA1-based DSA generator of FIPS 186-2 and (2) ANSI X9.31. The DSA generator was a special-purpose generator based on SHA1, and was really designed just for that purpose. ANSI X9.31 used block ciphers, but suffered from some more subtle weaknesses it retained from the earlier X9.17 generator. These were pointed out by Kelsey, Schneier, Wagner and Hall.

** This is actually a requirement of the FIPS 140-2 specification. Since FIPS does not approve any true random number generators, it instead mandates that you run your true RNG output through a DRBG (PRNG) first. The only exception is if your true RNG has been approved 'for classified use'.

*** Specifically, x and y are integers in the range 0 to p-1 where p is a large prime number. A point is a pair (x, y) such that $y^2 = x^3 + ax + b$ mod p. The values a and b are defined as part of the curve parameters.

**** The process of predicting future outputs involves a few guesses, since you don't know the exact output point (and had to guess at the missing 16 bits), but you can easily reduce this to a small set of candidates -- then it's just a question of looking at a few more bits of RNG output until you guess the right one.

## Tuesday, September 10, 2013

### A note on the NSA, the future and fixing mistakes

Readers of this blog will know this has been an interesting couple of days for me. I have very mixed feelings about all this. On the one hand, it's brought this blog a handful of new readers who might not have discovered it otherwise. On the other hand, it's made me a part of the story in a way I don't deserve to be.

After speaking with my colleagues and (most importantly) with my wife, I thought I might use the last few seconds of my inadvertent notoriety to make some highly non-technical points about the recent NSA revelations and my decision to blog about them.

I believe my first point should be self-evident: the NSA has made a number of terrible mistakes. These range from policy decisions to technical direction, to matters of their own internal security. There may have been a time when these mistakes could have been mitigated or avoided, but that time has passed. Personally I believe it passed even before Edward Snowden made his first contact with the press. But the disclosures of classified documents have set those decisions in stone.

Given these mistakes, we're now faced with the job of cleaning up the mess. To that end there are two sets of questions: public policy questions -- who should the NSA be spying on and how far should they be allowed to go in pursuit of that goal? And a second set of more technical questions: how do we repair the technological blowback from these decisions?

There are many bright people -- quite a few in Congress -- who are tending to the first debate. While I have my opinions about this, they're (mostly) not the subject of this blog. Even if they were, I would probably be the wrong person to discuss them.

So my concern is the technical question. And I stress that while I label this 'technical', it isn't a question of equations and logic gates. The tech sector is one of the fastest growing and most innovative areas of the US economy. I believe the NSA's actions have caused long-term damage to our credibility, in a manner that threatens our economic viability as well as, ironically, our national security.

The interesting question to me -- as an American and as someone who cares about the integrity of speech -- is how we restore faith in our technology. I don't have the answers to this question right now. Unfortunately this is a long-term problem that will consume the output of researchers and technologists far more talented than I. I only hope to be involved in the process.

I know there are people at NSA who must be cursing Edward Snowden's name and wishing we'd all stop talking about this. Too late. I hope that they understand the game we're playing now. Their interests as well as mine now depend on repairing the damage. Downplaying the extent of the damage, or trying to restrict access to (formerly) classified documents, does nobody any good.

It's time to start fixing things.

## Friday, September 6, 2013

### On the NSA

Let me tell you the story of my tiny brush with the biggest crypto story of the year.

A few weeks ago I received a call from a reporter at ProPublica, asking me background questions about encryption. Right off the bat I knew this was going to be an odd conversation, since this gentleman seemed convinced that the NSA had vast capabilities to defeat encryption. And not in a 'hey, d'ya think the NSA has vast capabilities to defeat encryption?' kind of way. No, he'd already established the defeating. We were just haggling over the details.

Oddness aside it was a fun (if brief) set of conversations, mostly involving hypotheticals. If the NSA could do this, how might they do it? What would the impact be? I admit that at this point one of my biggest concerns was to avoid coming off like a crank. After all, if I got quoted sounding too much like an NSA conspiracy nut, my colleagues would laugh at me. Then I might not get invited to the cool security parties.

All of this is a long way of saying that I was totally unprepared for today's bombshell revelations describing the NSA's efforts to defeat encryption. Not only does the worst possible hypothetical I discussed appear to be true, but it's true on a scale I couldn't even imagine. I'm no longer the crank. I wasn't even close to cranky enough.

And since I never got a chance to see the documents that sourced the NYT/ProPublica story -- and I would give my right arm to see them -- I'm determined to make up for this deficit with sheer speculation. Which is exactly what this blog post will be.

'Bullrun' and 'Cheesy Name'

If you haven't read the ProPublica/NYT or Guardian stories, you probably should. The TL;DR is that the NSA has been doing some very bad things. At a combined cost of \$250 million per year, they include:
1. Tampering with national standards (NIST is specifically mentioned) to promote weak, or otherwise vulnerable cryptography.
2. Influencing standards committees to weaken protocols.
3. Working with hardware and software vendors to weaken encryption and random number generators.
4. Attacking the encryption used by 'the next generation of 4G phones'.
5. Obtaining cleartext access to 'a major internet peer-to-peer voice and text communications system' (Skype?)
6. Identifying and cracking vulnerable keys.
7. Establishing a Human Intelligence division to infiltrate the global telecommunications industry.
8. And worst of all (to me): somehow decrypting SSL connections.
All of these programs go by different code names, but the NSA's decryption program goes by the name 'Bullrun' so that's what I'll use here.

How to break a cryptographic system

There's almost too much here for a short blog post, so I'm going to start with a few general thoughts. Readers of this blog should know that there are basically three ways to break a cryptographic system. In no particular order, they are:
1. Attack the cryptography. This is difficult and unlikely to work against the standard algorithms we use (though there are exceptions like RC4.) However there are many complex protocols in cryptography, and sometimes they are vulnerable.
2. Go after the implementation. Cryptography is almost always implemented in software -- and software is a disaster. Hardware isn't that much better. Unfortunately active software exploits only work if you have a target in mind. If your goal is mass surveillance, you need to build insecurity in from the start. That means working with vendors to add backdoors.
3. Access the human side. Why hack someone's computer if you can get them to give you the key?
Bruce Schneier, who has seen the documents, says that 'math is good', but that 'code has been subverted'. He also says that the NSA is 'cheating'. Which, assuming we can trust these documents, is a huge sigh of relief. But it also means we're seeing a lot of (2) and (3) here.

So which code should we be concerned about? Which hardware?

*SSL Servers by OS type. Source: Netcraft.*
This is probably the most relevant question. If we're talking about commercial encryption code, the lion's share of it uses one of a small number of libraries. The most common of these are probably the Microsoft CryptoAPI (and Microsoft SChannel) along with the OpenSSL library.

Of the libraries above, Microsoft is probably due for the most scrutiny. While Microsoft employs good (and paranoid!) people to vet their algorithms, their ecosystem is obviously deeply closed-source. You can view Microsoft's code (if you sign enough licensing agreements) but you'll never build it yourself. Moreover they have the market share. If any commercial vendor is weakening encryption systems, Microsoft is probably the most likely suspect.

And this is a problem because Microsoft IIS powers around 20% of the web servers on the Internet -- and nearly forty percent of the SSL servers! Moreover, even third-party encryption programs running on Windows often depend on CAPI components, including the random number generator. That makes these programs somewhat dependent on Microsoft's honesty.

Probably the second most likely candidate is OpenSSL. I know it seems like heresy to imply that OpenSSL -- an open source and widely-developed library -- might be vulnerable. But at the same time it powers an enormous amount of secure traffic on the Internet, thanks not only to the dominance of Apache SSL, but also due to the fact that OpenSSL is used everywhere. You only have to glance at the FIPS CMVP validation lists to realize that many 'commercial' encryption products are just thin wrappers around OpenSSL.

Unfortunately while OpenSSL is open source, it periodically coughs up vulnerabilities. Part of this is due to the fact that it's a patchwork nightmare originally developed by a programmer who thought it would be a fun way to learn Bignum division.* Part of it is because crypto is unbelievably complicated. Either way, there are very few people who really understand the whole codebase.

On the hardware side (and while we're throwing out baseless accusations) it would be awfully nice to take another look at the Intel Secure Key integrated random number generators that most Intel processors will be getting shortly. Even if there's no problem, it's going to be an awfully hard job selling these internationally after today's news.

Which standards?

From my point of view this is probably the most interesting and worrying part of today's leak. Software is almost always broken, but standards -- in theory -- get read by everyone. It should be extremely difficult to weaken a standard without someone noticing. And yet the Guardian and NYT stories are extremely specific in their allegations about the NSA weakening standards.

The Guardian specifically calls out the National Institute of Standards and Technology (NIST) for a standard they published in 2006. Cryptographers have always had complicated feelings about NIST, and that's mostly because NIST has a complicated relationship with the NSA.

Here's the problem: the NSA ostensibly has both a defensive and an offensive mission. The defensive mission is pretty simple: it's to make sure US information systems don't get pwned. A substantial portion of that mission is accomplished through fruitful collaboration with NIST, which helps to promote data security standards such as the Federal Information Processing Standards (FIPS) and NIST Special Publications.

I said cryptographers have complicated feelings about NIST, and that's because we all know that the NSA has the power to use NIST for good as well as evil. Up until today there's been no hard proof of malice, despite some occasional glitches -- and despite compelling evidence that at least one NIST cryptographic standard could have contained a backdoor. But now maybe we'll have to re-evaluate that relationship. As utterly crazy as it may seem.

Unfortunately, we're highly dependent on NIST standards, ranging from pseudo-random number generators to hash functions and ciphers, all the way to the specific elliptic curves we use in SSL/TLS. While the possibility of a backdoor in any of these components does seem remote, trust has been violated. It's going to be an absolute nightmare ruling it out.
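Since Dual_EC keeps coming up, it's worth seeing concretely why unexplained constants in a PRNG standard are so dangerous. Below is a toy version of the Dual_EC structure on a tiny textbook curve -- these are not the real P-256 parameters, and the real generator's output truncation is omitted so the attack is exact. Anyone who knows the secret relationship `d` between the two "arbitrary" points can recover the generator's internal state from a single output:

```python
# Toy model of the alleged Dual_EC_DRBG backdoor, on a tiny curve.
# Real Dual_EC uses NIST P-256 and truncates its output; this sketch
# skips truncation so the state recovery is exact. Illustration only.
p, a, b = 23, 1, 1            # curve y^2 = x^3 + x + 1 over F_23 (28 points)

def inv(x):
    return pow(x, p - 2, p)   # modular inverse via Fermat's little theorem

def add(P1, P2):
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None           # point at infinity
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        lam = (y2 - y1) * inv((x2 - x1) % p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P1):
    R = None
    while k:
        if k & 1:
            R = add(R, P1)
        P1 = add(P1, P1)
        k >>= 1
    return R

Q = (0, 1)                    # generator of the full 28-element group
d = 5                         # the trapdoor: whoever chose the points knows d
P = mul(d, Q)                 # published alongside Q as an "arbitrary" constant

def dual_ec_step(s):
    """One round: next state = x(s*P), output = x(next_state*Q)."""
    s_next = mul(s, P)[0]
    return s_next, mul(s_next, Q)[0]

s1, out1 = dual_ec_step(3)    # honest user generates "random" output
s2, out2 = dual_ec_step(s1)

def recover_state(output, d):
    """Given ONE output and the trapdoor d, recover the next state."""
    rhs = (output ** 3 + a * output + b) % p
    y = next(y for y in range(p) if y * y % p == rhs)  # lift x back to a point
    # d*(state*Q) = state*(d*Q) = state*P, whose x-coordinate is the next state
    return mul(d, (output, y))[0]

assert recover_state(out1, d) == s2   # attacker now predicts everything
```

Without knowing `d`, the outputs look as random as the discrete logarithm problem can make them; with `d`, one output is enough to clone the generator.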

Which people?

Probably the biggest concern in all this is the evidence of collaboration between the NSA and unspecified 'telecom providers'. We already know that the major US (and international) telecom carriers routinely assist the NSA in collecting data from fiber-optic cables. But all this data is no good if it's encrypted.

While software compromises and weak standards can help the NSA deal with some of this, by far the easiest way to access encrypted data is simply to ask for -- or steal -- the keys. This goes for something as simple as cellular encryption (protected by a single key database at each carrier) all the way up to SSL/TLS, which is (most commonly) protected with a few relatively short RSA keys.
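To make the key-theft point concrete: in the common RSA key exchange, the client encrypts a premaster secret under the server's long-term RSA key, so anyone who records the traffic and later obtains that one private key can decrypt every recorded session. A toy sketch with textbook (unpadded) RSA and deliberately tiny parameters:

```python
import hashlib

# Toy server RSA key (textbook RSA with tiny primes -- illustration only;
# real TLS keys are 2048+ bits and use padding).
p_, q_ = 61, 53
n = p_ * q_                           # public modulus (3233)
e = 17                                # public exponent
d = pow(e, -1, (p_ - 1) * (q_ - 1))   # PRIVATE key, held by the server

# Client side: pick a premaster secret, send it encrypted to the server.
premaster = 42
wire_ciphertext = pow(premaster, e, n)  # what a passive eavesdropper records

def session_key(secret):
    # Stand-in for the TLS key derivation step.
    return hashlib.sha256(str(secret).encode()).hexdigest()[:16]

# Years later, the eavesdropper obtains the server's private key d:
recovered = pow(wire_ciphertext, d, n)
assert recovered == premaster
assert session_key(recovered) == session_key(premaster)
# Every recorded RSA-key-exchange session falls at once -- no MITM needed.
```

Note that this is entirely passive: no man-in-the-middle, no tampering, just recorded ciphertext plus one stolen key.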

The good and bad news is that, as the nation hosting the largest number of popular online services (like Google, Facebook and Yahoo), the US keeps many of those critical keys right here on its own soil. Simultaneously, the people communicating with those services -- i.e., the 'targets' -- may be foreigners. Or they may be US citizens. Or you may not know who they are until you scoop up and decrypt all of their traffic and mine it for keywords.

Which means there's a circumstantial case that the NSA and GCHQ are either directly accessing Certificate Authority keys** or else actively stealing keys from US providers, possibly (or probably) without executives' knowledge. This only requires a small number of people with physical or electronic access to servers, so it's quite feasible.*** The one reason I would have ruled it out a few days ago is because it seems so obviously immoral if not illegal, and moreover a huge threat to the checks and balances that the NSA allegedly has to satisfy in order to access specific users' data via programs such as PRISM.

To me, the existence of this program is probably the least unexpected piece of all the news today. Somehow it's also the most upsetting.

So what does it all mean?

I honestly wish I knew. Part of me worries that the whole security industry will talk about this for a few days, then we'll all go back to our normal lives without giving it a second thought. I hope we don't, though. Right now there are too many unanswered questions to just let things lie.

The most likely short-term effect is that there's going to be a lot less trust in the security industry. And a whole lot less trust for the US and its software exports. Maybe this is a good thing. We've been saying for years that you can't trust closed code and unsupported standards: now people will have to verify.

Even better, these revelations may also help to spur a whole burst of new research and re-designs of cryptographic software. We've also been saying that even open code like OpenSSL needs more expert eyes. Unfortunately there's been little interest in this, since the clever researchers in our field view these problems as 'solved' and thus somewhat uninteresting.

What we learned today is that they're solved all right. Just not the way we thought.

Notes:

* The original version of this post repeated a story I heard recently (from a credible source!) about Eric Young writing OpenSSL as a way to learn C. In fact he wrote it as a way to learn Bignum division, which is way cooler. Apologies Eric!

** I had omitted the Certificate Authority route from the original post due to an oversight -- thanks to Kenny Paterson for pointing this out -- but I still think this is a less viable attack for passive eavesdropping (that is, eavesdropping that does not involve actively running a man-in-the-middle attack). And it seems that much of the interesting eavesdropping here is passive.

*** The major exception here is Google, which deploys Perfect Forward Secrecy for many of its connections, so key theft would not work here. To deal with this the NSA would have to subvert the software or break the encryption in some other way.
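To illustrate that last footnote: with ephemeral Diffie-Hellman, each session derives its key from secrets that are thrown away afterwards, so there's no long-term decryption key to steal. A toy sketch (illustrative group parameters, not a real TLS group):

```python
import secrets

# Toy ephemeral Diffie-Hellman. p is the Mersenne prime 2^127 - 1 and g = 3,
# chosen only for illustration -- real TLS uses vetted 2048-bit groups or
# elliptic curves.
p = 2 ** 127 - 1
g = 3

def new_session():
    a = secrets.randbelow(p - 2) + 1      # client's ephemeral secret
    b = secrets.randbelow(p - 2) + 1      # server's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)     # the only values on the wire
    shared_client = pow(B, a, p)
    shared_server = pow(A, b, p)
    assert shared_client == shared_server
    # a and b go out of scope here. Once they're gone, reconstructing the
    # shared key from the recorded A and B means solving a discrete log --
    # there is no stored key for an attacker to steal later.
    return A, B, shared_client

A, B, key1 = new_session()
_, _, key2 = new_session()
assert key1 != key2                       # a fresh key for every session
```

Steal every certificate and long-term key you like: the recorded `A` and `B` values still don't give you the session keys.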