On the NSA

Let me tell you the story of my tiny brush with the biggest crypto story of the year.

A few weeks ago I received a call from a reporter at ProPublica, asking me background questions about encryption. Right off the bat I knew this was going to be an odd conversation, since this gentleman seemed convinced that the NSA had vast capabilities to defeat encryption. And not in a ‘hey, d’ya think the NSA has vast capabilities to defeat encryption?’ kind of way. No, he’d already established the defeating. We were just haggling over the details.

Oddness aside it was a fun (if brief) set of conversations, mostly involving hypotheticals. If the NSA could do this, how might they do it? What would the impact be? I admit that at this point one of my biggest concerns was to avoid coming off like a crank. After all, if I got quoted sounding too much like an NSA conspiracy nut, my colleagues would laugh at me. Then I might not get invited to the cool security parties.

All of this is a long way of saying that I was totally unprepared for today’s bombshell revelations describing the NSA’s efforts to defeat encryption. Not only does the worst possible hypothetical I discussed appear to be true, but it’s true on a scale I couldn’t even imagine. I’m no longer the crank. I wasn’t even close to cranky enough.

And since I never got a chance to see the documents that sourced the NYT/ProPublica story — and I would give my right arm to see them — I’m determined to make up for this deficit with sheer speculation. Which is exactly what this blog post will be.

‘Bullrun’ and ‘Cheesy Name’ 

If you haven’t read the ProPublica/NYT or Guardian stories, you probably should. The TL;DR is that the NSA has been doing some very bad things. At a combined cost of $250 million per year, they include:

  1. Tampering with national standards (NIST is specifically mentioned) to promote weak, or otherwise vulnerable cryptography.
  2. Influencing standards committees to weaken protocols.
  3. Working with hardware and software vendors to weaken encryption and random number generators.
  4. Attacking the encryption used by ‘the next generation of 4G phones’.
  5. Obtaining cleartext access to ‘a major internet peer-to-peer voice and text communications system’ (Skype?)
  6. Identifying and cracking vulnerable keys.
  7. Establishing a Human Intelligence division to infiltrate the global telecommunications industry.
  8. And worst of all (to me): somehow decrypting SSL connections.

All of these programs go by different code names, but the NSA’s decryption program goes by the name ‘Bullrun’ so that’s what I’ll use here.

How to break a cryptographic system

There’s almost too much here for a short blog post, so I’m going to start with a few general thoughts. Readers of this blog should know that there are basically three ways to break a cryptographic system. In no particular order, they are:

  1. Attack the cryptography. This is difficult and unlikely to work against the standard algorithms we use (though there are exceptions like RC4.) However there are many complex protocols in cryptography, and sometimes they are vulnerable.
  2. Go after the implementation. Cryptography is almost always implemented in software — and software is a disaster. Hardware isn’t that much better. Unfortunately active software exploits only work if you have a target in mind. If your goal is mass surveillance, you need to build insecurity in from the start. That means working with vendors to add backdoors.
  3. Access the human side. Why hack someone’s computer if you can get them to give you the key?

Bruce Schneier, who has seen the documents, says that ‘math is good’, but that ‘code has been subverted’. He also says that the NSA is ‘cheating’. Which, assuming we can trust these documents, is a huge sigh of relief. But it also means we’re seeing a lot of (2) and (3) here.

So which code should we be concerned about? Which hardware?

SSL Servers by OS type. Source: Netcraft.

This is probably the most relevant question. If we’re talking about commercial encryption code, the lion’s share of it uses one of a small number of libraries. The most common of these are probably the Microsoft CryptoAPI (and Microsoft SChannel) along with the OpenSSL library.

Of the libraries above, Microsoft is probably due for the most scrutiny. While Microsoft employs good (and paranoid!) people to vet their algorithms, their ecosystem is obviously deeply closed-source. You can view Microsoft’s code (if you sign enough licensing agreements) but you’ll never build it yourself. Moreover they have the market share. If any commercial vendor is weakening encryption systems, Microsoft is probably the most likely suspect.

And this is a problem because Microsoft IIS powers around 20% of the web servers on the Internet — and nearly forty percent of the SSL servers! Moreover, even third-party encryption programs running on Windows often depend on CAPI components, including the random number generator. That makes these programs somewhat dependent on Microsoft’s honesty.

Probably the second most likely candidate is OpenSSL. I know it seems like heresy to imply that OpenSSL — an open source and widely-developed library — might be vulnerable. But at the same time it powers an enormous amount of secure traffic on the Internet, thanks not only to the dominance of Apache SSL, but also due to the fact that OpenSSL is used everywhere. You only have to glance at the FIPS CMVP validation lists to realize that many ‘commercial’ encryption products are just thin wrappers around OpenSSL.

Unfortunately while OpenSSL is open source, it periodically coughs up vulnerabilities. Part of this is due to the fact that it’s a patchwork nightmare originally developed by a programmer who thought it would be a fun way to learn Bignum division.* Part of it is because crypto is unbelievably complicated. Either way, there are very few people who really understand the whole codebase.

On the hardware side (and while we’re throwing out baseless accusations) it would be awfully nice to take another look at the Intel Secure Key integrated random number generators that most Intel processors will be getting shortly. Even if there’s no problem, it’s going to be an awfully hard job selling these internationally after today’s news.

Which standards?

From my point of view this is probably the most interesting and worrying part of today’s leak. Software is almost always broken, but standards — in theory — get read by everyone. It should be extremely difficult to weaken a standard without someone noticing. And yet the Guardian and NYT stories are extremely specific in their allegations about the NSA weakening standards.

The Guardian specifically calls out the National Institute of Standards and Technology (NIST) for a standard they published in 2006. Cryptographers have always had complicated feelings about NIST, and that’s mostly because NIST has a complicated relationship with the NSA.

Here’s the problem: the NSA ostensibly has both a defensive and an offensive mission. The defensive mission is pretty simple: it’s to make sure US information systems don’t get pwned. A substantial portion of that mission is accomplished through fruitful collaboration with NIST, which helps to promote data security standards such as the Federal Information Processing Standards (FIPS) and NIST Special Publications.

I said cryptographers have complicated feelings about NIST, and that’s because we all know that the NSA has the power to use NIST for good as well as evil. Up until today there’s been no real evidence of malice, despite some occasional glitches — and compelling evidence that at least one NIST cryptographic standard could have contained a backdoor. But now maybe we’ll have to re-evaluate that relationship. As utterly crazy as it may seem.

Unfortunately, we’re highly dependent on NIST standards, ranging from pseudo-random number generators to hash functions and ciphers, all the way to the specific elliptic curves we use in SSL/TLS. While the possibility of a backdoor in any of these components does seem remote, trust has been violated. It’s going to be an absolute nightmare ruling it out.

Which people?

Probably the biggest concern in all this is the evidence of collaboration between the NSA and unspecified ‘telecom providers’. We already know that the major US (and international) telecom carriers routinely assist the NSA in collecting data from fiber-optic cables. But all this data is no good if it’s encrypted.

While software compromises and weak standards can help the NSA deal with some of this, by far the easiest way to access encrypted data is to simply ask for — or steal — the keys. This goes for something as simple as cellular encryption (protected by a single key database at each carrier) all the way to SSL/TLS which is (most commonly) protected with a few relatively short RSA keys.
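As a toy illustration of why key theft is so effective against recorded SSL/TLS traffic: in the common static-RSA handshake, the client encrypts the premaster secret directly under the server’s long-term key, so whoever later obtains that key can decrypt captured sessions. A sketch with textbook RSA and absurdly small, insecure parameters (illustration only, not real TLS code):

```python
# Textbook RSA with tiny, insecure toy parameters (illustration only).
p_, q_ = 61, 53
n = p_ * q_                            # modulus: 3233
e = 17                                 # public exponent
d = pow(e, -1, (p_ - 1) * (q_ - 1))    # private exponent (Python 3.8+)

premaster = 42                         # stand-in for the TLS premaster secret
wire_blob = pow(premaster, e, n)       # what a fiber tap records off the wire

# Years later, anyone holding the (stolen) private key recovers it,
# and with it the session keys for the whole recorded connection:
assert pow(wire_blob, d, n) == premaster
```

The point of the sketch: with static RSA there is no per-session secret the server forgets, which is exactly what forward-secret key exchanges are designed to fix.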

The good news – and the bad news – is that, since the US hosts the largest number of popular digital online services (like Google, Facebook and Yahoo), many of those critical keys are located right here on US soil. Simultaneously, the people communicating with those services — i.e., the ‘targets’ — may be foreigners. Or they may be US citizens. Or you may not know who they are until you scoop up and decrypt all of their traffic and run it for keywords.

Which means there’s a circumstantial case that the NSA and GCHQ are either directly accessing Certificate Authority keys** or else actively stealing keys from US providers, possibly (or probably) without executives’ knowledge. This only requires a small number of people with physical or electronic access to servers, so it’s quite feasible.*** The one reason I would have ruled it out a few days ago is because it seems so obviously immoral if not illegal, and moreover a huge threat to the checks and balances that the NSA allegedly has to satisfy in order to access specific users’ data via programs such as PRISM.

To me, the existence of this program is probably the least unexpected piece of all the news today. Somehow it’s also the most upsetting.

So what does it all mean?

I honestly wish I knew. Part of me worries that the whole security industry will talk about this for a few days, then we’ll all go back to our normal lives without giving it a second thought. I hope we don’t, though. Right now there are too many unanswered questions to just let things lie.

The most likely short-term effect is that there’s going to be a lot less trust in the security industry. And a whole lot less trust for the US and its software exports. Maybe this is a good thing. We’ve been saying for years that you can’t trust closed code and unsupported standards: now people will have to verify.

Even better, these revelations may also help to spur a whole burst of new research and re-designs of cryptographic software. We’ve also been saying that even open code like OpenSSL needs more expert eyes. Unfortunately there’s been little interest in this, since the clever researchers in our field view these problems as ‘solved’ and thus somewhat uninteresting.

What we learned today is that they’re solved all right. Just not the way we thought.


* The original version of this post repeated a story I heard recently (from a credible source!) about Eric Young writing OpenSSL as a way to learn C. In fact he wrote it as a way to learn Bignum division, which is way cooler. Apologies Eric!

** I had omitted the Certificate Authority route from the original post due to an oversight — thanks to Kenny Paterson for pointing this out — but I still think this is a less viable attack for passive eavesdropping (that does not involve actively running a man in the middle attack). And it seems that much of the interesting eavesdropping here is passive.

*** The major exception here is Google, which deploys Perfect Forward Secrecy for many of its connections, so key theft would not work here. To deal with this the NSA would have to subvert the software or break the encryption in some other way.

114 thoughts on “On the NSA”

  1. Matthew,

    Thanks for this (as ever) excellent overview of the issues.

    Another “route to private keys” that is plausible, and supported by one of the diagrams in the Guardian article, is asking CAs to provide them for key pairs that they have generated on behalf of users/website.


    Kenny Paterson

  2. I presume it would be practical for the NSA to demand that CAs sign a CSR provided by the NSA that has, for example, mail.google.com in the CN field (or to demand the CA's private keys so they can just sign them themselves). Using such a cert for a man in the middle would run the risk of being noticed by highly savvy users who check the certificate fingerprints they receive against ones obtained through other channels, but I'm not sure if anybody does that…

  3. Yes, absolutely. I should have mentioned this in the first place. However the limitation of CA impersonation is that it only works for active (MITM) attacks. The interesting question here seems to be how the NSA/GCHQ obtain plaintext from fiber taps, which seem more difficult to actively attack at any kind of scale. But I've updated the post to mention this.

  4. In the '90s the NSA made the mistake of trying to backdoor things openly – by mandating that in law. That obviously didn't work, and it couldn't work in a country like pre-WTC USA or the UK. So they switched back to good old HUMINT, which was seemingly not much respected by the techies, and it seems to work much better. And that's actually something you would expect an electronic intelligence agency to do, because that's what they are paid for. I'm pretty sure everyone – Russia, France, China – does it exactly the same way, with the difference that in countries like Russia or China it's much easier, as the security industry is saturated with people in uniforms. Plus citizen control is much weaker, and there's virtually no whistleblowing.

  5. Generate a 48-bit random number. Hash it to 256 bits. Encrypt the time of day in milliseconds with a key known to you and exclusive-or it with the hash. The result looks like a 256-bit random number but takes far fewer than 2^256 guesses to find.
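The recipe in this comment can be sketched in a few lines (a hypothetical illustration; the attacker key and parameter choices are invented, and HMAC-SHA256 stands in for the unspecified "encrypt with a key known to you" step):

```python
import hashlib, hmac, os, time

# Hypothetical illustration of the recipe above, not real code from anywhere.
ATTACKER_KEY = b"known-only-to-the-attacker"

def backdoored_random_256(seed: bytes = None, t_ms: int = None) -> bytes:
    """Returns 32 bytes that look uniformly random but carry only 48 bits
    of real entropy, recoverable by whoever holds ATTACKER_KEY."""
    if seed is None:
        seed = os.urandom(6)                 # the 48-bit random number
    if t_ms is None:
        t_ms = int(time.time() * 1000)       # time of day in milliseconds
    h = hashlib.sha256(seed).digest()        # hash it to 256 bits
    mask = hmac.new(ATTACKER_KEY, t_ms.to_bytes(8, "big"),
                    hashlib.sha256).digest() # 'encrypt' the time with a known key
    return bytes(a ^ b for a, b in zip(h, mask))  # XOR with the hash
```

An attacker who knows the key and the approximate timestamp strips the mask and brute-forces the 2^48 possible seeds; everyone else sees what looks like a uniform 256-bit value.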

  6. The case for elliptic curves is interesting. What are the constants Schneier is talking about?

    It is generally assumed that the NSA has put a backdoor into Dual_EC_DRBG (see Wikipedia http://en.wikipedia.org/wiki/Dual_EC_DRBG). There is a constant in Dual_EC_DRBG, but it is actually a point on the elliptic curve – not a coefficient that defines the curve, which many seem to assume Schneier means. The article by Dan Shumow and Niels Ferguson says that 'Point Q is a specified constant. It is not stated how it was derived.' (http://rump2007.cr.yp.to/15-shumow.pdf)

    The NIST standard curves, I assume, specify not such constants but the coefficients that define the curves. Some elliptic curve specialist should shed light on this matter.
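For intuition, here is a toy analogue of the suspected Dual_EC_DRBG trapdoor, using exponentiation in the multiplicative group mod a small prime as a stand-in for elliptic-curve point multiplication (all numbers below are invented toy values, nothing from the real standard):

```python
# All parameters here are invented toy values, not the real standard.
p = 101            # toy prime; the real generator works over an elliptic curve
Q = 2              # public constant (a generator mod p)
d = 7              # the secret trapdoor relation: only the designer knows d
P = pow(Q, d, p)   # the second public constant, P = Q^d mod p

def dual_ec_toy(state: int, n: int) -> list:
    """Emit n outputs; exponentiation stands in for point multiplication."""
    out = []
    for _ in range(n):
        out.append(pow(Q, state, p))   # output_i = Q^state
        state = pow(P, state, p)       # next state = P^state
    return out

outputs = dual_ec_toy(state=53, n=4)

# Whoever knows d turns one public output into the generator's next state:
recovered = pow(outputs[0], d, p)      # (Q^s)^d = (Q^d)^s = P^s = next state
assert dual_ec_toy(recovered, 3) == outputs[1:]   # all future output predicted
```

This is the structural worry Shumow and Ferguson raised: if the two public constants are secretly related, one observed output hands the designer the internal state.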

  7. First, the Certificate Authority angle is silly. I would suggest removing it (or at least rephrasing) lest some readers conclude self-signed certs are more secure.

    Second, I will ask the same question here as on Twitter: Why so much confidence in elliptic curve cryptography? Yes, finite fields have more structure… But that structure has been explored publicly for centuries. Why do you find it so unlikely ECs have some exploitable structure (“smoothness” property or whatever) known to NSA's mathematicians but unknown to ours?

    This hypothesis is very consistent with the revelations so far.

    The Times article says “'We are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit Internet traffic'”. And “One goal in the agency’s 2013 budget request was to 'influence policies, standards and specifications for commercial public key technologies'”.

    Schneier says “Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.”

    Guess which key exchange algorithm the latest OpenSSL prefers by default? (I do not know about CryptoAPI, but I have a guess…) Could “groundbreaking capabilities” simply mean convincing the world to use certain elliptic curves for key exchange?

  8. In light of recent events, I can't help but feel with renewed emphasis something I've thought for a long time now: It's time for a TLS 2.0. Not 1.3; 2.0: a comprehensive overhaul.

    TLS has grown into a beast of a standard; some of the core parts of the design are, in a modern light, at the least questionable. OpenSSL and NSS have significant complexity involved in mitigating these bits of bad design.

    What we need is a simple, concise security layer, easily analyzed and easily audited. TLS is not that.

    We should take TLS1.2 and, for each defect in the protocol which requires implementation mitigation, overhaul matters to avoid that.

    I think our aim should be something along the lines of a protocol that can be implemented in ~5000 lines of clear, commented C (i.e. not following the OpenSSL coding conventions of opacity!), because it's bound to end up implemented in C, and 5000 lines seems like more than enough for a simple if inefficient implementation.

  9. Shouldn't we assume that the NSA is running its own (intermediate) CA anyhow?
    An intelligence agency that can enforce the cooperation of Apple, Microsoft, Google & Co. would not be able to “convince” even one out of more than a hundred CAs? You must be kidding…

  10. Matt, I didn't mean CA impersonation attacks. Look at the diagram in the Guardian article. What could “CA Service Request” refer to? I inferred: provision of RSA decryption keys for websites (and not RSA signing keys for CAs). Our friends in the TLAs do indeed like to keep things passive.

  11. I doubt it. Man-in-the-middle attacks are easy to detect after the fact, making them risky on a large scale. And they are totally unnecessary if you have effective passive attacks, which NSA does.

    If you think this is how they snoop on SSL, then you have too little imagination about their true capabilities, in my opinion.

  12. The name “bullrun” to me implies that they might be able to get some keys/sessions broken and not others – but importantly, like a bull in a bull run, they cannot really target specifically. That sounds to me like either some mathematical breakthrough affecting only some keys, or an implementation error.

  13. Looking forward, is there a need for a mechanism that can harness true randomness (not pseudo randomness) to maintain security? For example, ideas as presented in a recent book, Dynamic secrets in Communication Security (Springer)? The idea of the book is to extract randomness from the environment and use it to refresh keys so that a third party (NSA) cannot keep up without either a major expenditure of resources or a greater chance of being detected.

  14. The picture posted in the blog with yellow lines is very interesting. One of the last goals of NSA for this year 2013 is to: 'Shape the worldwide cryptography marketplace to make it more tractable to advanced cryptanalytic capabilities being developed by NSA'.

    Are we to wait some journalists to praise questionable crypto products? What are NSA's tools in this game?

  15. Oh for Pete's sake, you want to have an encrypted, secure discussion? Get off the grid. Don't use the internet and/or any type of electronic transmission. Go back to good old face-to-face communication, preferably in a quiet, out of the way place, maybe with coffees or teas or, better, ice cream.

  16. The concept of “true randomness” is as logically slippery as, say, “fighting for peace.” More formally, there are n definitions of “random” – with n being an integer greater than one. Substantially greater… and increasing monotonically over time.

    Can't we just agree that we're talking about “entropy?” No – same problem. The quantum folks mean one thing, information theorists another, statistical mechanists another, and so on. It's not that one is “right” or “true” – it's that they're referencing subtly different concepts.

    And, no, it doesn't at all seem probable (in this one systems theorist's mind, fwiw) that there's an eventual convergence – a Grand Unified Theory of the Random – lurking out there. In this, I'll cite very smart folks like Seth Lloyd, Gregory Chaitin, and Stephen Wolfram: all (more or less) accept that “random” is a squishy, variable concept.

    Cryptographers – such as our esteemed host here, Dr. Green – pine for entropy (“entropy”) & randomness as a foundation of viable key construction. From the perspective of systems theory, what I'd say is that they're primarily concerned with ruling out data sources that can be shown explicitly to be _nonrandom_ in a precise way. That definition works just fine in this application – but it certainly doesn't generalize.

    Years ago, a professor of mine made this point by stating that a set of data are not random if there is an algorithm smaller than the data themselves that can produce those data. How do we know whether there's such an algorithm? Well, if we can discover it – somehow – then we know it exists. If we don't discover it… perhaps it still exists. Or not. This is the quintessential incompleteness finding, or if you prefer a pure-form example of Turing's halting paradox.

    (if you want to talk about encoding the algorithm for creating “nonrandom” data, you'll want to look to Algorithmic Information Theory… which, perhaps not surprisingly, ends up being a likely candidate for where to look for post-quantum crypto inspiration – but I digress)

    For just one turtle in the turtles-all-the-way-down story of how “true randomness” embodies logical oxymoron, see:


    Deeper dives are to be found in Wolfram's ANKoS and, per this cite, Chaitin's work:


    I could go on, but I'll spare readers with less desire to dive down this particular rabbit hole. It's perhaps sufficient to acknowledge that randomness is a scalar, not binary, variable (actually not even that – more of a fuzzy matrix of systemic attributes, etc.)

    It's easy to do random number generation wrong – provably, catastrophically so (both unintentionally & via sneaky introduction of non-erroneous “errors” by nefarious agents) – but holding out for “perfect” randomness is a good way to make oneself crazy, in the ontological sense of the term.
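The “algorithm smaller than the data” definition mentioned above can be made concrete with a general-purpose compressor as a crude stand-in (a sketch only; zlib is a very loose proxy for Kolmogorov complexity, but the asymmetry it reveals is the point):

```python
import os, zlib

patterned = b"01" * 500        # produced by a rule far smaller than the data
random_ish = os.urandom(1000)  # no known shorter description

# If a compressor finds a much shorter representation, a smaller
# "algorithm" for the data exists, so the data are not random:
assert len(zlib.compress(patterned)) < 100    # highly compressible -> not random
assert len(zlib.compress(random_ish)) > 900   # incompressible -> plausibly random
```

Note the one-sidedness: compression can prove data are nonrandom, but failure to compress never proves randomness — which is exactly the incompleteness point made above.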

  17. There are any number of ways to generate n very random looking bits that take way less than 2^n guesses to find. The trick is doing it without getting caught.

    Unfortunately C and C++ are *very* easy languages in which to obfuscate code. All it takes is a single binary AND *anywhere* (before the last hash) in the key generation or random number generation to zero out any number of bits the attacker wishes.
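That kind of sabotage is tiny in source form. A hypothetical sketch (in Python rather than C, purely illustrative) of how one masking line quietly destroys half the key space:

```python
import os

def generate_key_backdoored() -> bytes:
    """Hypothetical sabotaged key generator: looks like it draws a
    full 128-bit key, but one stray AND pins 64 of the bits to zero."""
    key = bytearray(os.urandom(16))   # 128 bits from a genuine CSPRNG
    for i in range(8):
        key[i] &= 0x00                # the single 'accidental' AND
    return bytes(key)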

  18. Google embeds its own public keys (or their hashes) in Chrome; that's how they found the Iranian Gmail MITM attack that used a cert from the Comodo hack. So that won't work for the NSA.

    PPTP is worth mentioning – we've known MS-CHAPv2 was useless for a while; see cloudcracker.com

  19. @Kenny: A CA has no business (or should not have any business) knowing a RSA private key (i.e. decryption key) of a website (other than maybe their own website, though this is normally a different part of the company).

    When a CA generates the key pairs for their clients before signing the public one, storing the private one (and transmitting it to NSA) would be possible, though. But then the user should be aware of this.

  20. Except it doesn't because in later podcasts he corrects himself stating that the way he thought the exploit worked wasn't entirely correct and that you didn't need to pass an unexpected value to make it work. Basically, everything he asserted in that podcast wasn't true.

  21. Other than the ones already detailed in the documents, the only one I can think of is to publicize fatal flaws in a cipher scheme or implementation, encouraging the use of ostensibly stronger alternatives with privately known weaknesses. This would be consistent with some commentators' suspicions that Snowden's disclosures could be part of a limited hangout operation (and this is all a bid to get us off of RSA for nefarious reasons), but the consistently strident tone of those trolls advancing that theory leads me to believe otherwise.

  22. Great article, thank you very much. May I make a request/suggestion? I was reading this sentence: “Part of this is due to the fact that it's a patchwork nightmare originally developed by a novice who thought it would be a fun way to learn C.” Seeing as the second half of the sentence is a link, I was very interested to click through to see what I assumed was supporting documentation. I was disappointed instead to see that it was just a tweet from you saying pretty much the same thing! Allow me to suggest that is not, in fact, a good use of a link… perhaps you learned this little nugget in personal conversation with the author? If so, by all means, say so; but if not, perhaps you could link instead to another source that documents the claim. I know this is a nitpick, but I was sort of looking forward to clicking through and reading🙂

  23. Quantum Computing on an industrial scale ?

    Seriously… Tinfoil hat is off.
    Is quantum computing the development that has given them the ability to decrypt previously secure encryption on the fly ?

    Throughout the revelations, there was still “mystery tech”. Untold because…
    1) The methods are illegal/problematic.(compromising systems/ infiltrating endpoints)
    2) New capabilities that are secret in their own technical right.

    (1) we know is true.
    (2) ????????

  24. “** The major exception here is Google, which deploys Perfect Forward Secrecy for many of its connections, so key theft would not work here. To deal with this the NSA would have to subvert the software or break the encryption in some other way”

    ITYM key theft of the server's private key alone would not work. However, poorly constructed client private keys are vulnerable either to theft via maliciously installed software or possibly to other mathematical means.

    Karl Malbrain

  25. A web browser does not check whether a site's certificate has changed, only that it is validly signed by a Certificate Authority. If the NSA has control of one or more Certificate Authorities, which is extremely likely, then it can routinely perform MITM attacks on traffic between browsers and web sites – even ones whose true certificate is signed by a different Certificate Authority.
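A minimal sketch of what the missing “did the certificate change?” check could look like (hypothetical pin value and byte strings; real pinning would hash the DER-encoded certificate obtained through an independent channel):

```python
import hashlib

# Hypothetical pin: in practice, the SHA-256 of the site's real
# DER-encoded certificate, obtained out of band.
PINNED_FINGERPRINT = hashlib.sha256(b"the-real-cert-der-bytes").hexdigest()

def certificate_unchanged(presented_der: bytes) -> bool:
    """Accept only the exact certificate we pinned, regardless of
    which CA signed whatever is presented."""
    return hashlib.sha256(presented_der).hexdigest() == PINNED_FINGERPRINT

assert certificate_unchanged(b"the-real-cert-der-bytes")
assert not certificate_unchanged(b"some-other-validly-signed-cert")
```

Pinning defeats the rogue-CA MITM described above precisely because it stops trusting the CA hierarchy for pinned sites.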

  26. “The Times wrote:

    To conduct the surveillance, the NSA is temporarily copying and then sifting through the contents of what is apparently most e-mails and other text-based communications that cross the border. The senior intelligence official, who, like other former and current government officials, spoke on condition of anonymity because of the sensitivity of the topic, said the NSA makes a “clone of selected communication links” to gather the communications, but declined to specify details, like the volume of the data that passes through them.”

    You're right — looks like an MITM attack is being described. Missed this on the first read.

  27. Is this the blog that JHU wanted taken down? I would be highly curious to know who inside this educational institution would practice such a rank and base form of censorship.

  28. You say business may just go “back to usual”. I don't see how that's possible, since commerce is one of the fundamental areas impacted by this news. I manage the capital markets practice at a mid-size hedge fund. You better believe we're transitioning away from any closed-source U.S.-based commercial applications and accounts. The larger funds already do most everything in-house, but you'll see smaller and smaller funds choosing open-source and bespoke solutions going forward. It's too bad the NSA forgot its original mission of *safeguarding* U.S. business communications instead of undermining and weakening them. Now, all that tech business that has driven the economy the last 20 years is going to go elsewhere.

  29. The NSA is also saturated with people in uniform. The only difference is that they do not wear their uniform. The US and UK are not far behind Russia, China and North Korea these days.

  30. Since servers do not externally authenticate clients, a stolen server key allows MITM even with PFS. Such active attacks are more risky, however, since a colluding client and server can detect it.

  31. And what happens when there is ubiquitous surveillance along with sufficient computing power to automatically create computer-analyzable transcripts of the conversation?

    Live in the woods, completely off the grid?

  32. Microsoft probably has the most 'splaining to do, although there is plenty to go around. We always knew the telcos are complicit.

  33. Yes, the parameters in ECC are the coefficients of the elliptic curve. In the NIST case, they are generated by hashing mysterious constants, presumably supplied by the NSA. But since they are hashed, there is a limit to what they can do.
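The “hash a mysterious seed” pattern can be sketched as follows (a simplification of the FIPS 186 “verifiably random” routine, for illustration only; the seed shown is P-256’s published one):

```python
import hashlib

# P-256's published seed from FIPS 186. The derivation below is a
# simplification of the standard's routine, for illustration only.
SEED = bytes.fromhex("c49d360886e704936a6678e1139d26b7819f7e90")

def derive_candidate(seed: bytes) -> int:
    """Hash the seed; the digest constrains the curve coefficient.
    Anyone can verify seed -> digest, but nobody can audit how the
    seed itself was chosen."""
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")
```

This is exactly the limit the comment describes: the hash binds the coefficients to the seed, but if the seed-picker could afford to try many seeds, “verifiably random” verifies less than it sounds like.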

  34. Security “experts,” generally speaking, know the distinction between information that's sensitive and information that's not.

    Dr. Green is a public figure, writing in his own name and teaching publicly at a non-secret university. Employing clever tricks to “secure” this site against folks finding out that he owns, or runs, or contributes to it would be sort of silly. Yeah, he certainly could stick it on a server buried within Tor hidden services, require folks to find him on bitmessage to gain the address, and confirm visitor authenticity using some clever implementation of blockchain validation or whatever.

    Point being: what's the point?

    It's security amateurs who tend to engage in theatrical, skiddie-style displays of “l337 skillz” like paying for registrant obfuscation on their .com TLD with NSI. That sort of thing might impress a certain class of observers, but it has nothing to do with the guts of genuine security debates.

    A big part of the battle – for those who really do live and die based on security implementation decisions – lies in finding clarity as to what is sensitive and what is not. Anyone who is trying to keep everything private all the time is going to be spread thin relative to someone who chooses what matters and focuses her efforts primarily on those areas above all else.

    There are very few academic practitioners in Dr. Green's category of competence who are willing to stand up in public and offer the kind of cogent, deeply-sourced, hands-on advice he's providing here on this blog. For free. Undoubtedly, there are second-order costs he carries in sharing this kind of knowledge broadly on a public platform – costs which most folks would avoid simply by staying silent in the face of official intimidation. Which is to say: cut him some slack on the troll-ish, vapid criticisms eh?

    Many thanks🙂

  35. I'd love to know (but I doubt we ever will), how much irony was involved in picking those codenames.
    I'm sure the higher ups were told that they refer to civil war battles, nothing more, but I can't help wondering if some drone in the NSA/GCHQ was having a little joke when they picked the name…

  36. I'd love to see an organization like Google, for instance, stand up and tell the NSA that if they want a fight, then, in the words of George Bush, bring it on. You have your 10000 hackers, we have ours. May the best man win. Like Apple vs Samsung, it takes a group with the same financial resources, manpower, and talent to go head to head with an adversary like the NSA, and Google is probably the best candidate. Sergey Brin and Larry Page are libertarians at heart, and you can bet they have no love of the NSA and the games they're playing. They should have a press conference and say straight away that within 36 months, the NSA will be back to square one. Put 'em on call. Just my 2 cents.

  37. I'd like to see it, too. I have a feeling, though, that the NSA doesn't play by any rules (at least not ones they don't like) and they would end up tossing this “Google not cooperating” problem over the virtual cubicle wall to the DOJ/FBI. Next thing you know, a SWAT guy is doing some neck-surfing on Larry while he gets handcuffed and hauled away. Like Louis XIV, who had “Ultima Ratio Regum” engraved on all his cannons, the Feds have the trump card: physical coercive force.

  38. “Not only does the worst possible hypothetical I discussed appear to be true, but it's true on a scale I couldn't even imagine. I'm no longer the crank. I wasn't even close to cranky enough.”

    I don't know about right-wing politics, but if you knew anything about left-wing politics, you would know that statement has been made many, many times before – like the '60s anti-Vietnam activists who tried, but failed, to weed FBI informants out by making everyone take LSD.

    Or, as the old lefty joke goes – picture two cows:
    Cow 1: I just discovered how they make hamburger!!!
    Cow 2: you leftists and your conspiracy theories…

    Another oldie but goodie: one day I'm walking in downtown Brooklyn with my granddad – a lawyer, connected (it is NYC) – and he says, see that building (pointing to a large office building):
    the 5th floor is full of people doing illegal wiretaps.

    Also from NYC: about every 10 or 15 years, going back to the 1920s at least, it comes out that the NYC police department has a secret squad – usually anti-left, but now anti-Muslim/terrorist (the police can't distinguish) – doing all sorts of illegal snooping, and every 10 or 15 years the police department signs a consent decree…

    In other words, your naiveté is even greater than you think!!

  39. I don't think that a new certificate is required. All the NSA needs is the certificate's private key to insert itself actively into the communications channel. They can just use the existing public certificate as their own.

    Passive attacks are another story.

  40. Great to see this post up again. What do you think, Mr Green, about using a cryptlib-based SSL suite rather than OpenSSL? http://www.cs.auckland.ac.nz/~pgut001/cryptlib funny, but I cannot seem to get through to this article he wrote Sept 1 of this year… even the cached version seems down.

    This is Google's cache of http://www.meganews.co.nz/watch/little-brother-more-threat-privacy-dr-peter-gutmann. It is a snapshot of the page as it appeared on Sep 1, 2013 16:20:47 GMT

  41. First principle of keeping secrets: minimize the number of secrets.

    Don't bother to keep anything secret if you don't have to.

  42. Do you think perhaps a new language should be invented to help implement crypto algorithms so that it's easier to tell if an algo has been tampered with?

  43. CAs don't work that way. An intermediate certificate is part of a chain of trust.

    Even in the case where the NSA is running a CA, they do not have access to a site's private keys; they can only redirect traffic and impersonate the site. Even then, they have to sign new certificates with that trusted cert.

    Many sites, like Google, PayPal, and Twitter, implement certificate pinning to defeat this sort of attack.
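
    For what it's worth, the pinning check boils down to comparing a hash of the server's public key (its SubjectPublicKeyInfo) against a stored allow-list; the key bytes and pin set below are hypothetical stand-ins, not any site's real pins. A minimal sketch:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # HPKP-style pin: base64(SHA-256(SubjectPublicKeyInfo DER))
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pin_matches(spki_der: bytes, pinned: set) -> bool:
    # accept the presented key only if its pin is on the allow-list
    return spki_pin(spki_der) in pinned

# hypothetical keys: the 'real' server key vs. an attacker's key
real_key = b"\x30\x82\x01\x22 real-server-public-key"
mitm_key = b"\x30\x82\x01\x22 attacker-public-key"
pins = {spki_pin(real_key)}
```

    A MITM certificate chained to any trusted CA still fails the check, because its public key hashes to a different pin.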

  44. It drove me barmy working on the OpenID standards, because pretty much every security improvement I suggested got immediately dismissed or, worse, totally ignored. At least now I can blame it on something other than the rank ignorance of those in power.

  45. Here's a fun trick – pretend you have to put a backdoor into an open-source product… how do you go about it? It's been a long time since I played that game, but I designed my backdoor around exception handlers, making it look like a mistake – so the exploit is not in the place you'd go looking for it, and it's not obvious either.

    Every “Patch Tuesday”, I ask myself: “I wonder how many of those 'mistakes' were real mistakes? :-)”

    I tell you what else I wonder every Patch Tuesday… where are all the OS X updates?

    How can Microsoft fix a dozen f*kups every week, yet Apple fixes nothing for months and months?

    Actually – the thing I literally think every time I see any update (think: Java/Adobe/etc.) is: “What code, in all this new privilege-granted stuff I'm installing, has been put there deliberately to do tasks other than what I expect?” Think: hackers stealing keys, the NSA installing backdoors, or anyone covering the tracks of their previous exploits…

  46. Here's an even funner trick – pretend you have to put a backdoor into a cipher or hash standard/algorithm. How would you do it?

    Here's how I would do it.

    First – let's consider RSA – you can scramble stuff using a seemingly big random number (private key) to produce output gibberish, but then, using a different random number (public key), you can reconstruct the original message from the gibberish. Cool. You could even go “backwards” – if you feel the need to manipulate the input to derive predictable output gibberish. All you need is some prime numbers and neato maths. If you go ahead and code this stuff in assembly language, all those numbers and algorithms basically end up being loops of stuff that do bit-shifting and additions.
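
    The description above reverses the usual textbook roles (encryption uses the public exponent, decryption the private one; the other direction is signing), but the mechanics really do fit in a few lines. A toy sketch with deliberately tiny, hopelessly insecure primes:

```python
# Textbook RSA with toy primes -- illustrative only, wildly insecure.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: e*d = 1 (mod phi), i.e. 2753

msg = 65
cipher = pow(msg, e, n)    # encrypt: m^e mod n
plain = pow(cipher, d, n)  # decrypt: c^d mod n, recovers 65
```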

    So – let's pretend we work for the NSA, and our job is to build something like SHA-1 (so we can crack it), but which the world thinks is secure. It's got to somehow hide a key (based on prime numbers) in there, and to perform the general elements of an asymmetric crypto operation (loops of bit-shifting and additions), all while appearing to be innocent.

    So – let's “hide” our asymmetric key by calling it an “initialization vector” (IV), and we'll spin a bogus story to go with it – we are basically trying to convince people who don't trust us (with reason) that we're trustworthy, so let's use that idea to our advantage. We'll pretend that it is impossible for us to pick “random numbers” for the IV, because nobody would know that we didn't pick special numbers that give us an advantage, so let's come up with something else that looks sufficiently “random” to everyone, but which it appears we didn't manipulate. Some “nothing up my sleeve” numbers, as it were. Hmm. But it has to be related to prime numbers somehow. What can we use??? Yeah – shit – maybe I'm just an NSA intern, so bugger it – how about I just use actual prime numbers. Sometimes the easiest way of hiding stuff is to stick the damn things right in plain view. Let's go chat to the pure-math research guys and see what kinds of interesting properties they've discovered while messing about with prime numbers. (Time passes.) OK – here's a good one – “the fractional parts of the square roots of the first few prime numbers” can do some really obscure and groovy things. Bingo.

    Now all we've got to do is hide all those bit-shifts and additions so they don't resemble the original asymmetric algorithm we're building. Hey – that's easy – write the original code, unroll it, and write some more code whose job is to re-roll that stuff into something innocent-looking. Hell – if you're too lazy to do that part, just write a genetic algorithm to try to find the answer for you, and leave it running on a supercomputer for a few months.

    Cool – done.
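
    Conspiracy framing aside, “the fractional parts of the square roots of the first few primes” is exactly how SHA-256's initial hash values were chosen – and the point of such nothing-up-my-sleeve numbers is the opposite of the scenario above: the designer has no freedom left to smuggle in structure. The derivation is easy to check yourself (integer arithmetic avoids float rounding):

```python
from math import isqrt

def first_primes(k):
    # first k primes by trial division
    primes, n = [], 2
    while len(primes) < k:
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
        n += 1
    return primes

def frac_sqrt_word(p):
    # first 32 bits of the fractional part of sqrt(p)
    s = isqrt(p << 128)            # floor(sqrt(p) * 2^64)
    return (s >> 32) & 0xFFFFFFFF  # drop the integer part, keep 32 bits

# matches the H0..H7 initial values in FIPS 180-4 (SHA-256)
iv = [frac_sqrt_word(p) for p in first_primes(8)]
```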


  47. The presentation explains how data is intercepted, through an attack known as “Man in the Middle”. In this case, data is rerouted to the NSA central, and then relayed to its destination, without either end noticing.

    A few pages ahead, the document lists the results obtained. “Results – what do we find?” “Foreign government networks”, “airlines”, “energy companies” – like Petrobras – and “financial organizations.”


  48. How can passive wiretapping of HTTPS traffic be achieved?
    Would a trick in the cryptanalysis of the key exchange (insert your ECC blame/paranoia here) do it, or would they need more? And what would that be?

  49. Great, so the NSA just scoops the data from Google's servers. It really is irrelevant when Google grants the NSA access to the servers.

  50. I don't understand how forward secrecy applies here. If your adversary is capable of recording all network traffic, then they can reconstruct the ephemeral key exchanges once they break the historical private key. Maybe I just don't understand what is meant by forward secrecy. I have always assumed all symmetric keys used after the handshake were random anyway. Why would they be stored or predictable?
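
    A sketch may help here. In a DHE handshake the server's long-term key only *signs* the ephemeral values; the session key is derived from secret exponents that both sides erase after the handshake. A recorded transcript plus a later-stolen signing key therefore yields nothing – recovering the session key still means solving a discrete log, so the attacker would have needed an *active* MITM at the time of the connection. A toy finite-field version (32-bit prime, far too small for real use):

```python
import hashlib
import secrets

P = 0xFFFFFFFB   # a 32-bit prime -- toy group, illustrative only
G = 5

def ephemeral_keypair():
    # fresh secret exponent per session, thrown away afterwards
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

a, A = ephemeral_keypair()   # client's ephemeral pair
b, B = ephemeral_keypair()   # server's ephemeral pair

# Both sides derive the same session key from the ephemeral values only.
k_client = hashlib.sha256(str(pow(B, a, P)).encode()).digest()
k_server = hashlib.sha256(str(pow(A, b, P)).encode()).digest()

# The long-term key merely signs A and B to stop active MITM; it never
# encrypts the session key, so stealing it later reveals nothing about
# this session once the exponents are erased:
del a, b
```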

  51. NSA collects everything it can.

    Encrypted data can easily be directed to SSL MITM servers where appropriate keys for all major SSL providers exist for live MITM attack, in which case forward secrecy is of little help.

    NSA is working with hundreds/thousands of companies.

    UK, Australia, Canada, New Zealand, Israel, Sweden have full access to all the raw NSA data about US citizens.

    Consider the possibility that modern notebooks/smartphones have microphones active 24/7 recording each and every word you say, converting it to text and sending to NSA, to be stored forever.

  52. I would be astounded if open-source efforts such as GnuPG and TrueCrypt aren't long since compromised.

    Yes, exposing crypto code to review by the public is a very good thing, but just how often do those programs (especially the inner working parts that you wouldn't think need to change very often) actually get gone over with a fine-tooth comb by somebody good enough to catch a backdoor some NSA mole has snuck in? Remember, no open-source project even tries to 'vet' contributors (and I doubt such a project could function if it did). Even big projects like Mozilla only require that each new piece of code be reviewed by someone else, and it wouldn't be too hard to get another mole to do the reviewing.

  53. You don't even mention SELinux, which is heavily influenced by the NSA and has been in every Linux kernel since 2.6.

    In fact, these days it's hard to extricate yourself from SELinux.

    Can we trust that putting “selinux=0” on the kernel line in GRUB really disables the compromise the NSA built into SELinux?

  54. I think you overlook one more component under “people”: Why wouldn't NSA employ people to engage themselves in open source projects (OpenSSL, Linux, Apache, whatever) without their NSA affiliation being disclosed ?

    There are so many ways you can reduce the cost of cryptographic attacks, if you can get the right “sub-optimal” code into the codebase.

    I wrote about this in ACM Queue: http://queue.acm.org/detail.cfm?id=2508864

  55. The problem with Google's PFS is that they tend to use elliptic curves for exchanging the ephemeral keys – curves published by NIST. Nobody knows how secure these actually are; the NSA once managed to covertly strengthen DES against differential cryptanalysis, so deliberate weakening of EC crypto by selecting curves with special properties nobody else has discovered yet is rather probable.

  56. Programming languages (like C especially) are a major problem for security. All one has to do is look at the “Underhanded C Contest” to understand why. A clever programmer can hide backdoors right in plain sight and make them extremely difficult to detect even by other experts who carefully examine the source code. And these people *know* there is a backdoor to be found, and yet it still proves difficult to find.

    Things become even worse when dealing with crypto code. There are 1,001 ways to subvert a cryptosystem with a backdoor (bad RNG, weird constants, subtle changes in protocol design, partial key leakage, etc.). Any one change is enough to defeat the whole system and make it completely useless even though things appear to be working as intended. The problem with crypto programming is that when it fails, it usually fails *silently*.

    A cryptosystem can be bad even without any obvious sign of a “bug” in the code. RNGs, especially, are susceptible to this sort of tomfoolery, because even a bad RNG can have its output run through a battery of tests that will all give it a clean bill of health. As most of you know, there is no real way to “test” an RNG after the fact. The only things you can test for are *obvious* signs of failure. A clever adversary will not make any such failure obvious (and the adversary doesn't even have to be that clever). So to detect a bad RNG, you have to look at its design from the ground up.
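
    To illustrate that point about tests, here is a hypothetical backdoored generator: its output is a hash stream, so it sails through any black-box statistical battery, yet whoever knows the designer's seed can replay every byte. (This is the flavor of the concern around Dual_EC_DRBG, though the real construction works differently.)

```python
import hashlib

class BackdooredRNG:
    # Output looks statistically perfect, but anyone holding
    # ATTACKER_SEED can regenerate every byte ever produced.
    ATTACKER_SEED = b"known-only-to-the-designer"  # hypothetical constant

    def __init__(self):
        self.counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = self.ATTACKER_SEED + self.counter.to_bytes(8, "big")
            out += hashlib.sha256(block).digest()
            self.counter += 1
        return out[:n]

victim = BackdooredRNG()
key = victim.random_bytes(32)  # the victim's 'random' session key

# the designer replays the exact same stream offline
assert BackdooredRNG().random_bytes(32) == key
```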

    And to make things even *worse*, let's assume that every line of code is correct and properly implemented. Who's to say that the “standards” we use are secure? Even if the code is perfect, if the standards and protocols it is implementing have been “subverted” by the NSA, it makes no difference. One week ago, anyone who suggested this would be labeled a tin-foil-hatter and chuckled at by his colleagues. However, the leaked documents make it clear that this is precisely what NSA is doing. (See Dual_EC_DRBG)

    So, as Dr. Green said, these revelations are going to be an absolute nightmare for the IT security industry. Even if we have the world's leading experts go line by line through the code of the most popular crypto libraries, there is still no guarantee that nothing is amiss. It's going to take a complete verification (with a very skeptical eye) of every standard and protocol out there. Any potential “up my sleeve” number should be looked at with suspicion and thoroughly examined (Schneier thinks the suggested NIST ECC curves are probably compromised by NSA “up my sleeve” constants). This is why I think we all should embrace DJB's Curve25519. Let other experts examine it, debate it, and come to a consensus about its safety. It's certainly better than trusting NIST, and I have no reason to believe DJB is an undercover spook.

    In fact, I will go a step further: I think the crypto community at large needs to form its own unofficial standards body made up of academics from around the world. Let's ditch NIST and start over.

    All of this is a lot of work that is probably, quite frankly, not feasible. NSA has hundreds of top cryptologists who get paid to do this stuff. Academics usually have no such luxury. They either do it out of the goodness of their hearts or do it for research grant money.

    The bottom line is you should not rely on any digital form of communication to be secure — unless you are lucky enough to be an NSA employee who has access to the cutting edge designed by people with no “underhanded” intentions. The NSA cryptologists can generally trust each other. The same cannot be said about “public” cryptology.

    So, Bruce Schneier is right, it's all about trust. You have to ultimately trust someone somewhere eventually. Whether it's your hardware, OS, crypto library or NIST. As Bruce says, “we are just going to have to play the odds.”

  57. There is little to no security industry in Russia except a) serving excessive compliance needs mandated by FSB and FSTEK b) serving “lawful interception” needs (DPI, data mining and stuff), as well being excessive.

    Nothing domestic is worth attention here outside these two categories.

  58. I keep asking something, but it gets deleted. Would that hint of yours mean that if the ECC key exchange is “bugged big time”, then someone could passively sniff the ephemeral symmetric key between the server and client?

  59. Schneier's and other academics' claims are pitiful. They should just follow the work presented at hacking conferences (Black Hat, DEF CON, CCC, CanSecWest…). Most of the attacks used in Bullrun have been known for a very long time, and control over cryptography has been enforced for years. Have a look at http://www.concise-courses.com/infosec/20121220/# (a talk formerly presented at CanSecWest 2011) and at https://www.hackinparis.com/talk-eric-filiol. You will see that the academic world and decision-makers are just blind and deaf.

  60. No one is saying the NSA attacks are somehow new. I think most people are just surprised by the scope of all of this, and that they have been successful without being caught. Of course, there is a good chance they have “sources and methods” that are completely unknown to the world at large.

    I wouldn't be surprised if they have cryptanalyzed various ciphers to a degree that makes them practical to break somehow (perhaps a room full of supercomputers are still needed, but nonetheless practical for NSA and its endless budget). This sort of verifies James Bamford's claim that NSA needed the Utah center because they needed faster (and more) supercomputers to do the requisite cryptanalysis. He also said they made a “huge breakthrough” and that the NSA cryptographers were saying they really needed a new facility to help them realize these theoretical results. The recently disclosed NSA documents verify precisely what Bamford's sources told him (with James Clapper himself actually saying NSA made a “breakthrough” as he asked Congress for money).

    All of this suggests they have reduced the complexity of a cipher or group of ciphers somehow (my bet is on public-keys as opposed to block ciphers). I don't think a “huge breakthrough” implies that they have merely pulled off a MITM attack or stolen some private keys. That sort of thing is routine and wouldn't be anything for anyone at NSA to consider a “breakthrough.” Also, when the GCHQ guys were briefed about the details they were “Gobsmacked.” GCHQ is full of professional cryptologists, so this, once again, implies something ground-breaking.

    Of course, considering that the world uses ciphers and hashes (SHA) designed by NSA and pushed on us by NIST, a breakthrough wouldn't be surprising. This really casts a lot of doubt on everything NIST has done over the years. Is AES somehow weak? It seems unlikely as the community at large voted on and selected it (even though NIST had the ability to override the vote if they so desired). But what we do know is that AES is very, very bad when it comes to side-channel attacks. Perhaps NSA knew this and was secretly pumping their fists in anticipation, which is why they didn't override the vote.

    It also calls into question keccak, the recently selected SHA-3. Was there a vote held for this algorithm by the community at large or was it merely selected “internally” by NIST? Keccak does seem like an impressive design and NIST's reasoning for selecting it makes sense, but considering what we now know about NSA “nudging” one has to view everything NIST does with a skeptical eye.

  61. “Prof. Matthew Green was forced to remove the NSA logo from his blog post. He replaced it with a photo from the German Movie 'Das Leben der Anderen' depicting an Eastern German Stasi officer eavesdropping on innocent citizens. Prof. Green, that is so subtle, yet so classy. You are my hero of the day!”

    “The Lives of Others” http://www.imdb.com/title/tt0405094/ finely-crafted historical drama, prescient cautionary tale, and training film…

  62. The symmetric cipher is basically DES on steroids; the digital signature algorithm is ECDSA with a few signs changed. Stribog (the new hash algorithm) looks a little more interesting, though.

  63. There is a standard problem: even if there is a version 2.0, it has to allow TLS 1.0/1.1/1.2 (and even SSL 3.0) clients to connect and work with the service. Otherwise nobody will connect to your server, and nobody will be able to use your data.

  64. I am pretty sure the large open-source projects like GnuPG or Tor are clean (I don't know what to think of TrueCrypt, though).
    Presently I am contemplating inviting other volunteers to contribute to my own open-source crypto project (google “open source elliptic curve cryptography” → Academic Signature). In fact, the code range that is critical for security is limited in size, and you bet I know where it is. In Academic Signature, e.g., it is 3 modules out of about 30. This is most certainly similar in GnuPG or Tor.
    You can be sure that every patch supplied that would get close to the critical parts would be scrutinized more than just heavily. Actually there would be no need for patches in my project😉 Even if there were, and someone else supplied one (extremely unlikely!), there are tools like “filediff” or “meld” that highlight code differences, and anything getting too close to e.g. the PRNG would sound a red alert!
    I am sure Werner Koch does this for GnuPG with full determination too, and protects the code backbone like his eyeballs. If one person is in charge, it all depends on the trustworthiness of that one person. There may be moles in larger projects, but they cannot just “change the code”. That assumption is naive.

    After all, the main tendency for crypto developers is to get more and more paranoid during the development process! You have to fight the paranoia and rather struggle to retain some level of trust in others😉

    By the way – those who don't trust NIST's elliptic curve parameters can use mine. At this URL you can get curve parameters for up to 1024-bit size (NIST will only give you up to 521):
    They are free for noncommercial use. (I don't think the NIST domain parameters are insecure, though, but don't use the short ones!)

  65. This business about SSL is damn bad. Govt can take down anyone at will now.

    It's amazing that a tenured professor has to self-censor his scientific facts and findings based on not wanting to “appear like a conspiracy nut”!

    What does that make of us concerned, lesser citizens who don't have the science but share the concerns? Worse conspiracy nuts!

    …and I was freaked out (years late, it turns out) about Intel's “HyperThreading”….

    Who knew – hyperthreading allows one thread to monitor and share the entire data structure of the other thread on the same core?! Linuxes don't implement hyperthreading (in any case the advantage is roughly 10–25%, I've heard). Can you imagine! Outrageous. What other traps are hardwired?

  66. Interesting!

    Steve Marquess of the OpenSSL Software Foundation explained it well and in an open manner. He says: “I can shed some light, at least regarding the implementation of the OpenSSL FIPS Object Module.
    The Dual EC DRBG was implemented at the request of a paying customer[*]…

    [*]Who was the customer? OSF is bound by some 200 separate NDAs (Non-Disclosure Agreements)…”

  67. Hi Chris,
    Intel's hyperthreading, and the analogous technique from AMD, are very useful tricks to speed up processing. On modern computers the bottleneck for long-number arithmetic, e.g., is not processor speed but RAM access times. (I know this since I develop crypto code.)
    Thanks to these tricks, writing fast code is not so much about minimizing processor work; it is more like optimizing a tango between cache hits and memory-access incidents.
    Regarding the side-channel attacks: if the NSA managed to run a concurrent thread on the processor of the PC in your office, a side-channel attack is the least of your problems. It always depends on your system being “clean”. If it is not, you can safely eradicate the word security from your lexicon.

    Different threads always share a memory range; there would be a much larger overhead otherwise. And yes, Linux is of course using this. Indeed, on Linux, processing can be sped up substantially by using four parallel threads despite only having two cores.
    Linux is about twice as fast as Windows 7 with my own long-number arithmetic – I always kept wondering what Windows 7 does in all this extra time (XP runs at the same speed as Linux!?). Is it calling home😉


  68. MG: “All of these programs go by different code names, but the NSA's decryption program goes by the name 'Bullrun' so that's what I'll use here.”

    “Bullrun” is the decryption.
    And in the Intel link you provided, they say:
    “Intel Secure Key, was previously code-named Bull Mountain Technology”

    Does “Bullrun” mean to run it *against* “Bull Mountain Technology”, or does “Bullrun” mean *to* *run* “Bull Mountain Technology”?

  69. There already is – after some crypto hype for the masses – a backlash that says using crypto is even more dangerous than using plaintext. (E.g., “the PGP keys' web of trust on keyservers brings good possibilities to analyze social networks… so better avoid encryption.”)

    Also, the talk about compromised SSL might be a motivation to use non-encrypted data transfer, because if encryption does not work, but eats the performance of the computer (weak implementations? by accident or wanted?), then it's time to switch it off…

    If the NSA can read anything, let's talk in plaintext…
    …it also looks less suspicious…

    So this might be an effect of these stories (by accident? wanted?)

    …but some people nevertheless might switch to encryption.

  70. “Of course, considering that the world uses ciphers and hashes (SHA) designed by NSA and pushed on us by NIST, a breakthrough wouldn't be surprising. This really casts a lot of doubt on everything NIST has done over the years.”

    WTC Disaster Study?

    Possibly the WTC was built on elliptic curves too…
    …maybe that's why the study didn't find anything abnormal.

  71. Hello Prof Anders,

    I am not an expert, but I think “hyper”-threading (i.e., Intel's) was discovered (http://www.daemonology.net/hyperthreading-considered-harmful/) to leave important data in the shared core cache for open reading after “hyper”-threading was invoked!

    That's why I believe Linuxes avoid “hyper”-threading.

    The regular OS threading (implemented by the operating system), however, continues apace in Linux. If I have it correct.

    Aside: it's not just the NSA to blame; the titans of (dot-com) industry (clearly Intel and Cisco too) are a little too easily and eagerly supporting all this spying, IMHO. The entire “Dot-Com Complex” has an ambition in this matter no less than the old military-industrial complex's ambition to do any govt bidding. Money is king.

  72. Hello Kris,

    Thank you for the info. I hadn't been aware of this flaw in Intel's hyperthreading. Indeed, you suspected right: I had been thinking of normal multithreading and didn't apply due diligence in my response.
    Thanks again.

  73. Recap:

    Pretty much all proprietary hardware including CPUs have been backdoored via government coercion and bribes

    TLS/SSL is still complete junkware and pwned by an agent clicking a button

    All phones are backdoored, or will be. The baseband stack runs in supervisor mode with no NX bit anyway, wide open to attack

    They've been photographing every piece of mail and keep it in a database forever for meatspace metadata

    Drones are being used on us

    Crypto standards are now suspect, NIST has less credibility than Saddam's Information Minister

    The intel agencies of 5 countries are fully rogue and accountable to nobody including those countries' executive branch leaders.

    Compilers and toolchains can't be trusted as open source projects are either purposely infested with agents or just plain incompetent/don't care enough to do deterministic building when releasing binaries


    We are 12 minutes to doomsday on the Totalitarian clock

  74. If NSA is deliberately compromising American-made cryptography, crypto that US government agencies and businesses rely on, then it is only a matter of time before this backfires, and the wrong “Eve” gains an entry that decimates US wealth in a big way. (For all we know, this has happened already but NSA has managed to disavow responsibility.)

    That's when things will begin to change for the better: only when the blowback eventually scorches the high echelons of power, hubris, and military “intelligence”. Wait for it. Things will get much worse before they get better.

  75. They are clearly not accountable to the principles of the nation that pays for them. I call that biting the hand that feeds you. And there are consequences. For that reason they should be tossed out with the trash. We got along fine without them before.

  76. Cryptography and encryption are popular words nowadays, pertaining to the NSA's activity to retrieve information, and it's only going to get worse knowing that just to comment on a blog you are being watched. Think of the paranoia created just by how many ways they invent to spy on people, then using the word “terrorist” to make it seem OK.
