A history of backdoors

The past several months have seen an almost eerie re-awakening of the ‘exceptional access’ debate — also known as the ‘Crypto Wars’. For those just joining the debate, the TL;DR is that law enforcement wants software manufacturers to build wiretapping mechanisms into modern encrypted messaging systems. Software manufacturers, including Google and Apple, aren’t very thrilled with that.
The funny thing about this debate is that we’ve had it before. It happened during the 1990s with the discussion around the Clipper chip, and the outcome was not spectacular for the pro-‘access’ side. But not everyone agrees.

Take, for example, former NSA general counsel Stewart Baker, who has his own reading of history:

A good example is the media’s distorted history of NSA’s 1994 Clipper chip. That chip embodied the Clinton administration’s proposal for strong encryption that “escrowed” the encryption keys to allow government access with a warrant. … The Clipper chip and its key escrow mechanism were heavily scrutinized by hostile technologists, and one, Matthew Blaze, discovered that it was possible with considerable effort to use the encryption offered by the chip while bypassing the mechanism that escrowed the key and thus guaranteed government access. … In any event, nothing about Matt Blaze’s paper questioned the security being offered by the chip, as his paper candidly admitted.

The press has largely ignored Blaze’s caveat.  It doesn’t fit the anti-FBI narrative, which is that government access always creates new security holes. I don’t think it’s an accident that no one talks these days about what Matt Blaze actually found except to say that he discovered “security flaws” in Clipper.  This formulation allows the reader to (falsely) assume that Blaze’s research shows that government access always undermines security. 

It’s not clear why Mr. Baker is focusing on Clipper, rather than the much more recent train wreck of NSA’s ‘export-grade crypto’ access proposals. It’s possible that Baker just isn’t that familiar with the issue. Indeed, it’s the almost proud absence of technological expertise on the pro-‘government access’ side that has made this debate so worrying.

But before we get to the more recent history, we should clarify a few things. Yes: the fact that Clipper — a multi-million dollar, NSA-designed technology — emerged with fundamental flaws in its design is a big deal. It matters regardless of whether the exploit led to plaintext recovery or merely allowed criminals to misuse the technology in ways they weren’t supposed to.

But Clipper is hardly the end of the story. In fact, Clipper is only one of several examples of ‘government access’ mechanisms that failed and blew back on us catastrophically. The most recent examples came just this year, with the FREAK and LogJam attacks on TLS — vulnerabilities that affected nearly one in three secure websites, including (embarrassingly) the FBI and NSA themselves. And these did undermine security.

With Mr. Baker’s post as inspiration, I’m going to spend the rest of this post talking about how real-world government access proposals have fared in practice — and how the actual record is worse than any technologist could have imagined at the time.

The Clipper chip

[Image credit: Travis Goodspeed, CC BY 2.0, via Wikimedia]
Clipper is the most famous of government access proposals. The chip was promoted as a ubiquitous hardware solution for voice encryption in the early 1990s — coincidentally, right on the eve of a massive revolution in software-based encryption and network voice communications. In simple terms, this meant that technologically Clipper was already a bit of a dinosaur by the time it was proposed.

Clipper was designed by the NSA, with key pieces of its design kept secret and hidden within tamper-resistant hardware. One major secret was the design of the Skipjack block cipher it used for encryption. All of this secrecy made it hard to evaluate the design, but the secrecy wasn’t simply the result of paranoia. Its purpose was to inhibit the development of unsanctioned Clipper-compatible devices that would bypass Clipper’s primary selling point — an overt law enforcement backdoor.

The backdoor worked as follows. Each Clipper chip shipped with a unique identifier and a unit key, both programmed by blowing fuses during manufacture. Upon negotiating a session key with another Clipper, the chip would transmit a 128-bit Law Enforcement Access Field (LEAF) that contained an encrypted version of the ID and session key, wrapped using the device’s unit key. The government maintained a copy of each device’s unit key, split and stored at two different sites.

To protect the government’s enormous investment in hardware and secret algorithms, the Clipper designers also incorporated an authentication mechanism: a 16-bit checksum computed over the other components of the LEAF, all of it encrypted using a family key shared between all devices. This prevented a user from tampering with or destroying the LEAF as it transited the wire — any other compatible Clipper could decrypt and verify the checksum, then refuse the connection if it was invalid.
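
To make that structure concrete, here is a minimal sketch of LEAF construction and verification in Python. It’s a toy model, not the real thing: Skipjack and the actual checksum function were classified, so the stand-in primitives (a hash-derived stream cipher and a truncated HMAC), the helper names, and the exact field layout are my own assumptions. Only the overall shape — session key wrapped under the unit key, bundled with the unit ID and a 16-bit checksum, all encrypted under the shared family key — follows the public description.

```python
import hashlib, hmac, os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Stand-in for the classified Skipjack cipher: XOR with a hash-derived keystream."""
    stream = hashlib.sha256(key + b"keystream").digest()
    return bytes(d ^ stream[i % len(stream)] for i, d in enumerate(data))

toy_decrypt = toy_encrypt  # XOR stream: encryption and decryption are the same operation

def leaf_checksum(family_key: bytes, payload: bytes) -> bytes:
    """Stand-in for the secret 16-bit LEAF checksum: a truncated HMAC."""
    return hmac.new(family_key, payload, hashlib.sha256).digest()[:2]

def make_leaf(unit_id: bytes, unit_key: bytes, family_key: bytes, session_key: bytes) -> bytes:
    """Bundle the escrowed session key, unit ID and checksum, wrapped under the family key."""
    escrowed = toy_encrypt(unit_key, session_key)   # only the escrow agents hold unit_key
    payload = unit_id + escrowed
    return toy_encrypt(family_key, payload + leaf_checksum(family_key, payload))

def verify_leaf(family_key: bytes, leaf: bytes) -> bool:
    """What a receiving Clipper does: decrypt with the family key, check the checksum."""
    plain = toy_decrypt(family_key, leaf)
    payload, checksum = plain[:-2], plain[-2:]
    return hmac.compare_digest(checksum, leaf_checksum(family_key, payload))

if __name__ == "__main__":
    family_key = b"shared-by-every-clipper-chip"
    unit_key = os.urandom(10)                        # escrowed, split between two sites
    leaf = make_leaf(b"\x00\x00\x12\x34", unit_key, family_key, session_key=os.urandom(10))
    print("receiver accepts LEAF:", verify_leaf(family_key, leaf))   # True
```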

A simple way to visualize the Clipper design is to present it as three legs of a tripod, (badly) illustrated as follows:

[Figure: the Clipper design drawn as a three-legged tripod]

The standout feature of Clipper’s design is its essential fragility. If one leg of the tripod fails, the entire construction tumbles down around it. For example: if the algorithms and family keys became public, then any bad actor could build a software emulator that produced apparently valid but useless LEAFs. If tamper resistance failed, the family key and algorithm designs would leak out. And most critically: if the LEAF checksum failed to protect against on-the-wire modification, then all the rest would be a waste of money and time. Criminals could hack legitimate Clippers to interoperate without fear of interception.

In other words, everything had to work, or nothing made any sense at all. Moreover, since most of the design was secret, users were forced to trust in its security. One high-profile engineering failure would tend to undermine that confidence.

Which brings us to Matt Blaze’s results. In a famous 1994 paper, Blaze looked specifically at the LEAF authentication mechanism, and outlined several techniques for bypassing it on real Clipper prototypes. These ranged from the ‘collaborative’ — the sender omits the LEAF from its transmission, and the receiver reflects its own LEAF back into its device — to the ‘unidirectional’, where a sender simply generates random garbage LEAFs until it finds one with a valid checksum. With only a 16-bit checksum, the latter technique requires on average 65,536 attempts, and the sender’s own device can be used as an oracle to check the consistency of each candidate. Blaze was able to implement a system that did this in minutes — and potentially in seconds, with parallelization.
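
The ‘unidirectional’ trick is simple enough to simulate. The sketch below is a self-contained toy model — the 16-bit check is a truncated HMAC of my own choosing, since the real checksum function was secret — but it captures the economics of the attack: each random candidate passes with probability 1/65,536, so the expected number of trials is 65,536, and nothing stops you from running many searches in parallel. (Blaze’s version took minutes because his oracle was an actual Clipper device; in pure software the search finishes almost instantly.)

```python
import hashlib, hmac, os

FAMILY_KEY = b"shared-by-every-clipper-chip"   # assumed toy key, not a real Clipper value

def chip_accepts(leaf: bytes) -> bool:
    """Toy oracle for 'does the chip accept this LEAF?': a candidate passes only if its
    last 16 bits match the (secret) checksum over the rest -- probability 1/65,536."""
    expected = hmac.new(FAMILY_KEY, leaf[:-2], hashlib.sha256).digest()[:2]
    return leaf[-2:] == expected

def forge_leaf():
    """Blaze's 'unidirectional' bypass, modelled: throw random 128-bit garbage at the
    oracle until one candidate happens to carry a valid checksum."""
    trials = 0
    while True:
        trials += 1
        candidate = os.urandom(16)             # a random 128-bit garbage LEAF
        if chip_accepts(candidate):
            return candidate, trials

if __name__ == "__main__":
    leaf, trials = forge_leaf()
    print(f"found a checksum-valid (but useless) LEAF after {trials:,} trials")
    # Each trial succeeds with probability 2**-16, so the expected count is 65,536.
```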

That was essentially the ballgame for Clipper.

And now we can meditate on both the accuracy and utter irrelevance of Mr. Baker’s point. It’s true that Blaze’s findings didn’t break the confidentiality of Clipper conversations, nor were the techniques themselves terribly practical. But none of that mattered. 

What did matter were the implications for the Clipper system as a whole. The flaws in authentication illustrated that the designers and implementers of Clipper had made elementary mistakes that fundamentally undermined the purpose of all those other, expensive design components. Without the confidence of users or law enforcement, there was no reason for Clipper to exist.

SSL/TLS Export ciphersuites: FREAK and LogJam

This would be the end of the story if Clipper were the only ‘government access’ proposal to run off the road due to bad design and unintended consequences. Mr. Baker and Matt Blaze could call it a draw and go their separate ways. But of course, the story doesn’t end with Clipper.

Mr. Baker doesn’t mention this in his article, but we’re still living with a much more pertinent example of a ‘government access’ system that failed catastrophically. Unlike Clipper, this failure really did have a devastating impact on the security of real encrypted connections. Indeed, it renders web browsing sessions completely transparent to a moderately clever attacker. Even worse, it affected hundreds of thousands of websites as recently as 2015.

The flaws I’m referring to stem from the U.S. government’s pre-2000 promotion of ‘export’-grade cryptography in the SSL and TLS protocols, which are used to secure web traffic and email all over the world. In order to export cryptography outside of the United States, the U.S. government required that web browsers and servers incorporate deliberately weakened ciphers that were (presumably) within the NSA’s ability to access.

Unsurprisingly, while the export regulations were largely abandoned as a bad job in the late 1990s, the ciphersuites themselves live on in modern TLS implementations because that’s what happens when you inter a broken thing into a widely-used standard. 

For the most part these weakened ciphers lay abandoned and ignored (but still active on many web servers) until this year, when researchers showed that it was possible to downgrade normal TLS connections to use export-grade ciphers. Ciphers that are, at this point, so weak that they can be broken in seconds on a single personal computer.
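
The part worth pausing on is that FREAK and LogJam are not exotic cryptanalysis — they exploit a missing consistency check in the handshake. A man-in-the-middle rewrites the client’s cipher offer to ask for export-grade suites, the server (which never removed them) obliges with a 512-bit key, and a buggy client accepts that key even though it never asked for export crypto. The toy handshake below is only a schematic of that logic — none of the names, messages, or checks are real TLS, and the actual client bugs lived in the protocol state machine — but it shows how one omitted check re-opens a 1990s hole in a 2015 connection.

```python
from dataclasses import dataclass

@dataclass
class ServerKeyExchange:
    suite: str
    key_bits: int            # strength of the key material the server actually sent

def server(offered_suites):
    """A 2015 server that never disabled its 1990s export suites: it will happily
    serve a breakable 512-bit 'export' key if the offer seems to ask for one."""
    if "RSA_EXPORT" in offered_suites:
        return ServerKeyExchange("RSA_EXPORT", 512)
    return ServerKeyExchange("RSA", 2048)

def mitm(offered_suites):
    """The attacker on the wire rewrites the client's offer to export-only suites."""
    return ["RSA_EXPORT"]

def client(check_consistency):
    offered = ["RSA", "ECDHE_RSA"]       # the client never offers export suites itself...
    response = server(mitm(offered))     # ...but its offer crosses a hostile network
    # The FREAK/LogJam-style flaw: a buggy client skips the check that what came back
    # is consistent with what it offered, and proceeds with the 512-bit key.
    if check_consistency and response.suite not in offered:
        raise ValueError("downgrade detected, aborting handshake")
    return response

if __name__ == "__main__":
    print("buggy client negotiated:", client(check_consistency=False))   # 512-bit export key
    try:
        client(check_consistency=True)                                    # a careful client refuses
    except ValueError as err:
        print("careful client:", err)
```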

Logjam is still unpatched in Chrome/MacOS as of the date of this post.

At the high watermark in March of this year, more than one out of three websites were vulnerable to either FREAK or LogJam downgrade attacks. This included banks, e-commerce sites, and yes — the NSA website and FBI tip reporting line. Hope you didn’t care much about that last one.

Now you could argue that the export requirements weren’t designed to facilitate law enforcement access. But that’s just shifting the blame from one government agency to another. Worse, it invites us to consider the notion that the FBI is going to get cryptography right when the NSA didn’t. This is not a conceptual framework you want to hang your policies on.

Conclusion

This may sound disingenuous, but the truth is that I sympathize with Mr. Baker. It’s frustrating that we’re so bad at building security systems in this day and age. It’s maddening that we can’t engineer crypto reliably even when we’re trying our very best.

But that’s the world we live in. It’s a world where we know our code is broken, and a world where a single stupid Heartbleed or Shellshock can burn off millions of dollars in a few hours. These bugs exist, and not just the ones I listed. They exist right now as new flaws that we haven’t discovered yet. Sooner or later maybe I’ll get to write about them.

The idea of deliberately engineering weakened crypto is, quite frankly, terrifying to experts. It gives us the willies. We’re not just afraid to try it. We have seen it tried — in the examples I list above, and in still others — and it’s just failed terribly.

12 thoughts on “A history of backdoors”

  1. There seems to be a common phenomenon in computing where abilities go from rather weak to fully universal in a single step. We can build systems that are (supposed to be) secure from everyone or that are secure from basically no one, but we can't seem to build anything in between. We can build computers that can only solve very limited sets of problems or that can solve every problem that we know how to solve, but we can't build ones that can only solve selected problems.

    I wonder if there is some sort of common theoretical underpinning to both of these?

  2. Nice recap. Baker could clearly use the history lesson. I'm still waiting for terrorists to use gene-printing technology to re-release anthrax and smallpox, like he predicted in his book. Dude could have been a great novelist if he hadn't become a lawyer.

  3. “With only a 16-bit checksum, the latter techniques requires on average 65,536 attempts”. Should the average not be 32,768 due to the birthday paradox?

  4. “Ciphers that are, at this point, so weak that they can be broken in seconds on single personal computer.”
    Not really. oclHashCat's 40-bit RC4 brute force (for Office, but TLS would be similar) still takes 4 hours on a single GPU.

  5. You aren't looking for a collision (i.e. any two messages that share the same tag) so the birthday paradox does not apply. Even if it did it would be ~256 instead.

    However, you are correct in that it does take 32,768 on average because that is half of the keyspace so you will have found a correct answer half of the time.

  6. No, it's 65536, since you're not checking each of the checksums, but instead generating messages and then checking if their checksum matches what you want. Each try has a probability of 1/65536 of succeeding, and so the number of tries required follows a geometric distribution with mean 65536.

  7. That's 4 hours for any number of documents or 0.1 second/document if you have about 3 TiB of disk… wait did Atom add “unsalted document” support? There's a bug, the salt is applied before the 40 bit key is generated. Thus making the salt worthless if you can find known plaintext. Which there is a lot of. Even at 4 hours/doc that's super broken.

    I found a bug in a ColdFusion9 library's key generator for encrypting passwords. So Atom and I recently tried to crack the Adobe dump key with this information. The bug is that the key is hex, and with 3DES the least significant bit in each byte/character is a parity bit (you can thank NSA's contributions for this). This makes the key space crackable. The cost is about 2^51 time and 2^25 memory. This never hit the news because Adobe didn't use the default way to generate keys and therefore the key was not found :(. Same bug is in ColdFusion11, but it's a safer key space, 2^64.

    This would be another awesome example for why backdoors in crypto are bad but meh.

  8. There is. Turing came up with it. It is called “Turing Equivalence” — if it can emulate a Turing Machine, it can do anything any general purpose computer can do. It is a surprisingly low bar.

  9. Hi Matthew,

    It would be great if you could write something about CloudFlare, specifically regarding the “two ssl connections” that may leave unencrypted information on CloudFlare's servers.

    Regards from Chile!
