A missing post (updated)

Update (8/27): I’ve put the original post back up.

Update (8/9): I’ve re-written this post to include a vague, non-specific explanation of the bug. I’ve now confirmed the problem with one vendor, who has asked for a week to issue a patch. So far I haven’t had a response from the DCP’s technical people. And yes, I do realize someone PasteBinned the original post while it was up.

A few people have asked what happened to the post that was in this space just a few hours ago. No, you’re not going crazy. It was here.

The post contained a long, detailed evaluation of the HDCP v2 protocol. My idea was to do real-time evaluation of an industry protocol that hasn’t been deployed yet — a kind of ‘liveblogging’ cryptanalysis. What I expected to find was some bad practices, which I would gently poke fun at. I didn’t expect to find anything serious.

I was wrong in that initial judgement, with some caveats. I’m going to give a vague and non-specific summary here, and I hope to re-post the detailed technical post in a few days when I’ve heard (something, anything!) from DCP, the organization that maintains HDCP.

In case you’ve never heard of it, HDCP is a security protocol used to ‘protect’ video traveling over wired and wireless networks. There are two versions. Version 1 is in your TV today, and was seriously compromised in 2010. Version 2 is much better, but has only been deployed in a few products — including those that implement MiraCast (formerly Wi-Fi Display).

Version 2 contains a key agreement protocol that’s designed to establish a session encryption key between a transmitter (your phone, for example) and a receiver (a TV). Once this key is established, the transmitter can encrypt all video data going over the wire.

What I discovered in my brief analysis is a flaw in the key agreement protocol that may allow a man-in-the-middle to recover the session key (actually the ‘master secret’ used to derive the session key). This could potentially allow them to decrypt content. More on that in a minute, though.

I also discovered some slightly less serious flaws elsewhere in the protocol. It turns out that the DCP already knows about those, thanks to some enterprising work by a smart guy at an unnamed vendor (who deserves credit, and will get it once I put the original post back up).

Now for a few big caveats about the session key bug.

The bug I found does not get you all the way to decrypting HDCPv2 streams in practice, thanks to a tiny additional protection I missed while writing the original post. I don’t think much of this protection, since it involves a secret that’s stored in every single HDCPv2-compliant device. That’s a pretty lousy way to keep a secret.

And of course I haven’t personally verified this in any real HDCP devices, since I don’t own any. Although if I did, I could use this nifty HDCP plugin for Wireshark to do some of the work.

The issue has been confirmed by one vendor, who is working on a patch for their product. Their products are used in real things that you’ve heard of, so I’m trusting that they’d know.

The last thing I want to address is why I published this, and why I subsequently pulled it.

When I wrote the original post I thought HDCP v2 was just a ‘paper spec’ — that there were no devices actually using it. Shortly after posting, I came across one commercial product that does claim to support HDCPv2. Later I discovered a few others. To be on the safe side, I decided to pull the post until I could notify the vendors. Then through sheer ineptitude I briefly re-posted it. Now I’m doing my best to put the toothpaste back in the tube.

As soon as I get some feedback I intend to put the post back up. A post which, incidentally, was not intended to break anything, but rather to serve as a lesson in just how complicated it is to design your own protocol. I suppose it’s achieved that goal.

Anyway, I’m putting this up as a placeholder in case you’re curious about what happened or why the heck I’m not blogging. Writing a long technical post and then having to can it is a drag. But hopefully we’ll be back to our regularly-scheduled programming in no time at all.

Indifferentiability

After umpteen weeks writing about broken stuff, I’m thrilled to tell you that for once, nothing awful is happening in the crypto world. It won’t last. But while it does, let’s grab the opportunity to talk about something constructive. 

Now a word of warning: what I’m going to talk about today is fairly wonky and (worse), involves hash functions. If you’re not feeling up for this, this is your cue to bail and go read something nice about buffer overflows.

For those still with me, the subject of this post is the design of hash functions, and more specifically: the indifferentiability proofs that designers write to argue for their security. I was surprised to find that most people have never heard of these proofs, and thus have no idea why they’re useful. That’s too bad, since they’re extremely important to the way we analyze hash functions today.

Merkle-Damgård

This is not Ivan Damgård. (Seriously Google?)

The best way to begin any discussion of hash function design is to take a quick peek inside of the hash functions we actually use. Since the most popular hashes today are MD5 (ugh) and the SHA family, the right place to start is with the ‘Merkle-Damgård’ paradigm.

To understand Merkle-Damgård, you need to understand that cryptographers love to build complicated things out of simpler components. Under the hood of most block ciphers you’ll find S-boxes. Similarly, if you take the lid off a Merkle-Damgård hash function — surprise! — you find block ciphers. Or at least, something very much like them.

This approach dates to a 1979 proposal by a young cryptographer named Ralph Merkle. What Merkle showed is a way to build hash functions with a variable-length input, using any fixed one-way compression function (a one-way function that spits out fewer bits than it takes in). While Merkle wasn’t specific about the function, he suggested that DES might be a good candidate.

Expressed as a colorful diagram, the Merkle construction looks something like this:

Merkle-Damgård Construction (source: Wikipedia because I’m too lazy to draw my own diagrams). IV is a fixed value. f is a one-way compression function.

The beauty of Merkle’s proposal is that it’s relatively simple to understand. You simply chop your message into blocks, then feed each block into the function f along with the output of the previous function evaluation. Throw in some padding that encodes the message length and you’re done.
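To make the chaining concrete, here’s a toy sketch in Python. To be clear, this is my own illustration, not any real hash: the compression function is a stand-in and the padding step is skipped. It also previews the ‘length extension’ problem we’ll get to in a moment.

```python
# Toy Merkle-Damgard chaining -- illustration only, NOT a real hash function.
# The 'compression function' f is a stand-in, and the padding that encodes the
# message length is omitted for brevity.
import hashlib

BLOCK = 16  # bytes per message block and per chaining value

def f(chain: bytes, block: bytes) -> bytes:
    """Stand-in one-way compression function: 32 bytes in, 16 bytes out."""
    return hashlib.sha256(chain + block).digest()[:BLOCK]

def toy_md(message: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    assert len(message) % BLOCK == 0       # pretend the caller padded already
    h = iv
    for i in range(0, len(message), BLOCK):
        h = f(h, message[i:i + BLOCK])     # each block plus the previous output
    return h

# Because the final output *is* the last chaining value, anyone who knows
# toy_md(secret || msg) can simply keep chaining -- the length-extension
# problem discussed below.
secret, msg, extra = b"K" * BLOCK, b"PAY ALICE $100.,", b"..AND BOB $9999,"
tag = toy_md(secret + msg)
assert f(tag, extra) == toy_md(secret + msg + extra)   # forged without the secret
```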

Of course there’s a difference between proposing a technique and showing that it actually works. It would take ten more years, but at CRYPTO 1989, Merkle and another cryptographer named Ivan Damgård independently submitted formal analyses of Merkle’s proposal. What they showed is that as long as the function f is itself collision-resistant (and the message is padded in a way that encodes its length), the resulting hash function is guaranteed to be collision-resistant. The rest, as they say, is history.

The popularity of Merkle-Damgård can be attributed in part to its security proof. But it also owes something to some major practical advantages:

  1. You can use any secure block cipher as the function f, with just a few tweaks.
  2. M-D hash functions can be pretty darn fast, again depending on f and how you use it.
  3. M-D hashes allow you to digest ‘live’ data streams, where you don’t know in advance how much data you’re going to be hashing. 
Of course, Merkle-Damgård hashes also have serious weaknesses. The most famous is the ‘length extension attack’, in which an attacker, given only H(M) for some unknown message M, can ‘tack on’ additional blocks of her own choosing. This issue spells big trouble for people who think that H(key || message) is a good Message Authentication Code.

What’s interesting about the length-extension issue is not that it leads to broken MACs. I mean, that is interesting, and it’s why you should use HMAC. But what’s really interesting is that this flaw doesn’t represent a violation of the collision-resistance guarantee. The two issues are in fact completely orthogonal. And this tells us something fundamental. Namely: collision-resistance is not enough.

Today’s implementers do all kinds of crazy things with hash functions, and many of those applications require much more than collision-resistance. To achieve the necessary properties, we first need to figure out what they are. And that requires us to think hard about the following question:

What the heck is a secure hash function?

If you crack a typical security textbook (or visit the Wikipedia page on hash functions), you’ll see a long list of things a hash function ‘must’ accomplish. The list usually starts with these:
  1. Collision resistance. It should be hard to find any pair of messages M1, M2 such that H(M1) == H(M2).
  2. Pre-image resistance. Given only a hash output h, it should be hard to find a ‘pre-image’ M2 such that H(M2) == h.

Now leave aside the technical fact that none of the unkeyed hash functions we use today are ‘truly’ collision-resistant. Or that the above definition of pre-image resistance implies that I can hash my cat’s name (‘fluffy’) and nobody can invert the hash (note: not true. Go ask LinkedIn if you don’t believe me.) The real problem is that these definitions don’t cover the things that people actually do with hash functions.

For example, take the construction of PRNGs. A common PRNG design hashes together large pools of gathered entropy in the hope that the result will be sufficiently uniform for cryptographic work. This is so common that it’s probably happening somewhere on your computer right now. And yet, absolutely nothing in the definitions above implies that this technique is safe!* Similar problems exist for key derivation functions, and even for signature schemes like ECDSA which clearly require hash functions that are more than just collision-resistant.
The more you look into the way that people use hash functions, the more you realize that they really need something that produces ‘random-looking’ output. Unfortunately, this notion is surprisingly hard to formalize. Hash functions are unkeyed, so they’re not pseudo-random functions. What in the world are people asking for?

Random oracles and indifferentiability

The answer, if you dig hard enough, is that people want hash functions to be random oracles.

Random oracles are cryptographers’ conception of what an ‘ideal’ hash function should be. Put succinctly, a random oracle is a perfectly random function that you can evaluate quickly. Random functions are beautiful not just because the output is random-looking (of course), but also because they’re automatically collision-resistant and pre-image resistant. ‘Behaves like a random function’ is the only requirement you ever need.

The problem with random functions is that you just can’t evaluate them quickly: you need exponential storage space to keep them, and exponential time to evaluate one. Moreover, we know of nothing in the ‘real’ world that can approximate them. When cryptographers try to analyze their schemes with random functions, they have to go off into an imaginary fantasy world that we call the ‘random oracle model’.
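If you want intuition for what a random oracle ‘is’, the standard trick is lazy sampling: answer each new query with fresh randomness and remember the answer. Here’s a little sketch of my own (Python). It behaves perfectly, but notice that the table of answers has to be stored and shared forever, which is exactly why nothing real works this way.

```python
# A lazily-sampled 'random oracle': perfect as a thought experiment, useless
# as a real hash, since every answer must be remembered (and shared) forever.
import os

class RandomOracle:
    def __init__(self, out_len: int = 32):
        self.table = {}            # grows with every distinct query
        self.out_len = out_len

    def query(self, m: bytes) -> bytes:
        if m not in self.table:
            self.table[m] = os.urandom(self.out_len)   # fresh random answer
        return self.table[m]

RO = RandomOracle()
assert RO.query(b"hello") == RO.query(b"hello")   # consistent on repeat queries
assert RO.query(b"hello") != RO.query(b"world")   # (except with tiny probability)
```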

But ok, this post is not to judge. For the moment, let’s imagine that we are willing to visit this fantasy world. An obvious question is: what would it take to build a random oracle? If we had a compression function that was good enough — itself a random function — could we use a technique like Merkle-Damgård to get the rest of the way?

In 2004, Maurer, Renner and Holenstein gave us a powerful tool for answering this question. What they showed is that it’s always possible to replace functionality A (e.g., a random oracle) with another functionality B (e.g., an ideal compression function) provided that the following rules are satisfied:

  1. There exists a way to ‘construct’ something ‘like’ A out of B.
  2. There exists a way to ‘simulate’ something ‘like’ B using A.
  3. An attacker who interacts with {constructed A-like thing, B} cannot tell the difference (i.e., can’t differentiate it) from {A, simulated B-like thing}.

The definition of simulation gets a bit wonky, but expressed in simpler language all this means is: if you can show that your hash function, instantiated with an ‘ideal’ compression function, looks indistinguishable from a random oracle, and you can show that a manufactured compression function, built using a random oracle as an ingredient, looks indistinguishable from an ideal compression function, then you can always replace one with the other. That is, your hash function is ‘good enough’ to be a random oracle.

The following year, Coron, Dodis, Malinaud and Puniya applied this framework to Merkle-Damgård hash functions. Their first result was immediate: such a proof does not work for Merkle-Damgård. Of course this shouldn’t actually surprise us. We already know that Merkle-Damgård doesn’t behave like a random oracle, since random oracles don’t exhibit length-extension attacks. Still it’s one thing to know this, and another to see a known problem actually turn up and screw up a proof. So far, no problem.

What Coron et al. showed next is much more interesting:
  • They proved formally that Merkle-Damgård can be made indifferentiable from a random oracle, as long as you apply a prefix-free encoding to the input before hashing it. Prefix-free encodings prevent length-extensions by ensuring that no message can ever be a prefix of another.
  • Next, they proved the security of HMAC applied to a Merkle-Damgård hash.
  • Finally, and best of all, they showed that if you simply drop some bits from the last output block — something called a ‘chop’ construction — you can make Merkle-Damgård hashes secure with much less work.

The best part of Coron et al.’s findings is that the chop construction is already (inadvertently) in place on SHA384, which is constructed by dropping some output bits from its big-brother hash SHA512. The modern hash variants SHA512/224 and SHA512/256 also have this property.** So this theoretical work already has one big payoff: we know that (under certain assumptions) these hashes may be better than some of the others.
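If you want to see the ‘chop’ idea in code, here’s a small sketch of my own. One caveat: the standardized truncated variants (SHA384, SHA512/224, SHA512/256) also use distinct initial values, so simply truncating SHA512 is not literally the same function; dropping output bits is just the essence of the construction.

```python
# A 'chop'-style hash: run a wide Merkle-Damgard hash, then drop output bits.
# Illustration only; the real SHA-384 and SHA-512/256 also use different IVs.
import hashlib

def chop_sha512(message: bytes, out_len: int = 32) -> bytes:
    return hashlib.sha512(message).digest()[:out_len]

h = chop_sha512(b"hello world")
print(len(h), h.hex())
# Because half of the final chaining value is thrown away, an attacker can no
# longer resume the chain, so length extension doesn't apply.
```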

And these results have bigger implications. Now that we know how to do this, we can repeat the process for just about every candidate hash function anyone proposes. This lets us immediately weed out obvious bugs, and avoid standardizing another hash with problems like the length extension attack. This process has become so common that all of the SHA3 candidates now sport exactly such an indifferentiability proof.

Of course, in the real world, indifferentiability only takes you so far. It does tell us something, but it doesn’t tell us everything. Sure, if the compression function is perfect, you obtain a strong result about the hash function. But compression functions are never perfect. Real compression functions have glitches and oddities that can make these theoretical results irrelevant. This is why we’ll always need smart people to arm wrestle over which hash we get to use next.

In conclusion

If I had it in me, I’d go on to talk about the SHA3 candidates, and the techniques that each uses to achieve security in this model. But this has already been a long post, so that will have to wait for another time.

I want to say only one final thing.

This is a practical blog, and I admit that I try to avoid theory. What fascinates me about this area is that it’s a great example of a place where theory has directly come to the aid of practice. You may think of hash functions as whizzing little black boxes of ad-hoc machinery, and to some extent they are. But without theoretical analysis like this, they’d be a whole lot more ad-hoc. They might not even work.

Remember this when NIST finally gets around to picking Keccak BLAKE.

Notes:

* For a ridiculous example, imagine that you have a secure (collision-resistant, pre-image resistant) hash function H. Now construct a new hash function H’ such that H'(M) = {“long string of 0s” || H(M)}. This function is as collision-resistant as the original, but won’t be very useful if you’re generating keys with it.
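Here’s that pathological hash written out as a sketch, with SHA256 standing in for the ‘secure’ hash H:

```python
# Footnote (*) in code: H'(M) = "long string of 0s" || H(M). Any collision in
# H' is a collision in H, so H' inherits collision resistance -- but half of
# every output is predictable, which is fatal if you're deriving keys from it.
import hashlib

def H(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()

def H_prime(m: bytes) -> bytes:
    return b"\x00" * 32 + H(m)

print(H_prime(b"my entropy pool").hex())   # 64 zero hex digits, then the hash
```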

** Thanks to Paulo Barreto for fixing numerous typos and pointing out that SHA512/256 and /224 make excellent candidates for chop hashes!

Flame, certificates, collisions. Oh my.

Update 6/6: Microsoft has given us more details on what’s going on here. If these collision attacks are truly novel, this tells us a lot about who worked on Flame and how important it is. 

Update 6/7: Yes, it’s officially a novel MD5 collision attack, with the implication that top cryptographers were involved in Flame’s creation. We truly are living in the future.

See detailed updates (and a timeline) at the bottom of this post.

If you pay attention to these things, you’ve heard that there’s a new piece of government-issued malware making the rounds. It’s called Flame (or sometimes Skywiper), and it’s basically the most sophisticated — or most bloated — piece of malicious software ever devised. Kaspersky gets the credit for identifying Flame, but the task of tearing it to pieces has fallen to a whole bunch of different people.

Normally I don’t get too excited about malware, cause, well, that kind of thing is somebody else’s problem. But Flame is special: if reports are correct, this is the first piece of malware ever to take advantage of an MD5 certificate collision to enable code signing. Actually, it may be the first real anything to use MD5 certificate collisions. Neat stuff.

What we know is pretty sketchy, and is based entirely on the information-sparse updates coming from Microsoft’s security team. Here’s the story so far:

Sometime yesterday (June 3), Microsoft released an urgent security advisory warning administrators to revoke two Intermediate certificates hanging off of the Microsoft root cert. These belong to the Terminal Services Licensing service. Note that while these are real certificates — with keys and everything — they actually have nothing to do with security: their sole purpose was to authorize, say, 20 seats on your Terminal Services machine.

Despite the fact that these certs don’t serve any real security purpose, and shouldn’t be confused with anything that matters, Microsoft hung them off of the same root as the real certificates it uses for, well, important stuff. Stuff like signing Windows Update packages. You see where this is going?

Actually, this shouldn’t have been a real problem, because these licensing certs (and any new certificates beneath them) should have been recognizably different from code-signing certificates. Unfortunately, someone in Product Licensing appears to have really screwed the pooch. These certificates were used to generate Licensing certificates for customers. And each of those certificates had the code signing bit set.

The result? Anyone who paid for a Terminal Services license could (apparently) sign code as if they were Microsoft. If you’re wondering what that looks like, it looks like this.

This is obviously a bad thing. For many reasons. Mikko Hypponen points out that many organizations whitelist software signing by Microsoft, so even if institutions were being paranoid, this malware basically had carte blanche.

Ok, so far this is really bad, but just a garden-variety screwup. I mean, a huge one, to be sure. But nothing really interesting.

Here’s where that changes. Just today, Microsoft released a new update. This is the new piece:

The Flame malware used a cryptographic collision attack in combination with the terminal server licensing service certificates to sign code as if it came from Microsoft. However, code-signing without performing a collision is also possible.

Now, I’m not sure why the ‘collision attack’ is necessary if code-signing is possible without a collision. But who cares! A collision attack! In the wild! On top-secret government-issued Malware! Dan Brown couldn’t hold a candle to this.

If we look at the Licensing certificates in question, we do indeed see that at least one — created in 2009 — uses the MD5 hash function, something everyone knows is a bad idea:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            3a:ab:11:de:e5:2f:1b:19:d0:56
    Signature Algorithm: md5WithRSAEncryption
        Issuer: OU=Copyright (c) 1997 Microsoft Corp., OU=Microsoft Corporation, CN=Microsoft Root Authority
        Validity
            Not Before: Dec 10 01:55:35 2009 GMT
            Not After : Oct 23 08:00:00 2016 GMT
        Subject: C=US, ST=Washington, L=Redmond, O=Microsoft Corporation, OU=Copyright (c) 1999 Microsoft Corp., CN=Microsoft Enforced Licensing Intermediate PCA
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)

And so… Well, I don’t really know. That’s where we run out of information. And end up with a lot of questions.

For one thing, why did the Flame authors need a collision? What extra capabilities did it get them? (Update: the best theory I’ve heard comes from Nate Lawson, who theorizes that the collision might have allowed the attackers to hide their identity. Update 6/6: The real reason has to do with certificate extensions and compatibility with more recent Windows versions. See the end of this post.)

More importantly, what kind of resources would it take to find the necessary MD5 collision? That would tell us a lot about the capabilities of the people (sorry: government contractors) who write malware like this. In 2008 it took one day on a supercomputer, i.e., several hundred PS3s linked together.

And just out of curiosity, what in the world was Microsoft doing issuing MD5-based certificates in 2009? I mean, the code signing bit is disastrous, but at least that’s a relatively subtle issue — I can understand how someone might have missed it. But making MD5 certificates one year after they were definitively shown to be insecure, that’s hard to excuse. Lastly, why do licensing certificates even need to involve a Certificate Signing Request at all, which is the vector you’d use for a collision attack?

I hope that we’ll learn the answers to these questions over the next couple of days. In the mean time, it’s a fascinating time to be in security. If not, unfortunately, a very good time for anyone else.

***

Update 6/5: For those who aren’t familiar with certificate collision attacks, I should clear up the basic premise. Most Certificate Authorities sign a Certificate Signing Request (CSR) issued by a possibly untrustworthy customer. The idea of an MD5 collision-finding attack is to come up with two different CSRs that hash to the same thing. One would be legitimate and might contain your Terminal Services licensing number. The other would contain… something else. By signing the first CSR, Microsoft would be implicitly creating a certificate on the second batch of information.

Finding collisions is a tricky process, since it requires you to muck with the bits of the public key embedded in the certificate (see this paper for more details). Also, some CAs embed a random serial number into the certificate, which really messes with the attack. Microsoft did not.

Finally, some have noticed that Microsoft is still using a root certificate based on MD5, one that doesn’t expire until 2020, and hasn’t been revoked. What makes this ok? The simple answer is that it’s not ok. However, Microsoft probably does not sign arbitrary CSRs with that root certificate, meaning that collision attacks are not viable against it. It’s only the Intermediate certs, the ones that actually sign untrusted CSRs, that you need to worry about today.

***

Update 6/6: Microsoft has finally given us some red meat. Short summary: the Terminal Services Licensing certs do work to sign code on versions of Windows prior to Vista, with no collision attack needed. All in all, that’s a heck of a bad thing. But it seems that more recent versions of Windows objected to the particular X.509 extensions that were in the TS licensing certs. The government (or their contractors) therefore used a collision attack to make a certificate that did not have these extensions. From what I’m reading on mailing lists, the collision appears to be “new, and of scientific interest”.

If this pans out, it’s a big deal. Normally the government doesn’t blow a truly sophisticated cryptanalytic attack on some minor spying effort. Getting MD5 collisions up and running is a big effort; it’s not something you can do from a cookbook. But developing novel collision techniques is a step beyond that. I take this to mean that Flame’s authors were breaking out the big guns, which tells us something about its importance to our government. It also tells us that the creation of Flame may have involved some of the top mathematicians and cryptographers in (our?) intelligence services. This is an important datapoint.

***

Update 6/7: It looks like it does pan out. Marc Stevens — one of the authors of the earlier MD5 certificate collision papers — confirms that Flame’s certificate uses a novel collision-finding technique.

“Flame uses a completely new variant of a ‘chosen prefix collision attack’ to impersonate a legitimate security update from Microsoft. The design of this new variant required world-class cryptanalysis”

We can only speculate about why this would be necessary, but a good guess is that Flame is old (or, at the very least, the crypto techniques are). If the collision technique was in the works prior to 2009 (and why wouldn’t it be?), the published Stevens et al. results wouldn’t have had much relevance to the Flame authors. For those who like details, a (very incomplete) timeline looks something like this:

  1. Mid-1990s: MD5 is effectively obsolete. Switch to a better hash function!
  2. August 2004: Wang and Yu present first (‘random’), manually-found collision on MD5.
  3. 2004-2007: Collisions extended to ‘meaningful’ documents. Lots of stuff omitted here.
  4. May 2007: (CWI) Stevens et al. publish a first impractical technique for colliding certificates.
  5. Late 2008: (CWI) Stevens et al. develop a practical certificate collision.
  6. December 2008: Vendors notified.
  7. February 2009: Paper submitted to CRYPTO.
  8. Summer 2009: Source code released.

Even assuming that the CWI team kept their result perfectly secure, conference program committees are hardly built for secrecy — at least, not from governments. So it’s reasonable to assume that the Stevens et al. technique was known by February 2009, possibly earlier. Which means that the Flame developers may have had their own approach under development sometime before that date.

The really interesting question is: when did secret collision research get ahead of the public, academic stuff? Post-2007? Post-2004? Pre-2004? I doubt we’ll ever know the answer to this question. I sure would love to.

Update 6/12: Alex Sotirov has a great Summercon presentation with many of the details. The word from those who know is: don’t take his $20,000 figure too literally. We know nothing about the techniques used to find this collision.

SSL MITM Update

The other day I snarked about Trustwave’s decision to sell subordinate root (‘skeleton’) certificates to their corporate clients, for the explicit purpose of destabilizing the web’s Public Key Infrastructure (sorry: ‘legitimately’* intercepting TLS connections). This practice (new to me) is ostensibly only permitted in limited, controlled settings (usually to spy on a company’s employees).

Trustwave argues that the key was always safe inside of a Hardware Security Module and besides, they’re not doing it any more. (Kind of like saying that you handed out the master key to every door on earth but it’s ok ’cause you chained it to a hubcap.)

Unfortunately, the really bad news is that Trustwave may not be the only major CA implicated in this practice. And at least one browser vendor is planning to do something about it:

Dear Certification Authority, 
This note requests a set of immediate actions on your behalf, as a 

participant in the Mozilla root program.  

Please reply by {date 2 weeks out} to confirm completion of the 
following actions or state when these actions will be completed.  

1) Subordinate CAs chaining to CAs in Mozilla’s root program may not be 
used for MITM purposes, regardless of whether it is in a closed and 
controlled environment or not. Please review all of your subordinate CAs 
to make sure that they may not be used for MITM purposes. Any existing 
subordinate CAs that may be used for that purpose must be revoked and 
any corresponding HSMs destroyed by {date TBD}. For each subordinate CA 
that is revoked, send me: 
a) The certificate that signed the subCA. If it is a root certificate in 
NSS, then the root certificate’s subject and SHA1 fingerprint. 
b) The Serial Number of the revoked certificate. 
c) The CRL that contains the serial number of the revoked certificate. 

As a CA in Mozilla’s root program you are ultimately responsible for 
certificates issued by you and your subordinate CAs. After {date TBD} if 
it is found that a subordinate CA is being used for MITM, we will remove 
the corresponding root certificate. Based on Mozilla’s assessment, we 
may also remove any of your other root certificates, and root 
certificates from other organizations that cross-sign your certificates.  

2) Please add a statement to your CP/CPS committing that you will not 
issue a subordinate certificate that may be used for MITM or traffic 
management of domain names or IPs that the party does not legitimately 
own or control. Send me the URL to the updated document(s) and the 
impacted sections or page numbers. 

Participation in Mozilla’s root program is at our sole discretion, and 
we will take whatever steps are necessary to keep our users safe. 
Nevertheless, we believe that the best approach to safeguard that 
security is to work with CAs as partners, to foster open and frank 
communication, and to be diligent in looking for ways to improve. Thank 
you for your participation in this pursuit. 

Regards, 
Kathleen Wilson 
Module Owner of Mozilla’s CA Certificates Module 

Now I’m no bomb-thrower, but if it were up to me, {Date TBD} would be yesterday and there would be no re-entry for the CAs caught doing this kind of thing. Still, I’m glad that Mozilla is doing this, and we’re all lucky that they have the independence and browser share to force this kind of change.

But not everything is sunshine and rainbows:

  1. We have to trust that the CAs in question will respond honestly to Mozilla’s inquiry and will voluntarily exit a (presumably) lucrative business. This relies very much on the honor system, and it’s hard to presume much honor in a CA that would sell such a product in the first place.
  2. Mozilla only represents 25% of the browser share, and that seems to be falling. That’s probably enough to make the difference — today — but it’d be nice to hear something similar from Microsoft or Google.
  3. We still lack a good client-side mechanism for detecting and reporting unusual (that is: correctly signed, but inconsistent) certificates. Given the news from Trustwave, such a mechanism seems more useful than ever.

We cannot possibly have faith in the security of the Internet when CAs are willing to engage in this kind of practice — even if they do it under the most ‘carefully-controlled’ conditions.

* Whatever that means.

Is there an Enigma bubble?

First it was .com stocks, then it was housing. Now it’s WWII-era German Enigma machines. From a recent CNN story:

An Enigma machine which featured in a Hollywood movie about the codebreakers of World War II has smashed auction estimates and sold for a world record price. 

The encoding device sparked a three-way bidding war when it went under the hammer at Christie’s in London Thursday, selling for £133,250 ($208,137) — more than double the upper estimate of £50,000. 

Christie’s said the previous record for an Enigma machine was £67,250, at the same auction house, in November 2010.

I for one would love to own an Enigma. But unless it’ll lead me to a cache of buried Nazi gold I have to draw the line at $100,000. It’s not like the Enigma algorithm is getting better.

But lack of funding doesn’t mean you shouldn’t be creative.

When I worked at AT&T I was told an (apocryphal?) story about a noted cryptographer who couldn’t afford to purchase an Enigma for himself, so he set out instead to blackmail one out of the NSA. Allegedly it took him only four conference submissions before they gave in. The last paper described how to attack a significant cryptosystem with paper and pencil.

This sounds so improbable that I can’t believe it really happened — which means that it probably did. If anyone knows the story and has a source for it, please drop me a line.

How (not) to use symmetric encryption

This is supposed to be a blog about cryptographic engineering. That means crypto, but it also means software engineering, protocol design, HCI and hardware engineering (fair warning: my hardware experience mostly consists of solder burns).

And yet, in describing the attacks of the past few weeks, I’ve mostly been talking about basic symmetric encryption. This seems to be an area that people get wrong, no matter how straightforward it sounds.

So at the risk of boring my readership to tears — I’m going to spend the next two posts talking about the dos and don’ts of symmetric encryption, particularly symmetric encryption with block ciphers. I may also branch out a little into key derivation and management, but I know you’ll be understanding.

I realize that for some of you this may not make for scintillating reading, but here’s my thinking: we do it once now, we’ll never have to discuss it again. Right?

Excellent. In this first post I’m going to start with the don’ts.

Symmetric Encryption Don’t #1: Don’t encrypt with ECB mode

Block ciphers are designed to process discrete chunks of data. For example, AES works on 128-bit blocks. To encrypt longer messages with them, we use one of several “modes of operation”. These modes are not all created equal.

Tux image: Larry Ewing. (I will not thank the GIMP, no matter what his license says.)

ECB (Electronic Codebook) mode is by far the stupidest. Essentially you’re just chopping the message up into blocks, then using the raw block cipher to encipher each block individually. There are two problems with ECB mode:

  1. It’s not randomized. This means that anytime you encrypt a given message under a key, you’ll get exactly the same ciphertext. This goes for substrings of the message as well.
  2. It treats each block of the message independently. As a result, some of the structure of the message can leak through. This includes things like padding, which will produce predictable patterns in the ciphertext.

The first point can be a problem in some circumstances. Imagine, for example, that you’re sending a relatively small number of messages (e.g., commands for a remote system). Every time you send a given command, you’re sending exactly the same ciphertext. This gets obvious pretty fast.
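Here’s a quick sketch of the problem. I’m using PyCryptodome here purely for illustration (the key and message are made up); any AES library will show you the same thing.

```python
# ECB in action: identical plaintext blocks give identical ciphertext blocks.
from Crypto.Cipher import AES
import os

key = os.urandom(16)
message = b"ATTACK AT DAWN!!" * 4          # four identical 16-byte blocks

ciphertext = AES.new(key, AES.MODE_ECB).encrypt(message)
blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]

print([b.hex() for b in blocks])
assert len(set(blocks)) == 1               # all four ciphertext blocks match
```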

I would say that the second problem is the more serious one. Perhaps you’ve seen Wikipedia’s classic image of an ECB-mode encrypted TIFF file (right). But probably this seemed a little contrived to you — after all, who uses TIFF files anymore?

So allow me to give my favorite example of why ECB mode is problematic. This image comes from a software packaging system that used ECB mode to encrypt executable files. All I’ve done is open one of those encrypted files as a raw bitmap image. You’ll have to squint a little.

An executable file encrypted using ECB mode. Click to see a larger version.

This doesn’t give away the contents of the executable, but it gives a pretty good picture of where different segments are. Just look for the funny patterns and tire tracks. Just having this little bit of information might give you a nice head start on finding those segments when they’re in memory, which is potentially what you’re going to do next.

Symmetric Encryption Don’t #2: Don’t re-use your IVs

Every block cipher mode of operation except for ECB (which you shouldn’t use!) employs a special per-message nonce called an Initialization Vector, or IV. The basic purpose of an IV is to ensure that the encryption function works differently every time; it adds an element of randomness, or at least unpredictability to your ciphertexts.

Unfortunately, developers seem genuinely stumped by IVs. Maybe this isn’t their fault. Every mode of operation has slightly different rules about how to pick IVs, and a slightly different set of consequences for when you screw it up.

So let’s start with something simple. No matter what mode you use, this kind of thing is never ok:

This chunk of bad advice comes from an ancient (and hopefully obsolete) version of the AACS specification. But it’s hardly the exception. Grep for “IV” in the source repositories of just about any major software house, and I guarantee you’ll find plenty of stuff like this.

Why is this a problem? Let me count the ways:

  1. At a minimum, it eliminates any random behavior in the encryption scheme. With a fixed IV, a given message will always encrypt to the same ciphertext (if you’re using the same key). This goes for two messages that are the same up to a certain point. See my discussion of ECB mode above for why this can be a problem.
  2. If you’re using a stream cipher mode of operation like CTR or OFB, it’s a disaster. If you encrypt two different messages with the same key, and the IV is fixed, then an attacker can XOR two ciphertexts together. This will give them the XOR of the two underlying plaintexts; there’s a sketch of this right after the list. (Think this will be useless? I doubt it, especially if they’re clever.)

    By the way, this kind of thing also happens when people forget to change the IV when encrypting multiple versions of a file. Don’t do that either.

  3. If you’re using a chaining mode like CBC, use of a fixed IV can still lead to plaintext recovery. See, for example, this chosen plaintext attack on TLS, which only requires the adversary know which IV is being used. This type of attack is pretty tricky to implement, but it’s definitely possible.
  4. It will make you look stupid and embarrass you when a professional audits your code.
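Here’s the stream-cipher disaster from item 2, sketched with PyCryptodome. The hard-coded nonce and the messages are invented for illustration.

```python
# The fixed-IV mistake with CTR mode: same key + same nonce = same keystream.
from Crypto.Cipher import AES
import os

key = os.urandom(16)
fixed_nonce = b"\x00" * 8          # the mistake: a hard-coded, reused nonce

def encrypt(msg: bytes) -> bytes:
    return AES.new(key, AES.MODE_CTR, nonce=fixed_nonce).encrypt(msg)

c1 = encrypt(b"WIRE $1000 TO ACCOUNT 12345")
c2 = encrypt(b"MEET ME AT THE USUAL PLACE.")

# XORing the two ciphertexts cancels the keystream and yields the XOR of the
# two plaintexts -- no key required, and crib-dragging does the rest.
xor_of_plaintexts = bytes(a ^ b for a, b in zip(c1, c2))
print(xor_of_plaintexts)
```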

Clearly some of these issues are application-specific. Maybe you don’t think anyone will be able to leverage them. You might even be right — in this version of the application. But sooner or later, you or a future developer will bolt on new features, deploy it in the cloud, or make it into a browser applet. When they do, all of these issues will magically go from theoretically vulnerable to stupidly vulnerable.

And people will blame you.

So how do you use IVs correctly? I’ll talk about this more in my next post. But if you’re really chomping at the bit, my advice is to take a look at NIST’s specifications for block cipher modes (the SP 800-38 series). (I must warn you, however: please don’t operate heavy machinery while reading these documents.)

Symmetric Encryption Don’t #3: Don’t encrypt your IVs

So you’ve generated your IV correctly, you’ve used it correctly, but now you’re hung up on a final question: what do I do with this darn thing?

As I’ve said, IVs make people nervous. People know they’re not keys, but they’re not ciphertexts either. They wonder: is this an important value? Should I just send it over the wire as it is? Hmm, just to be safe, I’d better encrypt it. Even if I’m wrong, it can’t hurt.

As this Reddit commenter can attest, what you don’t know can hurt you.

Here’s a simple rule of thumb. IVs are not keys. They’re not secret. If you’ve chosen the IV correctly, you can send it along with your ciphertext in the clear. You should authenticate it (see below), but you should never encrypt it.

The worst thing you can do is encrypt your IVs using the same key that you’re using to encrypt messages. The absolute worst example is when you’re using CTR mode encryption, and you make the mistake of encrypting your IV using ECB mode. When you do this, anyone can XOR the first block of ciphertext with the encrypted IV, and obtain the plaintext of that block.
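To see how directly this backfires, here’s a sketch (PyCryptodome again, made-up key and message). The ‘encrypted IV’ is exactly the first keystream block, so it hands an eavesdropper the first block of plaintext for free.

```python
# Worst case: CTR-mode encryption, with the IV 'protected' by ECB-encrypting
# it under the *same* key. The encrypted IV equals the first keystream block.
from Crypto.Cipher import AES
from Crypto.Util import Counter
import os

key = os.urandom(16)
iv = os.urandom(16)                        # the initial counter block

ctr = Counter.new(128, initial_value=int.from_bytes(iv, "big"))
ciphertext = AES.new(key, AES.MODE_CTR, counter=ctr).encrypt(b"TOP SECRET PLANS")

encrypted_iv = AES.new(key, AES.MODE_ECB).encrypt(iv)   # the 'helpful' extra step

# An eavesdropper XORs the two values she sees on the wire -- no key needed.
recovered = bytes(c ^ k for c, k in zip(ciphertext[:16], encrypted_iv))
assert recovered == b"TOP SECRET PLANS"
```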

These problems aren’t limited to CTR. My advice: have faith in your IVs, and they’ll have faith in you.

Symmetric Encryption Don’t #4: Don’t forget to authenticate your ciphertexts

Once upon a time cryptographers looked at encryption and authentication as two unrelated operations. Encryption was for protecting the confidentiality of your data, and authentication was used to keep people from tampering with it.*

Nowadays we know that the two are much more tightly linked. You may not care about people tampering with your data, but your encryption scheme very well might. The problem is active attacks. See here and here for just a couple of examples.

To make a long story short, many of the clever attacks on symmetric encryption schemes these days require an attacker to tamper with ciphertexts, then submit them to be decrypted. Even if the decryptor leaks just a tiny bit of information (e.g., is the padding correctly formatted, is the message properly formatted), that can be enough to gradually peel away the encryption and recover the plaintext.

Obviously you don’t want this.

The very elegant solution is to authenticate your ciphertexts, and not just in any willy-nilly fashion. There are basically two approaches that won’t lead to heartburn down the road:

  1. Best: use an authenticated mode of operation, such as GCM, CCM, OCB or EAX. These modes handle encryption and authentication in one go (and they can even authenticate some optional unencrypted ‘associated’ data for you). Best yet, they use a single key.
  2. Almost as good: first encrypt your message using a secure mode of operation (say, CTR), then compute a Message Authentication Code (e.g., HMAC-SHA1) on the resulting ciphertext and its IV. Use two totally different keys to do this. And please, don’t forget to MAC the darn IV! (There’s a sketch of this right after the list.)
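Here’s a minimal encrypt-then-MAC sketch of option 2, using AES-CTR from PyCryptodome and HMAC-SHA256 (the text above mentions HMAC-SHA1; any real MAC works). The function names and keys are my own inventions.

```python
# Encrypt-then-MAC, sketched: AES-CTR for confidentiality, HMAC (over the IV
# *and* the ciphertext) for integrity, with two independent keys.
import os, hmac, hashlib
from Crypto.Cipher import AES

enc_key = os.urandom(16)        # encryption key
mac_key = os.urandom(32)        # separate MAC key -- never reuse enc_key here

def seal(plaintext: bytes):
    nonce = os.urandom(8)                                    # fresh per message
    ct = AES.new(enc_key, AES.MODE_CTR, nonce=nonce).encrypt(plaintext)
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def open_box(nonce: bytes, ct: bytes, tag: bytes) -> bytes:
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):               # verify first
        raise ValueError("MAC check failed")
    return AES.new(enc_key, AES.MODE_CTR, nonce=nonce).decrypt(ct)

n, c, t = seal(b"attack at dawn")
assert open_box(n, c, t) == b"attack at dawn"
```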

What you should not do is apply the MAC to the plaintext. First of all, this won’t necessarily prevent active attacks. Padding oracle attacks, for example, can still be leveraged against a scheme that authenticates the message (but not the padding). Furthermore, even if you MAC the padding, there’s still a slim possibility of timing attacks against your implementation.

Symmetric Encryption Don’t #5: Don’t CBC-encrypt in real time

Let me use this space to reiterate that there’s nothing wrong with CBC mode, provided that you use it correctly. The unfortunate thing about CBC mode is that there are many ways to use it incorrectly.

Knowing this, you shouldn’t be surprised to hear that CBC is the most popular mode of operation.

This ‘don’t’ is really a variant of point #2 above. CBC mode can be insecure when an attacker has the ability to submit chosen plaintexts to be encrypted, and if the encryption is on a live stream of data where the adversary can see the ciphertext blocks immediately after they come out (this is called ‘online’ encryption). This is because the adversary may learn the encryption of the previous plaintext block he submitted, which can allow him to craft the next plaintext block in a useful way.

If he can do this, he might be able to take some other ciphertext that he’s intercepted, maul it, and feed it through the encryptor. This kind of attack is challenging, but given the right circumstances it’s possible to decrypt the original message. This attack is called a blockwise chosen plaintext attack, and it’s essentially what the BEAST attack does.

Symmetric Encryption Don’t #6: Don’t share a single key across many devices

A wise man once said that a secret is something you tell one other person. I’m not sure he realized it, but what he was saying is this: don’t put the same symmetric key into a large number of devices (or software instances, etc.) if you want it to remain secret.

About the fastest way to lose security in a system is to spread a single key broadly and widely. It doesn’t matter if you’ve embedded that key inside of a tamper-resistant chip, buried it in a block of solid concrete, and/or placed it in a locked file cabinet with a sign saying ‘beware of leopard’.

The probability of your system being compromised goes up exponentially with each additional copy of that key. If you’re doing this in your current design, think hard about not doing it.

Symmetric Encryption Don’t #7: Don’t pluralize your keys using XOR

This one is really a flavor of #6, but a more subtle and stupid one.

Key ‘pluralization’ refers to a process where you obtain multiple distinct keys from a single master key, or ‘seed’. Usually this is done using some strong cryptographic function, for example, a pseudo-random function.

This happens all over the place. For example: TLS does it to derive separate MAC and encryption keys from a master secret. But an extreme type of pluralization often occurs in large-scale systems that provide unique keys to a large number of users.

Think of a cellular provider distributing SIM cards, for example. A provider could generate millions of random authentication keys, distribute them to individual SIM cards, and then store the whole package in a back-end database. This works fine, but they’d have to store this database and do a lookup every time someone contacts them to authorize a phone call.

Alternatively, they could start with one short master seed, then pluralize to derive each of the SIM keys on demand. This process would take as input the seed plus some auxiliary value (like the subscriber ID), and would output a key for that user. The advantage is that you no longer need to keep a huge database around — just the tiny initial seed.

This approach is so tempting that sometimes people forget about the ‘strong cryptographic function’ part, and they derive their keys using tools that aren’t so secure. For example, they might just XOR the master seed with the subscriber or device ID.

No, you say, nobody could be that dumb. And yet KeeLoq was. Millions of car keys were provisioned with cryptographic keys that were generated this way. It turns out that if you can extract any one of those per-car keys, and if you know the device’s serial number, you can easily recover the master key — which means you can break into any other car.
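Here’s the difference in a sketch, with invented key sizes and device IDs. The ‘good’ version uses HMAC as the pseudo-random function.

```python
# Key pluralization: XOR (broken) versus a PRF (fine). Sizes and IDs invented.
import os, hmac, hashlib

master_seed = os.urandom(8)

def bad_derive(device_id: bytes) -> bytes:
    # derived = master XOR device_id -- one leaked key gives the master back
    return bytes(m ^ d for m, d in zip(master_seed, device_id))

def good_derive(device_id: bytes) -> bytes:
    # a PRF hides the master: recovering it from derived keys means breaking HMAC
    return hmac.new(master_seed, device_id, hashlib.sha256).digest()[:8]

device_id = b"DEV-0042"                       # public, printed on the device
leaked = bad_derive(device_id)                # extracted from a single device

recovered = bytes(k ^ d for k, d in zip(leaked, device_id))
assert recovered == master_seed               # ...and now every device is open
```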

Symmetric Encryption Don’t #8: Don’t use DES or RC4 or @#(*&@!

Ok, I said this was mostly going to be about block ciphers. DES fits that category, and I hope you know why not to use it. But RC4 also deserves a special mention just for being the world’s most popular dubious stream cipher.

RC4 shows up everywhere. It shows up in products. It shows up in malware. Basically, it shows up anywhere someone needed crypto, but was too lazy to download a copy of AES. Hell, I’ve used it myself — um, for personal reasons, not for work, mind you.

If you use RC4 correctly, it’s probably ok. For now. The problem is twofold. First of all, cryptanalysts are slowly chipping away at it — sooner or later they’re going to make some serious progress.

The second problem is that it’s not always used securely. Why not? You might as well ask why meth labs explode at a disproportionate rate. My guess is that the set of people who use caution when mixing volatile chemicals simply doesn’t overlap well with the set of people who cook methamphetamine. Ditto RC4 and proper usage.

I could waste a lot of time going on about all of this, but instead I’m just going to quote Thomas Ptacek:

if you see a bespoke crypto design, and it dates from after 2000, and it uses RC4, that’s an audit flag.

Now if Thomas says this about RC4, what do you think he’s going to say about your homebrew protocol based on the Russian GOST cipher? That’s right: nothing nice. Don’t let it happen to you.

In Summary

So this is where I leave you. I doubt this list is complete — I’ll try to update it if I think of anything else. At the very least, if we could fix these issues it would knock out a healthy chunk of the silly crypto issues I see on a day to day basis.

Oh, and a pony too.

Ok, so, I’m a little skeptical that all of these problems will go away that easily, but I’d be content with even just one or two of the points. So if you’re designing a new crypto product and could spare a minute just to glance at the above, you would certainly make my day.

Notes:

* Honestly, there was even a certain amount of confusion on this point. If you look at old protocols like Needham-Schroeder, you’ll see that they basically treat encryption as authentication. Don’t do this. Most common modes of operation are malleable, meaning that you can mess with the ciphertext, cut and paste different ones, etc.
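Here’s that malleability point as a sketch (PyCryptodome, invented message): flip bits in a CTR-mode ciphertext and the plaintext flips right along with them, which is why encryption alone is not authentication.

```python
# Malleability: without a MAC, an attacker can edit CTR-mode ciphertexts at will.
from Crypto.Cipher import AES
import os

key = os.urandom(16)
nonce = os.urandom(8)
ct = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(b"PAY BOB $0001")

# XOR in the difference between what was sent and what the attacker wants said.
delta = bytes(a ^ b for a, b in zip(b"PAY BOB $0001", b"PAY BOB $9999"))
tampered = bytes(c ^ d for c, d in zip(ct, delta))

# The receiver decrypts a perfectly well-formed (but forged) message.
print(AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(tampered))  # b'PAY BOB $9999'
```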

Oh my…

Neal Koblitz is not a happy camper:

At Crypto 2011 the Program Chair, Phil Rogaway, explained that the paper [2] was chosen for the Best Paper Award “overwhelmingly” by the Program Committee, which “praised the work for its broad appeal…and its potential impact.”

And, indeed, the treatment of hashed ElGamal encryption in [2] is in some sense a remarkable achievement. Usually one has to leave the sciences entirely if one wishes to find works of scholarship — for example, Michel Foucault’s turgid 760-page three-volume philosophical treatise on sex [14] — that have been so successful in turning something that should be interesting and accessible to everyone into something lengthy, unreadable, and boring.