Thursday, June 21, 2012

A bad couple of years for the cryptographic token industry

SafeNet eToken PRO Anywhere
There was a time just a few weeks ago when it seemed like the CRYPTO 2012 accepted papers list might not be posted in time for, well, CRYPTO 2012. Fortunately the list is public now, which means (1) we've avoided the longest rump session in the history of cryptography, and (2) I get to tell you about a particularly neat paper.

The paper in question is 'Efficient Padding Oracle Attacks on Cryptographic Hardware' by Bardou, Focardi, Kawamoto, Simionato, Steel and Tsay. This is a typically understated academic title for a surprisingly nifty result.

Here's the postage stamp version: due to a perfect storm of (subtle, but not novel) cryptographic flaws, an attacker can extract sensitive keys from several popular cryptographic token devices. This is obviously not good, and it may have big implications for people who depend on tokens for their day-to-day security. If that describes you, I suggest you take a look at this table:
Tokens affected by the Bardou et al. attacks. A checkmark means "vulnerable". (source). 
That's the headline news, anyway. The more specific (and important) lesson for cryptographic implementers is: if you're using PKCS#1v1.5 padding for RSA encryption, cut it out. Really. This is the last warning you're going to get.

So much for the short version. Keep reading for the long one.
What are cryptographic tokens?
If you've ever developed cryptographic software, you know that general-purpose computers aren't the safest place to store keys. Leaving aside stuff like this, all it takes is one nasty software bug to send all your important key material (e.g., TLS server keys, certificate signing keys, authentication keys) into someone else's hands.

Since it seems increasingly difficult to harden standard computers, a popular solution is to simply put the keys somewhere else. The most popular 'somewhere' is onboard a cryptographic co-processor. When the co-processor is expensive and powerful, we often refer to it as a Hardware Security Module (HSM). When it's removable and consumer-grade we call it a 'token'. Most tokens are used for authentication purposes (think SecurID), but they're useful for other things like disk encryption. Modern tokens typically look like USB sticks.

Tokens operate under some of the most challenging conditions you can imagine. The computer they're connected to may be completely pwned. An attacker could steal a token and physically dismember it in an attempt to extract its secrets. None of this is easy to deal with, and manufacturers have invested a lot of time and money getting this right.

Unfortunately, what Bardou et al. are telling us is: maybe they didn't invest enough. Several popular token brands are still vulnerable to cryptographic attacks that have been well-known since the 1990s. And that is bad news.
But this paper is brand new. Why are you talking about 'a bad couple of years'?
While this new paper is a bad result for the token industry, it's not the first bad news we've had about these devices.

Let me explain. You see, while tokens won't let you extract sensitive keys, they make an exception if the sensitive keys are encrypted (aka: 'wrapped'). This makes a lot of sense. How else are you supposed to import/export/back up keys from a token? In theory, this encryption (wrapping) should keep the keys safe even if an attacker has totally compromised your computer.

An older problem -- discovered by a series of researchers in 2009 and 2010 -- is that token manufacturers were careless in the way they implemented the PKCS#11 Cryptographic Token Interface Standard. The spec says that encryption keys are to be used for either (a) encryption/decryption of normal data, OR (b) for wrapping sensitive keys. But never both. If you allow users to perform both operations with a given key, you enable the following attack:
  1. Ask the token to encrypt a sensitive key K and export the ciphertext.
  2. Ask the token to decrypt the ciphertext and hand you the result. 
  3. Congratulations, you have K!
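Just to drive home how simple this is, here's the attack as a minimal Python sketch. The 'session' object and its methods are hypothetical stand-ins for a real PKCS#11 session (in the actual C API you'd call C_WrapKey and C_Decrypt):

    def extract_sensitive_key(session, wrapping_key, sensitive_key):
        # Step 1: ask the token to wrap (encrypt) the sensitive key and hand
        # back the ciphertext. Exporting *wrapped* keys is allowed.
        ciphertext = session.wrap_key(wrapping_key, sensitive_key)

        # Step 2: the same key is (incorrectly) also enabled for ordinary
        # data decryption, so just ask the token to decrypt that ciphertext...
        plaintext = session.decrypt(wrapping_key, ciphertext)

        # Step 3: ...and the 'plaintext' it returns is the raw sensitive key.
        return plaintext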
It's hard to believe that something this 'stupid' actually worked, but for some tokens it did. The whole mess came to a head in 2010 when Bortolozzo, Focardi, Centenaro and Steel automated the process of finding API bugs like this, and were able to 'break' a whole bunch of tokens in one go.

(Token manufacturers were not happy with this, by the way. At CCS 2010, Graham Steel described being invited to a major French token manufacturer where the CEO held him captive and berated him -- sorry, 'gently tried to persuade him' -- not to publish his results. Such are the hazards of responsible disclosure.)

The above issues should be long fixed by now. Still, they raised questions! If token manufacturers can't implement a secure API, how well are they doing with things like, say, complicated cryptographic algorithms? And that's the question this new paper tries to answer.
So what's wrong with the crypto?
What the CRYPTO paper shows is that several commercial tokens (but not all of them!) are vulnerable to practical padding oracle attacks on both symmetric (AES-CBC) and asymmetric (RSA PKCS #1v1.5) encryption. These attacks are a huge problem, since they give an attacker access to keys that otherwise should be safely encrypted outside of the token.

Padding attacks take different forms, but the general intuition is this:
  1. Many encryption schemes apply padding to the plaintext before encryption.
  2. After decrypting a given ciphertext, most implementations check to see if the padding is valid. Some output a recognizable error (or timing differential) when it isn't.
  3. By mauling a legitimate (intercepted) ciphertext in various ways, the attacker can submit many modified versions of the ciphertext to be decrypted. Some of these will have valid padding, others will produce error results. Surprisingly, just these bits of information can be sufficient to gradually reveal the underlying plaintext.
For tokens, you start with an encrypted key you want to learn, and you submit it to the token for import. Normally the token will just accept it, which isn't very helpful. So instead, you modify the ciphertext in some way, and send the mangled ciphertext to be imported. If the token responds with something like 'could not import key: the ciphertext had bad padding', you may be able to run a padding oracle attack.

(Note that it doesn't take an explicit error: if decryption is simply faster when the padding is bad, you've got everything you need.)

The most famous (and efficient) padding oracle attack is Vaudenay's attack on padded CBC-mode encryption, proposed back in 2002. (See here for an idea of what the attack looks like.) The first finding of Bardou et al. is that many tokens are vulnerable to Vaudenay's attack. Yikes.
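If you'd like something more concrete, here's a toy end-to-end demo in Python (using the real 'cryptography' package). Everything here -- the key, the message, the oracle -- is invented for the demo; the point is that one bit of padding feedback per query is enough to recover a plaintext byte:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(16)  # the victim's secret AES key

    def encrypt(plaintext):
        # PKCS#7-pad and AES-CBC-encrypt under the secret key.
        pad_len = 16 - len(plaintext) % 16
        padded = plaintext + bytes([pad_len]) * pad_len
        iv = os.urandom(16)
        enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
        return iv + enc.update(padded) + enc.finalize()

    def padding_oracle(ciphertext):
        # The 'oracle': decrypt, and reveal only whether the padding is valid.
        iv, body = ciphertext[:16], ciphertext[16:]
        dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
        padded = dec.update(body) + dec.finalize()
        n = padded[-1]
        return 1 <= n <= 16 and padded[-n:] == bytes([n]) * n

    def recover_last_byte(iv, block):
        # Maul the last IV byte until the oracle reports valid padding; at
        # that point the mauled plaintext byte is (almost always) 0x01.
        original = iv[15]
        iv = bytearray(iv)
        for guess in range(256):
            iv[15] = guess
            if padding_oracle(bytes(iv) + block):
                # (A full attack also rules out rare false positives, e.g. a
                # decryption that happens to end in 0x02 0x02.)
                return guess ^ 0x01 ^ original
        raise RuntimeError("oracle never accepted")

    ct = encrypt(b"YELLOW SUBMARINE plus some more")
    print(chr(recover_last_byte(ct[:16], ct[16:32])))  # prints 'E'

The real attack just repeats this trick for every byte of every block, working backwards, until the entire plaintext falls out.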

But what about RSA encryption? This is important, since many tokens only support RSA to import secret keys.

Fortunately (for the researchers) RSA encryption also uses padding, and even better: these tokens use a scheme that's long been known to have padding oracle problems of its own. That scheme is known as RSA-PKCS#1v1.5, and it was famously broken by Daniel Bleichenbacher in the late 1990s. As long as these implementations produce obvious errors (or timing differences) when they encounter bad padding, there should be an 'off the shelf' attack that works against them.

But here the researchers ran into a problem. Bleichenbacher's attack is beautiful, but it's too slow to be useful against tokens. When attacking a 1024-bit RSA key, it can require millions of decryption attempts! Your typical consumer-grade token can only compute a few decryptions per second.

No problem! To make their attack work, Bardou et al. 'simply' optimized Bleichenbacher's attack. The new version requires mere thousands (or tens of thousands) of decryption attempts, which means you can unwrap sensitive keys in just a few minutes on some tokens.

This table tells us how long the attacks actually took to run against real tokens:
Timing results for the optimized Bleichenbacher attacks (on 1024-bit RSA encryptions). 'Oracle' indicates the nature of the RSA decryption/padding check implementation. (source)
Is my HSM secure?
If you're an HSM manufacturer I have 'good' news for you. You're ok. For now.

Oh, not because your HSM is secure or anything. Hilariously, the researchers were unable to run their attacks on a commercial HSM because they couldn't afford one. (They can cost upwards of EUR 20,000.) Don't get complacent: they're working on it.
How does the RSA attack work?
For the real details you should see the paper. Here I can offer only the tiniest little intuition.

The Bleichenbacher attack relies on the fact that RSA is homomorphic with respect to multiplication. That means you can multiply a hidden RSA plaintext (the "m" in "m^e mod N") by some known value "s" of your choosing. What you do is compute "m^e * s^e mod N". The decryption of this value is "(m*s) mod N".
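You can check this property for yourself with toy numbers (the key below is the classic textbook example with p=61 and q=53, obviously far too small for real use):

    e, d, N = 17, 2753, 3233           # toy RSA keypair (p=61, q=53)
    m, s = 42, 7                       # hidden plaintext, attacker's multiplier

    c = pow(m, e, N)                   # the victim's ciphertext: m^e mod N
    c_mauled = (c * pow(s, e, N)) % N  # attacker computes m^e * s^e mod N

    assert pow(c_mauled, d, N) == (m * s) % N  # decrypts to (m*s) mod N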

Now consider the details of RSA-PKCS#1v1.5. To encrypt a legitimate message, the encryptor first pads it with the following bytes, and treats the padded result as the integer "m" to be RSA encrypted.

0x 00 02 { at least 8 non-zero random bytes } 00 { message }
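In code, the padding step might look something like this sketch (assuming a 1024-bit modulus, so the padded message is k = 128 bytes):

    import os

    def pkcs1_v15_pad(message, k=128):
        # 0x00 0x02 || at least 8 nonzero random bytes || 0x00 || message
        pad_len = k - 3 - len(message)
        assert pad_len >= 8, "message too long for this modulus"
        # Map random bytes into 1..255. (Slightly biased; a careful
        # implementation would re-sample any zero bytes instead.)
        nonzero = bytes(b % 255 + 1 for b in os.urandom(pad_len))
        return b"\x00\x02" + nonzero + b"\x00" + message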

So you've intercepted a valid, PKCS#1v1.5 padded RSA ciphertext, and you have access to a decryptor who knows the secret key. The trick now is to 'mangle' the legitimate ciphertext in various clever ways and submit these new results to the decryptor.

The basic approach is to multiply the ciphertext by different chosen values "s^e mod N" (for known "s") until eventually you find one that does not produce a padding error. This means that the decrypted value "(m*s mod N)" is itself a PKCS-padded message. This seems inconsequential, but actually it tells you quite a lot. Presumably you know "s", since you chose it. You also know that "m*s mod N" is an integer of a very specific form, one that satisfies the padding checks. By adaptively choosing new "s" values and submitting many, many ciphertexts for decryption, these tiny bits of information will allow you to 'zero in' on the underlying plaintext.
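The heart of the attack is a dead-simple search loop. Here's a sketch, where 'oracle(c)' is a stand-in for submitting a ciphertext to the decryptor and watching for the padding error:

    def find_conformant_s(c, e, N, oracle, s_start=2):
        # Find the smallest s for which (m * s) mod N is itself
        # PKCS-conformant. Each success narrows the set of intervals that
        # must contain m; the attack repeats this with adaptively chosen s
        # values until m is pinned down exactly.
        s = s_start
        while not oracle((c * pow(s, e, N)) % N):
            s += 1
        return s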

The authors of this paper describe several optimizations to Bleichenbacher's attack. What I find most interesting is the way they exploited differences in the way that tokens actually implement their padding checks. It turns out that the nature of this implementation makes a huge difference to the efficiency of the attack.

Five possible 'implementations' are described below.
  1. FFF: padding is 'ok' only if correctly padded and plaintext is of a specific length (e.g., it's a 128-bit AES key)
  2. FFT: padding is 'ok' only if correctly padded, but plaintext can be any length.
  3. FTT: same as above, but also allows 0s in the "non-zero random bytes".
  4. TFT: same as above, but 'ok' even if there are no zeros after the first byte.
  5. TTT: padding is 'ok' as long as it starts with '0x 00 02'.
These all look quite similar, but have very different practical implications!
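To make the distinctions concrete, here's roughly what two of these checks might look like in code: a strict variant (approximately 'FFT') and the laxest one ('TTT'). This is a sketch, assuming a 128-byte modulus:

    def check_fft(padded, k=128):
        # Requires the whole structure: 00 02, at least 8 nonzero bytes,
        # then a 00 separator (but any plaintext length is accepted).
        if len(padded) != k or padded[:2] != b"\x00\x02":
            return False
        try:
            sep = padded.index(0, 2)   # first 0x00 after the header
        except ValueError:
            return False               # no separator at all
        return sep >= 10               # at least 8 nonzero padding bytes

    def check_ttt(padded, k=128):
        # Laxest variant: only the first two bytes are ever examined.
        return len(padded) == k and padded[:2] == b"\x00\x02"

A randomly mauled plaintext begins with '0x 00 02' only about once in every 2^16 attempts; check_ttt accepts every one of those, while check_fft imposes extra conditions that many of them fail. Little differences like that compound over thousands of queries.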

You see, intuitively, the Bleichenbacher attack only 'works' if sometimes the decryptor says 'hey, this mauled ciphertext has good padding!'. As a general rule, the more often that happens, the faster the attack works.

It goes without saying that the TTT padding check is a lot more likely to produce 'good padding' results than the 'FFF', and correspondingly for the intermediate ones. FFF is so challenging that it's basically impractical. The 'good' (uh, 'bad') news is that many tokens tend towards the more useful implementations. And as you can see in the table below, little details like this have huge impacts.
Total number of decryption queries required to decrypt an RSA-PKCS#1v1.5-encrypted symmetric (AES) key.  'Original algorithm' (blue box) shows the mean/median performance of the standard Bleichenbacher attack. 'Modified algorithm' (red box) is the mean/median performance of the new, optimized algorithm. Note that the specific implementation of the padding check (FFF/TTT etc.) makes a huge difference in the cost. (source) 
What can I do to protect my own code from padding oracle attacks?
If you're using symmetric encryption, that's easy. Simply use authenticated encryption, or put a MAC on all of your ciphertexts. Do this correctly and Vaudenay padding oracles will not trouble you.
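For the symmetric case the fix really is that easy. Here's a minimal encrypt-then-MAC sketch using only the Python standard library; 'cbc_encrypt' and 'cbc_decrypt' are assumed helpers (any correct CBC implementation will do):

    import hmac, hashlib

    def seal(enc_key, mac_key, plaintext):
        # Encrypt first, then MAC the entire ciphertext.
        ciphertext = cbc_encrypt(enc_key, plaintext)
        tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        return ciphertext + tag

    def unseal(enc_key, mac_key, sealed):
        ciphertext, tag = sealed[:-32], sealed[-32:]
        expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        # Check the MAC (in constant time) BEFORE going anywhere near the
        # padding. A forged ciphertext produces exactly one, uniform error,
        # so there's no padding oracle left to query.
        if not hmac.compare_digest(tag, expected):
            raise ValueError("invalid ciphertext")
        return cbc_decrypt(enc_key, ciphertext)

Note the two independent keys: the usual advice is to never reuse the encryption key as the MAC key.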

RSA encryption is more difficult. The 'best practice' in implementing RSA is: don't implement RSA. Other people have done it better than you can. Go find a good implementation of RSA-OAEP and use that. End of story.

Unfortunately, even with RSA-OAEP, this can be difficult. There are attacks on OAEP encryption too (some are mentioned in this paper). And if you're working in hardware, life is particularly difficult. Even small timing differentials can leak details about your padding check. If this describes your use case: find a professional and be very, very careful.
Are there any broad security lessons, any wisdom, to be drawn from this result?
Usually the lesson I glean from these attacks is: people make mistakes. Which is barely a lesson, and it's certainly not wisdom. For some of the issues in this paper, namely the symmetric padding oracle attacks, that's still about what I get. Manufacturers made unforced errors.

But the RSA-PKCS#1v1.5 attacks are, to me, a bit more meaningful. I run into bad implementations of this obsolete standard all the time. Most of the time when I bring it up, nobody really cares. The typical response is something like: it's called the 'million message attack' for a reason. There's no way anyone would run that thing against our system.

Bruce Schneier says 'attacks only get better, they never get worse'. In my opinion, the lesson of this paper is that attacks not only get better, but if you go around tempting fate, they're quite capable of getting better by precisely the amount necessary to break your (high-value) system. You should never use a broken primitive just because the 'known' attack seems impractical. 

And while this may not be 'wisdom' either, I think it's advice to live by.

Monday, June 4, 2012

Flame, certificates, collisions. Oh my.

Update 6/6: Microsoft has given us more details on what's going on here. If these collision attacks are truly novel, this tells us a lot about who worked on Flame and how important it is. 

Update 6/7: Yes, it's officially a novel MD5 collision attack, with the implication that top cryptographers were involved in Flame's creation. We truly are living in the future.

See detailed updates (and a timeline) at the bottom of this post.

If you pay attention to these things, you've heard that there's a new piece of government-issued malware making the rounds. It's called Flame (or sometimes Skywiper), and it's basically the most sophisticated -- or most bloated -- piece of malicious software ever devised. Kaspersky gets the credit for identifying Flame, but the task of tearing it to pieces has fallen to a whole bunch of different people.

Normally I don't get too excited about malware, cause, well, that kind of thing is somebody else's problem. But Flame is special: if reports are correct, this is the first piece of malware ever to take advantage of an MD5 certificate collision to enable code signing. Actually, it may be the first real anything to use MD5 certificate collisions. Neat stuff.

What we know is pretty sketchy, and is based entirely on the information-sparse updates coming from Microsoft's security team. Here's the story so far:

Sometime yesterday (June 3), Microsoft released an urgent security advisory warning administrators to revoke two Intermediate certificates hanging off of the Microsoft root cert. These belong to the Terminal Services Licensing service. Note that while these are real certificates -- with keys and everything -- they actually have nothing to do with security: their sole purpose was to authorize, say, 20 seats on your Terminal Services machine.

Despite the fact that these certs don't serve any real security purpose, and shouldn't be confused with anything that matters, Microsoft hung them off of the same root as the real certificates it uses for, well, important stuff. Stuff like signing Windows Update packages. You see where this is going?

Actually, this shouldn't have been a real problem, because these licensing certs (and any new certificates beneath them) should have been recognizably different from code-signing certificates. Unfortunately, someone in Product Licensing appears to have really screwed the pooch. These certificates were used to generate Licensing certificates for customers. And each of those certificates had the code signing bit set.

The result? Anyone who paid for a Terminal Services license could (apparently) sign code as if they were Microsoft. If you're wondering what that looks like, it looks like this.

This is obviously a bad thing. For many reasons. Mikko Hypponen points out that many organizations whitelist software signed by Microsoft, so even if institutions were being paranoid, this malware basically had carte blanche.

Ok, so far this is really bad, but just a garden-variety screwup. I mean, a huge one, to be sure. But nothing really interesting.

Here's where that changes. Just today, Microsoft released a new update. This is the new piece:
The Flame malware used a cryptographic collision attack in combination with the terminal server licensing service certificates to sign code as if it came from Microsoft. However, code-signing without performing a collision is also possible.
Now, I'm not sure why the 'collision attack' is necessary if code-signing is possible without a collision. But who cares! A collision attack! In the wild! On top-secret government-issued Malware! Dan Brown couldn't hold a candle to this.

If we look at the Licensing certificates in question, we do indeed see that at least one -- created in 2009 -- uses the MD5 hash function, something everyone knows is a bad idea:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            3a:ab:11:de:e5:2f:1b:19:d0:56
        Signature Algorithm: md5WithRSAEncryption
        Issuer: OU=Copyright (c) 1997 Microsoft Corp., OU=Microsoft Corporation, CN=Microsoft Root Authority
        Validity
            Not Before: Dec 10 01:55:35 2009 GMT
            Not After : Oct 23 08:00:00 2016 GMT
        Subject: C=US, ST=Washington, L=Redmond, O=Microsoft Corporation, OU=Copyright (c) 1999 Microsoft Corp., CN=Microsoft Enforced Licensing Intermediate PCA
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
And so... Well, I don't really know. That's where we run out of information. And end up with a lot of questions.

For one thing, why did the Flame authors need a collision? What extra capabilities did it get them? (Update: the best theory I've heard comes from Nate Lawson, who theorizes that the collision might have allowed the attackers to hide their identity. Update 6/6: The real reason has to do with certificate extensions and compatibility with more recent Windows versions. See the end of this post.)

More importantly, what kind of resources would it take to find the necessary MD5 collision? That would tell us a lot about the capabilities of the people -- er, government contractors -- who write malware like this. In 2008 it took one day on a supercomputer, i.e., several hundred PS3s linked together.

And just out of curiosity, what in the world was Microsoft doing issuing MD5-based certificates in 2009? I mean, the code signing bit is disastrous, but at least that's a relatively subtle issue -- I can understand how someone might have missed it. But making MD5 certificates one year after they were definitively shown to be insecure, that's hard to excuse. Lastly, why do licensing certificates even need to involve a Certificate Signing Request at all, which is the vector you'd use for a collision attack?

I hope that we'll learn the answers to these questions over the next couple of days. In the meantime, it's a fascinating time to be in security. If not, unfortunately, a very good time for anyone else.

***

Update 6/5: For those who aren't familiar with certificate collision attacks, I should clear up the basic premise. Most Certificate Authorities sign a Certificate Signing Request (CSR) issued by a possibly untrustworthy customer. The idea of an MD5 collision-finding attack is to come up with two different CSRs that hash to the same thing. One would be legitimate and might contain your Terminal Services licensing number. The other would contain... something else. By signing the first CSR, Microsoft would be implicitly creating a certificate on the second batch of information.

Finding collisions is a tricky process, since it requires you to muck with the bits of the public key embedded in the certificate (see this paper for more details). Also, some CAs embed a random serial number into the certificate, which really messes with the attack. Microsoft did not.

Finally, some have noticed that Microsoft is still using a root certificate based on MD5, one that doesn't expire until 2020 and hasn't been revoked. What makes this ok? The simple answer is that it's not ok. However, Microsoft probably does not sign arbitrary CSRs with that root certificate, meaning that collision attacks are not viable against it. It's only the Intermediate certs -- the ones that actually sign untrusted CSRs -- that you need to worry about, today.

***

Update 6/6: Microsoft has finally given us some red meat. Short summary: the Terminal Services Licensing certs do work to sign code on versions of Windows prior to Vista, with no collision attack needed. All in all, that's a heck of a bad thing. But it seems that more recent versions of Windows objected to the particular X.509 extensions that were in the TS licensing certs. The government (or their contractors) therefore used a collision attack to make a certificate that did not have these extensions. From what I'm reading on mailing lists, the collision appears to be "new, and of scientific interest".

If this pans out, it's a big deal. Normally the government doesn't blow a truly sophisticated cryptanalytic attack on some minor spying effort. Getting MD5 collisions up and running is a big effort; it's not something you can do from a cookbook. But developing novel collision techniques is a step beyond that. I take this to mean that Flame's authors were breaking out the big guns, which tells us something about its importance to our government. It also tells us that the creation of Flame may have involved some of the top mathematicians and cryptographers in (our?) intelligence services. This is an important datapoint.

***

Update 6/7: It looks like it does pan out. Marc Stevens -- one of the authors of the earlier MD5 certificate collision papers -- confirms that Flame's certificate uses a novel collision-finding technique.
"Flame uses a completely new variant of a ‘chosen prefix collision attack’ to impersonate a legitimate security update from Microsoft. The design of this new variant required world-class cryptanalysis"
We can only speculate about why this would be necessary, but a good guess is that Flame is old (or, at the very least, its crypto techniques are). If the collision technique was already in the works prior to 2009 (and why wouldn't it be?), the Stevens et al. result wouldn't have had much relevance to it. For those who like details, a (very incomplete) timeline looks something like this:
  1. Mid-1990s: MD5 is effectively obsolete. Switch to a better hash function!
  2. August 2004: Wang and Yu present first ('random'), manually-found collision on MD5.
  3. 2004-2007: Collisions extended to 'meaningful' documents. Lots of stuff omitted here.
  4. May 2007: (CWI) Stevens et al. publish a first impractical technique for colliding certificates.
  5. Late 2008: (CWI) Stevens et al. develop a practical certificate collision.
  6. December 2008: Vendors notified.
  7. February 2009: Paper submitted to CRYPTO.
  8. Summer 2009: Source code released.
Even assuming that the CWI team had perfect security with their result, conference program committees are hardly built for secrecy -- at least, not from governments. So it's reasonable to assume that the Stevens et al. technique was known by February 2009, possibly earlier. Which means that the Flame developers may have had their own approach under development sometime before that date.

The really interesting question is: when did secret collision research get ahead of the public, academic stuff? Post-2007? Post-2004? Pre-2004? I doubt we'll ever know the answer to this question. I sure would love to.

Update 6/12: Alex Sotirov has a great Summercon presentation with many of the details. The word from those who know is: don't take his $20,000 figure too literally. We know nothing about the techniques used to find this collision.

Sunday, June 3, 2012

Posts so far

The problem with blogs is that, well, they're weblogs. The 'good' posts dribble off the bottom, where they get mixed in with the bad, and nobody ever sees them again. The more crap I write, the worse this problem is.

To fight this -- and to prevent myself from writing the same post over and over again -- I thought it might be helpful to compile a list of a few posts that aren't too embarrassing. If you're new here, you can treat this as a table of contents to this blog.

(Hey, there are things going on in the world! The CRYPTO list of accepted papers has finally been published! There are neat attacks on cryptographic tokens! I want to write about all of it, but I just don't have time today. So please accept these re-runs for now, and hopefully I'll have new content soon.)

On the mess that is our public-key infrastructure:
  1. The Internet is broken, can we please fix it? On Trustwave & MITM attacks.
  2. TACK, a proposal for dynamically 'pinning' certificates.
High-level intro posts:
  1. It's the end of the world as we know it, and I feel fine. Post-quantum crypto from 30,000 feet.
  2. What is the random oracle model and why should I care? An early series, a little embarrassing.
  3. Format preserving encryption. Or: how to encrypt a credit card number with AES.
  4. What's TLS Snap Start? and So long False Start. On two (now withdrawn) TLS extensions.
  5. The future of electronic currency. On anonymous e-cash.
  6. Offline security through CAPTCHAs. A neat old idea for preventing dictionary attacks.
  7. Poker is hard, especially for cryptographers. All about mental poker.
  8. Fully-Homomorphic Encryption. Unfortunately this is still unfinished...
How to use cryptography (in)securely:
  1. How (not) to use symmetric encryption. A few of the worst pitfalls.
  2. What's the deal with RC4? A history of attacks on an old stream cipher.
  3. How to choose an authenticated encryption mode. Very important!
  4. Random number generation, an illustrated primer. A look under the hood.
  5. Surviving a bad RNG. What to do if your (P)RNG isn't carrying its weight. 
  6. Circular security. A wonkier, more theoretical subject.
  7. On multiple encryption. Are you safer if you encrypt twice?
Crypto attack(s) of the week:
  1. On the BEAST attack. Note: written before the details were made public.
  2. XML Encryption. Why you should authenticate your ciphertexts.
  3. Side-channel attacks on DESFire. Neat.
  4. Datagram TLS. Alfardan & Paterson show that timing attacks are (still) practical.
  5. 2011 Redux. A quick summary of a whole year.
  6. Satellite phone encryption is terrible. Attacks on two satphone ciphers. 
  7. The story of EAX(prime). And why security proofs are like Knight Rider.
  8. A tale of two patches. Analyzing two recent OpenSSL bugs.
Rants:
  1. Digital Fortress: I read it so you don't have to. Dan Brown embarrasses cryptography.
  2. Bram Cohen corrected. In which I randomly flame Bram Cohen.
  3. Bram Cohen corrects me? Bram turns out to be a good sport.
  4. Why Antisec matters. The security industry is a joke?
When I thought this blog was a book:
  1. Introduction
  2. Where things fall apart: Primitives, Protocols