Friday, September 30, 2011

Bram Cohen Corrected

Update 11/24/11: Bram has responded to my criticisms in this post. See the comments section below for his thoughts and my response. Also, see this new post for my more detailed response.

I was innocently Googling the title of my blog (yes, I do this, it's sad) when I came across this May 2011 post by Bram Cohen entitled "Practical Cryptography Corrected".

Ostensibly the post is a criticism of Bruce Schneier's book Practical Cryptography.  Instead, it's proof that inventing the world's most useful peer-to-peer protocol does not make you a cryptographer.  More worrisome than that, it's an example of someone with a big megaphone using it to give really bad advice.

Bram's post has ten points, of which four are (somewhat) problematic.  Let's take them one at a time.
Bram's Practical Advice #1: For an RSA exponent, you should always use 2. Technically that's Rabin-Williams, and requires slightly different implementation, but that actually works in its favor. Rabin-Williams has a reduction to factoring, RSA does not.
No, no, no.  A thousand times no.

If you're going to encrypt with RSA, you should encrypt with RSA.  That means using a standard, well-tested RSA implementation with the traditional RSA public exponent (e = 0x10001).*
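
To make that concrete, here is a minimal sketch of what "just use standard RSA" looks like, using the Python cryptography package (the library choice is my own assumption; any well-reviewed implementation is fine). Note the default public exponent of 65537 (0x10001) and the OAEP padding:

    # Sketch only: standard RSA key generation and OAEP encryption.
    # Assumes the 'cryptography' package is installed.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"attack at dawn", oaep)
    plaintext = private_key.decrypt(ciphertext, oaep)
    assert plaintext == b"attack at dawn"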

Unless you're an expert, do not go switching your exponents around.  Rabin-Williams (public exponent 2) looks like RSA, but it's a very different cryptosystem --- both in how decryption works and in what goes wrong if you screw up your implementation.

For one thing, when you implement Rabin-Williams incorrectly, it becomes vulnerable to a chosen ciphertext attack that completely recovers the scheme's secret key.  RSA does not have this problem.  This issue can be managed through skillful use of padding**, but your typical from-scratch implementer is not skillful.  Even bad implementations of good padding can lead to problems.  Why chance it?

Which brings me to my first law: The probability that your RSA ciphertext will be broken via some number-theoretical attack is vanishingly small.  The probability that it will be broken because you screwed up your implementation, on the other hand, is high.  All current approaches to solving the RSA problem work by attacking the factoring problem anyway, so it's unclear what advantage we're buying ourselves here.
Bram's Practical Advice #2: You should always do encryption as a layer outside of authentication.
Cryptographers have been arguing the order of encryption and authentication for nearly thirty years.  Should you encrypt first, then apply authentication (a MAC)?  Or should you authenticate first, then encrypt?

By 2000 we had basically put this question to bed.  The recommended best practice?  Always encrypt first (with a strong cipher and mode of operation), then apply a MAC to the resulting ciphertext.  This isn't just easy to remember, it's provably secure against adaptive chosen ciphertext attacks.  That means you're protected against very real issues like padding oracle attacks.  Bram's technique doesn't give you this, and that can spell disaster.
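
For illustration, here's a minimal encrypt-then-MAC sketch in Python (the library choice, key sizes and message layout are my own assumptions, not a vetted design): encrypt with AES-CTR, MAC the nonce plus ciphertext with HMAC-SHA256, and verify the MAC before decrypting anything.

    # Sketch only: encrypt first, then MAC the ciphertext (and nonce).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import hashes, hmac

    enc_key, mac_key = os.urandom(32), os.urandom(32)   # two independent keys

    def seal(plaintext):
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        tag = hmac.HMAC(mac_key, hashes.SHA256())
        tag.update(nonce + ct)                           # MAC covers nonce and ciphertext
        return nonce + ct + tag.finalize()

    def open_(sealed):
        nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
        h = hmac.HMAC(mac_key, hashes.SHA256())
        h.update(nonce + ct)
        h.verify(tag)                                    # raises if anything was tampered with
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
        return dec.update(ct) + dec.finalize()

    assert open_(seal(b"hello")) == b"hello"
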
Bram's Practical Advice #3: For an encryption mode, you should always use CTR, and always use a nonce of zero, and never reuse keys.
Bram's Practical Advice #4: You should never use the same key for both encryption and authentication. If you need both encryption and authentication, you should use two keys. One for encryption, and one for authentication.
This one's a twofer, and technically none of it is bad advice.  I'm a fan of CTR mode, and zero IVs are just fine in CTR mode (but never in CBC!).

I have only two points to make: first, why not use a random IV?  What's going to happen to you if you hard-code a zero IV, and then the next (idiot) developer switches your mode of operation to CBC?  This shouldn't happen, but it might.

But more generally, my advice is not to use CTR at all.  Use a provably-secure authenticated mode of operation like CCM, OCB or (best) GCM, in the way recommended by standards.  These modes handle both encryption and authentication for you, and better yet --- they all use a single key (by design).  You're much better off using these standards than hand-rolling your own encrypt/MAC approach (especially if, per Bram's Advice #2, you plan on doing it wrong).
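
Here's roughly what that looks like with AES-GCM and the Python cryptography package (again, a sketch under my own assumptions rather than a drop-in recommendation):

    # Sketch only: single-key authenticated encryption with AES-GCM.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)

    nonce = os.urandom(12)                       # 96-bit nonce; never reuse with the same key
    ct = aead.encrypt(nonce, b"attack at dawn", b"header")   # ciphertext plus auth tag
    pt = aead.decrypt(nonce, ct, b"header")                  # raises InvalidTag on tampering
    assert pt == b"attack at dawn"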

In Summary

I have a lot of respect for Bram, and if I come off a bit critical in the post above, it isn't because I dislike the guy.  In fact, many of his remaining points are good.

But the only thing worse than bad advice is bad advice from a voice of authority.  And Bram has a lot of authority, which could lead people to trust him over some goofy, non-famous cryptographer (or even Bruce Schneier, for some crazy reason).

Fixing people's cryptographic mistakes has given me a good living so far, but that doesn't mean I like it.  So please, please don't listen to Bram.  Do things right, follow standards, and most importantly: if you don't know exactly what you're doing, don't do it at all.

Notes:

* I initially wrote "e = 100001" which is totally bogus.  Goes to show you shouldn't trust a blog post.  Thanks to GSBabil.

** Use of padding, incidentally, completely breaks the reduction to factoring unless you conduct the proof in the Random Oracle Model.  And as I discuss here, there are some serious problems with that.

Thursday, September 29, 2011

What is the Random Oracle Model and why should you care? (Part 1)

This is the first in a series of posts on the Random Oracle Model.  This might get a little wonky for some people's tastes, so if you're not interested in provable security, no problem.  I'll be back with more on software and physical security once I get this out of my system.

An oracle.
For the most part I like to talk about how cryptography gets implemented.  But implementation doesn't matter a lick if you're starting with insecure primitives.

It happens that I'm scheduled to teach a class today on provable security, and something called the "Random Oracle Model".  While getting my thoughts together, it occurred to me that a) this subject might be of interest to people who aren't my grad students, and b) there doesn't seem to be a good "layperson's" introduction to it.

So at the risk of alienating my tiny readership, I'm going to take a break from our regularly scheduled "how cryptographic systems go bad" programming to talk about this wonkish subject --- which, as we'll see in a bit, is itself just another flavor of how crypto goes wrong.

Hash Functions for Dummies

Before we talk about Random Oracles, we first need to discuss hash functions.  These are functions that take a (potentially) long input, and 'hash' it down into a fixed-size value that we refer to as a Message Digest.

We use hash functions a lot in cryptography.  They get the most press for turning up in digital signature schemes, but they also make an appearance in many encryption schemes.  In fact, throughout the field, it's pretty hard to throw a stone without hitting one.

For the most part, a cryptographic hash function has to satisfy at least some of the following properties.  First of all, we expect it to be "one way", meaning that it's hard to 'invert' the function, i.e., to find the original message given only a Message Digest.

Secondly, a hash function should be "collision resistant".  In other words, it should be hard to find two different messages (M, M') such that Hash(M) == Hash(M').  This property is very important for digital signatures, since we typically don't sign the message directly, we sign a hash of the message.  If an adversary can find two messages that hash to the same value, then a signature on (the hash of) one message is also a signature on the other.  In practice this can lead to bad consequences.

Sometimes we demand even more from our hash functions.  The tricky part is knowing how much more we can ask.  To illustrate, let me give an example.

Encryption in Delphia

Imagine that you live in Delphia, a nation that bans all encryption algorithms.  Despite this, you have secrets to keep.  You notice that your government has not banned hash functions.  This gives you an idea: perhaps you can 'hack' an encryption scheme out of a hash function.

This isn't crazy.  The outputs of many hash algorithms look pretty random.  Maybe you could use a hash function to build a stream cipher.  Basically, you'll use a hash function to hash a secret key (along with some kind of counter value); this will result in a stream of 'random looking' bits which you can then XOR with your plaintext.
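
Here is a sketch of what our Delphian might write, using SHA-256 from Python's standard library (illustration only; whether this is actually secure is exactly the question at hand):

    # Sketch only: a hash-based "stream cipher" that hashes (key || counter)
    # to make a keystream and XORs it with the plaintext.
    import hashlib, os

    def hash_stream_encrypt(key, counter_start, plaintext):
        out, counter = bytearray(), counter_start
        for i in range(0, len(plaintext), 32):
            block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            chunk = plaintext[i:i+32]
            out += bytes(p ^ k for p, k in zip(chunk, block))
            counter += 1
        return bytes(out)

    key = os.urandom(32)
    ct = hash_stream_encrypt(key, 0, b"meet me at the harbor at midnight")
    pt = hash_stream_encrypt(key, 0, ct)    # XOR with the same keystream decrypts
    assert pt == b"meet me at the harbor at midnight"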

The question is: would a "one-way" and "collision-resistant" hash function be sufficient to ensure that this scheme is secure?  I'll give you a blunt hint: no.  Technically neither of these properties implies that the output of a hash function be 'random looking'.  You can conceive of hash functions that meet those guarantees, and yet produce outputs that would be useless for building a stream cipher (they might, say, contain long strings of predictable values).  If you use these to build your cipher, you'll be terribly disappointed.

Spherical horses

When cryptographers first started pondering problems like the one above, they asked themselves what they wanted from an "ideal" hash function.  To a mathematician, the answer turned out to be obvious.  They wanted it to behave like a random function.  That term has a very specific mathematical definition; for the purposes of this discussion we'll use an equivalent one that's easier to talk about.

Imagine that we wanted to implement an ideal hash function.  Instead of designing a small algorithm that uses confusion and diffusion (as we do with most real hash functions), we might instead create a big hard-coded table.  The table would have the input messages in one column, and the corresponding outputs in the other.

The important thing about this table is that every single output value is an independent, random string.

       Input (binary)        Output
       0                     Totally random bitstring 1
       1                     Totally random bitstring 2
       00                    Totally random bitstring 3
       01                    Totally random bitstring 4
       10                    Totally random bitstring 5
       11                    Totally random bitstring 6
       and so on...

Obviously the full table for a useful hash function would be pretty big.  How big?  Well, SHA1 accepts messages up to 2^64 bits in length.  So the answer is: too big to fit in this blog.  In fact, there would be more entries in the full table than there are atoms in the universe.

Even if we had room to store such a table, the process of hashing --- looking up an input and finding the output --- would be awfully time consuming.  If we stored our hash table on the tape of a Turing machine (the quintessential computer), just moving the tape head to the right place would take (on average) a number of time steps that's exponential in the size of the input.

From time to time when I mention all of this in my intro course, some ambitious student tries to come up with a clever way to shrink the table down into something that we can carry around.  But here's the problem: the output strings are perfectly random.  Finding a way to represent them compactly would be equivalent to compressing random data.  Unfortunately, information theory tells us that this is a big no-no.

Doubling down

Let's take stock for a moment.  So far I hope that I've convinced you of at least two things.  First, cryptographers would really like their hash functions to behave like random functions.

Second, "real" hash functions can't be random functions, because that would be totally unworkable and impractical.  If you look at practical hash functions like SHA1, SHA512 and RIPEMD160 --- all of which have nice, short implementations consisting of a few KB of code --- it should be obvious that these are not random functions, no matter how 'random' the outputs look.

Cambridge, Massachusetts, 10:35am

So if we can't use random functions in practice, what about an alternative approach?  Perhaps we could model our hash functions as random functions, just for the purposes of the security proof.  Then in real life we'd substitute in SHA or RIPEMD or some practical hash function.  It's not totally clear that the security proof would still mean anything, but at least we'd be doing something.

I'll describe a scene as Hollywood would present it: an unrealistically well-lit office at MIT.  A noted cryptographer, played by Steve Buscemi, has just proposed to model hash functions as random functions.  His colleague, played by Kate Winslet, sets him straight.

"Can't do it," she explains, shaking her head sadly, "it's not just inconvenient that it takes exponential time to evaluate a random function.  If we allowed this sort of hashing in our security proof, we would have to let the parties --- including the Adversary --- run for an exponential number of time steps.  But we can't do that.  Our entire security proof is based on the idea that the parties can only perform limited computations, specifically those that can be conducted in polynomial-time.  If let the parties peform arbitrary exponential-time computations, then adversary could do anything: he could even brute-force the key."

(Hollywood would probably punch up this dialog a bit.)

The breakthrough comes from a passing janitor (Matt Damon, reprising his role from Good Will Hunting): "What if you just create an imaginary world where everyone is limited to polynomial-time computations, but there's a special exception for hashing?  Hashing, and hashing alone, doesn't take any time at all.  When you try to hash something, the clock stops."

Towards a paradigm

And so this is where we are.  The perfect hash function would be a random function.  But random functions are impractical --- we can't use them in real life, or even in our security proofs without breaking them.

Cryptographers are stubborn people, and we have good imaginations.  So we've come up with a daydream, a way to pretend that random functions are practical --- just for the purposes of our security proof.

The implications of this daydream turn out to be both wonderful and terrible.  On the one hand, it allows us to prove the security of schemes that can't be proven in other ways.  Furthermore, we can do it very efficiently.  On the other hand, it leads to security proofs that are, in a technical sense, totally meaningless.

After all, we will ultimately have to implement the scheme with a real hash function like SHA, which is decidedly not random.  What does a random oracle security proof buy us then?  This is one of the big questions that has plagued cryptographers since this idea came about.

And if you think that this is all academic, you'd be wrong.  The security proofs for some of the most widely-used cryptographic primitives are set in this model: this includes most RSA signing and encryption, as well as the Elgamal signature scheme upon which DSA is based.

If you take nothing else away from this post, you should notice that the schemes listed above are important.  If we're doing something funny in the security proofs, people should understand what it is.

In my next post I'll actually explain what a "random oracle" is, and how it relates to the crazy assumptions we made above.  I'll also explain how we can prove things in this model, and talk about why cryptographers love to hate it so much.

For the next post in this series, click here.

Oldtimers vs. whippersnappers (code optimization edition)

I can't even begin to sum this up.  The reply is the best part.

Saturday, September 24, 2011

Where Things Fall Apart: Protocols (Part 2 of 2)

This is the fourth installment in a series on cryptographic engineering.  You can see the previous posts here:


Test Drive

Your vehicle ignition system is the result of a Darwinian arms race between the people who build cars -- and those who like to drive them.

Vehicle ignition switches through history.  From left: vintage floor-mounted ignition button; 60s-era cylinder lock; 90s/2000-era high security key; 2011 dash-mounted ignition button.
The very first electronic vehicle ignition was nothing more than a switch that completed an electrical circuit. This worked fine in small towns and out on the farm, but things weren't so simple in the big city. So manufacturers adapted, first adding a mechanical lock cylinder, then hardening the wiring. This worked for a while, until inevitably the thieves got smarter. Worse, at this point the answer wasn't so obvious: ignition lock technology stagnated. By the late 1980s and early 1990s, vehicle theft was a multi-billion dollar industry.

A few luxury manufacturers tried to improve the physical security of their locks using high-security keys and non-standard templates. For most manufacturers, however, there was already a more promising approach at hand. Cars themselves were becoming reliant on microcontrollers for engine control. Why not use a digital lock?

The result is the vehicle immobilizer. A typical first-gen immobilizer used a small chip embedded into the head of the car key. This chip had a single purpose: when the driver inserted the key into the lock cylinder, it would squirt out a code (or serial number), which could be received by an antenna in the lock cylinder. If this code matched what the vehicle expected to see, the engine would start. Expressed as a protocol, the transaction looked like this:

Immobilizers effectively shut down traditional hotwiring and lock-picking. But they had a fatal flaw that criminals soon discovered. Since the code never changed, someone with the right equipment could eavesdrop on the communication (or borrow your keys), and later replay it to the car. This sounds complicated, but quickly became practical thanks to inexpensive devices called "code-grabbers".

Once again manufacturers adapted. The next generation of immobilizers dropped the fixed code in favor of a simple challenge/response authentication protocol. In this approach, the immobilizer chip and car share a cryptographic key of some sort. When you insert your car key, the car generates a random "challenge" number and sends it to the key. The chip in your car key uses the cryptographic key to compute a response based on the challenge. This defeats code-grabbers, since the key itself never goes over the air, and the challenge always changes.

Challenge response protocol between vehicle and immobilizer key.  The key and car share a deterministic cryptographic algorithm F() and a secret key. The car computes F(key, challenge) and compares it to the response value.
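
To make the flow concrete, here is a sketch of the exchange in Python. HMAC-SHA256 stands in for the transponder's cipher F(), which in real hardware is proprietary and far smaller; only the shared-key idea and the 40-bit challenge are taken from the protocol described above.

    # Sketch only: challenge/response between car and immobilizer key.
    # HMAC-SHA256 is a stand-in for the real (proprietary) cipher F().
    import hmac, hashlib, os

    shared_key = os.urandom(16)        # programmed into both the car and the key fob

    def F(key, challenge):
        return hmac.new(key, challenge, hashlib.sha256).digest()

    # Car side: generate a fresh random challenge.
    challenge = os.urandom(5)          # 40 bits, as in the DST protocol

    # Key side: compute the response from the shared key and the challenge.
    response = F(shared_key, challenge)

    # Car side: recompute and compare; start the engine only on a match.
    assert hmac.compare_digest(response, F(shared_key, challenge))
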
So we've laid out a cryptographic protocol, but it's a simple one, and one that's likely to be effective. What could go wrong?

40 bits of personal history

Back in 2005, along with some colleagues, I looked at the immobilizer system used in a few million Ford, Toyota and Nissan vehicles. This particular system was based on a Texas Instruments chip called the Digital Signature Transponder (DST), a tiny RFID chip with a cipher F() and a 40-bit secret key.
Two DST form factors.  The big one is a Mobil Speedpass, which also relies on the DST technology.
The DST uses exactly the challenge-response protocol I describe above. The reader (car) sends it a 40 bit challenge, the DST encrypts that value with its cipher, truncates the result down to a 24 bit response, ships it back. The car also has a copy of the secret key which it uses to verify the response.

The problem with the DST is not the protocol. Rather, it's the number I mentioned above: 40. As in 40-bit key length. If an adversary -- say, a malicious parking attendant -- borrows your car key, she can issue a challenge to the chip. After collecting the response, she can, at her leisure, test every single one of the approximately 1.1 trillion possible Immobilizer keys until she finds one where F(key, challenge) is equal to the response she got from your DST chip.** This sounds hard, but it only takes a few hours on an FPGA.
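
Here is a sketch of that brute-force loop in Python. Since I can't reproduce the real DST40 cipher here, a truncated SHA-256 stands in for F(), and the key space is shrunk to 2^20 so the loop finishes in seconds; the structure of the attack is the same. The second (challenge, response) pair weeds out the 24-bit collisions mentioned in the footnote.

    # Sketch only: recover the immobilizer key from two (challenge, response) pairs.
    # Truncated SHA-256 stands in for the proprietary DST40 cipher; the key
    # space is deliberately tiny (2^20) so this demo runs quickly.
    import hashlib, os

    KEY_BITS = 20                                   # real DST: 40 bits

    def F(key_int, challenge):
        key = key_int.to_bytes(5, "big")
        return hashlib.sha256(key + challenge).digest()[:3]   # 24-bit truncated response

    secret_key = int.from_bytes(os.urandom(5), "big") % (1 << KEY_BITS)

    # The malicious valet queries the borrowed key with two challenges of her choosing.
    pairs = [(c, F(secret_key, c)) for c in (os.urandom(5), os.urandom(5))]

    # Offline: test every candidate key against both pairs.
    for k in range(1 << KEY_BITS):
        if all(F(k, c) == r for c, r in pairs):
            print("recovered key:", k, k == secret_key)
            break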

This process is called "cloning". It's not the scariest attack since, in general, it requires the adversary to get your car key, or at least get close enough to scan it.

Now we come to the interesting part. Texas Instruments was well aware of the DST's keysize limitation. To foil this attack, they deployed a new chip called the DST+. Rather than simply replace the weak 40-bit algorithm with a better one, which would have solved the problem, they decided to address cloning attacks using a protocol.

DST+ Mutual Authentication protocol.  From a presentation in the  Fourth Conference on the Advanced Encryption Standard (AES) (2004). 
What I know about the DST+ protocol comes from a public presentation posted by a Dr. Ulrich Kaiser from Texas Instruments Germany. I freely admit that I've never verified this diagram against a real DST+, so caveat emptor.

The diagram is a little confusing: let's walk through it step by step. The car ("Reader") is on the left, and the DST+ chip ("Transponder") is on the right. For the most part it's exactly the same as the DST: the car generates a 40-bit challenge and sends it over to the chip.  The chip encrypts the challenge under its secret ("Immobilizer") key, truncates the result down to 24 bits, and sends back the response.

The new stuff is "tacked on" to the original protocol.  Along with the challenge, the car transmits an extra 24 bits that it derives by: 1) encrypting its own challenge under the Immobilizer key, 2) encrypting the result again under a second "Mutual Authentication Key" (MAK), and 3) truncating that result down to 24 bits.

Since the DST+ chip shares both keys, it can verify that the car's transmission is correct before it responds. If the value isn't correct, the DST+ clams up. No response for you!  

In principle this stumps our malicious valet.  He doesn't know the right keys.  He can send the DST+ all the challenges he wants -- but it won't answer him.  No responses, no attack.

All's well that ends well?

A first observation is that the DST+ protocol only protects against challenges sent by an unauthorized reader. If our valet can eavesdrop on the communication between the DST+ and the legitimate reader in the car, he can still obtain a (challenge, response) pair.  Since these values are identical to those in the original DST protocol, the same attacks apply. He can use an FPGA to brute force the 40-bit Immobilizer key.

Here's something else. Once he's got the car's Immobilizer key, he can go back and find the Mutual Authentication Key (MAK).  Given the challenge sent by the car, along with the 24-bit "additional authentication" string, he can:

  1) compute I = F(Immobilizer key, challenge),
  2) use the FPGA to test every single possible MAK value, and
  3) stop when he finds a MAK value s.t. F(MAK, I) matches the "additional authentication" (a sketch of this loop follows below).
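
Here is the MAK-recovery loop sketched in Python, under the same stand-in assumptions as before (a toy cipher in place of the real one, and a shrunken key space so it runs in seconds):

    # Sketch only: given the recovered Immobilizer key, brute-force the MAK
    # from one sniffed (challenge, additional-authentication) pair.
    import hashlib, os

    KEY_BITS = 20                                   # real keys are 40 bits

    def F(key_int, data):
        return hashlib.sha256(key_int.to_bytes(5, "big") + data).digest()[:3]

    immobilizer_key = int.from_bytes(os.urandom(5), "big") % (1 << KEY_BITS)  # already recovered
    true_mak = int.from_bytes(os.urandom(5), "big") % (1 << KEY_BITS)

    challenge = os.urandom(5)                                       # sniffed from the car
    additional_auth = F(true_mak, F(immobilizer_key, challenge))    # also sniffed

    I = F(immobilizer_key, challenge)               # step 1
    for mak in range(1 << KEY_BITS):                # step 2
        if F(mak, I) == additional_auth:            # step 3
            print("MAK candidate:", mak, mak == true_mak)
            break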

This might seem a little pointless. After all, we already have the Immobilizer key, which is all we need to simulate a DST+ and thus start the car. Why bother going back for the MAK?

Into hypothetical territory

Yet imagine... What if a car manufacturer made a tiny mistake?  What if, speaking hypothetically, the manufacturer decided to use a single MAK across many different cars --- say, every 2009 Toyota Camry? A tiny, harmless optimization.

And yet.

We know that knowledge of the Immobilizer Key makes it easy to find the car's MAK.  But this works the other way, too. If many cars share a MAK, then anyone who knows that shared value can use it to derive the Immobilizer key for any one of those cars.

Even better (or worse, depending on your point of view), our attacker can do this without ever seeing the car key at all.  All you need is a challenge value and an "additional authentication" value, both of which the car will give to you. The owner can be fast asleep with his keys safe on the nightstand next to him. You're outside stealing his car.

So in other words, if you use DST+ mutual authentication, and make the small mistake of re-using a MAK across multiple vehicles, you've transformed a mild key-cloning attack into something much worse. People can now steal your car even without scanning your key.

Please keep in mind that all of this is hypothetical and speculative. But the re-use of a MAK could happen.  There's evidence that it may have, at least in the past. What it goes to show is that if you're not very careful about your goals and security properties, protocols can do unexpected things.  They can make you less secure.

Rolling it up

These posts were not intended to be an in-depth tutorial on the mysteries of protocol design and analysis.  I do hope to talk about that more in the future.  So far we've barely scratched the surface of what can go wrong in a cryptographic protocol.  And these are certainly not the best examples of "bad" protocols.

Instead, the purpose of this discussion was to provide a couple of case studies involving real protocols whose failure has implications for millions of people.  It was also to show you how tiny changes to a protocol can have a significant impact.

In the next few installments of this overview series we'll look a bit at hardware, physical security, and the kinds of things that can go wrong even when you build the best machines with the best intentions.

Footnotes:

** Note, since the response is only 24 bits long, there's a possibility of collisions, i.e., many different keys will satisfy truncate(F(key, challenge)) == response.  To deal with this problem, all you need to do is obtain a second (challenge, response) pair and weed out the false positives.

Friday, September 23, 2011

Where Things Fall Apart: Protocols (Part 1 of 2)

This is the third installment in a series on cryptographic engineering.  You can see the previous posts here:


Protocols

Once upon a time there were no cryptographic protocols. Most cryptographic communication took the form of one-move operations like this one:
There are plenty of things that can go wrong in this scenario. But if you're the sender (as opposed to the sap carrying the message) you could restrict your concerns to the security of the primitive (e.g., cipher), or perhaps to the method of key distribution.  Admittedly, through most of history it was almost impossible to get these things "right".  Still, you knew where your weaknesses were.

Fast forward to present day. Modern cryptographic systems are almost never content with simple one-way communication. Consider the simple act of plugging in your high-definition TV via a digital connection. Thanks to modern rights management protocols such as HDCP and DTCP, this can lead to communication flows like this:

Not only is the communication here vastly more complicated, but it takes place in the presence of a more dangerous adversary.  For one thing, he owns the communication channel and the hardware --- it's probably sitting in his living room or lab.  Most importantly, he's got time on his hands.  Your typical barbarian only got one chance to capture a ciphertext.  If this guy needs to, he can run the protocol 100,000 times.

Now, most people will tell you that cryptography is one of the most settled areas of computer security.  Relative to the disaster that is software security, this is true.  Particularly when it comes to standardized cryptographic primitives, we've made enormous progress --- to the point where attacks lag by years, if not decades (though, see my previous post).  If you use reasonable primitives, your attacker will not get you by breaking the algorithm.

As Gibson said, the future is here, but it's not evenly distributed. And in the field of cryptography, protocol design is one area in which progress has definitely not been as fast. We've learned a huge amount about what not to do.  We have tools that can sometimes catch us when we trip up.  Unfortunately, we still don't have a widely-accepted methodology for building provably-secure protocols that are as practical as they are safe to use.

Maybe I'm getting ahead of myself.  What do I even mean when I refer to a "cryptographic protocol"?

For the purposes of this discussion, a protocol is an interactive communication between two or more parties. Obviously a cryptographic protocol is one that combines cryptographic primitives like encryption, digital signing, and key agreement to achieve some security goal.

What makes cryptographic protocols special is the threat model.  When we analyze cryptographic protocols, we assume that the protocol will be conducted in the presence of an adversary, who (at minimum) eavesdrops on all communications. More commonly we assume that the adversary also has the ability to record, inject and modify communications flowing across the wire.  This feature is what makes cryptographic protocols so much more interesting --- and troublesome --- than the other types of protocol that crop up in computer networking.

A first example: SSL and TLS

SSL and its younger cousin TLS are probably the best-known security protocols on the Internet. The most recent version is TLS 1.2 and we think it's a pretty solid protocol now.  But it didn't achieve this status in one fell swoop. The history of each SSL/TLS revision includes a number of patches, each fixing various flaws found in the previous version.

The original SSL v1 was a proprietary protocol created by Netscape Inc. back in 1994 to secure web connections between a browser and a web server. SSL v2 was standardized the next year. The protocol is a highly flexible framework by which two parties who have never met before can authenticate each other, agree on transport keys, and encrypt/authenticate data. Part of the flexible nature of the protocol is that it was designed to support many different ciphersuites and modes of operation. For example, it's possible to configure SSL to authenticate data without encrypting it, even though almost nobody actually does this.

Now let me be clear. There's nothing wrong with flexibility, per se. The benefits of flexibility, extensibility and modularity are enormous in software design --- there's nothing worse than re-factoring 100,000 lines of production code because the original designers didn't consider the possibility of new requirements. It's just that when it comes to building a secure cryptographic protocol, flexibility almost always works against you.

Let's consider a concrete example. The SSL v2 protocol included a very nice feature known as "ciphersuite negotiation". This allowed both parties to negotiate which ciphers they wanted to use from a (potentially) distinct list of candidates that each one supported. In practice, some of those ciphers were likely to be better than others. Some were designed to be worse: for example, they were created to support export-controlled browsers that max out at a 40-bit key length.

There's nothing fundamentally wrong with ciphersuite negotiation. It clearly gives a lot of flexibility to SSL, since the protocol can easily be upgraded to support new ciphersuites without breaking compatibility with older implementations.  From the spec, you can see that this section of the protocol was treated as something of an abstraction by the designers. The negotiation is handled via its own dedicated set of messages.  Looking at this, you can almost hear some long-ago dry erase marker scribbling "ciphersuite negotiation messages go here".

Unfortunately, in providing all of this flexibility, the designers of SSL v2 essentially created many implicit protocols, each of which could be instantiated by changing the details of the ciphersuite negotiation process. Worse, unlike the messages in the "main" protocol, the ciphersuite negotiation messages exchanged between client and server in the SSLv2 protocol were not authenticated.

In practice, this means that our adversary can sit on the wire in between the two, replacing and editing the messages passing by to make it look like each party only supports the lowest common ciphersuite. Thus, two parties that both support strong 128-bit encryption can be easily tricked into settling on the 40-bit variant.  And 40 bits is not good.

Now, the good news is that this vulnerability is a piece of history.  It was closed in SSL v3, and the fix has held up well through the present day.  But that doesn't mean TLS and SSL are home free.

This post is continued in the next section: "Where Things Fall Apart: Protocols (Part 2)".

Tuesday, September 20, 2011

A diversion: BEAST Attack on TLS/SSL Encryption

Update 9/22/11: It appears that OpenSSL may have actually written a patch for the problem I describe below, all the way back in 0.9.6d (2002), so this attack may not be relevant to OpenSSL systems.  But I haven't reviewed the OpenSSL code to verify this.  More on the patch at the bottom of this post.

Update 10/2/11: Yes, the attack works pretty much as described below (and see comments).  To advise future standards committees considering non-standard crypto, I have also prepared this helpful flowchart.

There's some news going around about a successful attack on web browsers (and servers) that support TLS version 1.0.  Since this is ostensibly a blog, I figured this subject might be good for a little bit of real-world speculation.

My first thought is that "attacks" on TLS and SSL need to be taken with a grain of salt.  The protocol has been around for a long time, and new attacks typically fall into one of two categories: either they work around SSL altogether (e.g., tricking someone into not using SSL, or going to an untrusted site), or they operate on some obscure, little-used feature of SSL.

(Please don't take this as criticism of the attacks I link to above.  At this point, any attack against TLS/SSL is a big deal, and those are legitimate vulnerabilities.  I'm trying to make a point about the strength of the protocol.)

This attack is by researchers Juliano Rizzo and Thai Duong, and if we're to believe the press, it's a bit more serious than usual.  Ostensibly it allows a man-in-the-middle to decrypt HTTPS-protected session cookies, simply by injecting a little bit of Javascript into the user's web browser.  Since we're constantly running third-party Javascript due to web advertising, this might not be a totally unrealistic scenario.

The attack is claimed to make use of a theoretical vulnerability present in all versions of SSL and TLS 1.0 (but not later versions of TLS).  Unfortunately there are few details in the news reports, so until the researchers present their findings on Friday, we have to rely on some educated guesses.  And thus, here's a big caveat:

Every single thing I write below may be wrong!

What the heck is a Blockwise-Adaptive Chosen Plaintext Attack?

Based on a quick Google search, the attack may be based on a paper written by Gregory Bard at the University of Maryland.  The basic idea of this attack is that it exploits the particular way that TLS encrypts long messages using block ciphers.  To do this, Bard proposes using Javascript.

Besides being incredibly annoying, Javascript is a huge potential security hole for most web browsers.  This is mostly mitigated in browsers, which place tight restrictions on the sorts of things that a Javascript program can do.  In terms of network communication, Javascript programs can barely do anything.  About the only exception to this rule is that they can communicate back to the server from which they were served.  If the page is HTTPS-protected, then communication gets bundled under the same TLS communication session used to secure the rest of the web page.

In practice, this means that any data sent by the Javascript program gets encrypted under the same key that is also used to ship sensitive data (e.g., cookies) up from the web browser to the server.  Thus, the adversary now has a way to encrypt any message she wants, under a key that matters.  All she has to do is sneak a Javascript program onto the user's web page, then somehow intercept the encrypted data sent to the server by the browser.  This type of thing is known as a chosen plaintext attack.

Normally chosen plaintext attacks are not a big deal.  In fact, most encryption schemes are explicitly designed to deal with them.  But here is where things get interesting.

TLS 1.0 uses a block cipher to encrypt data, and arranges its use of the cipher using a variant of the Cipher Block Chaining mode of operation.  CBC mode is essentially a means of encrypting long messages using a cipher that only encrypts short, fixed-size blocks.

CBC mode encryption (diagram: Wikipedia).  The circular cross denotes the XOR operation.  The message is first divided into even-sized blocks.  A random Initialization Vector (IV) is used for the first block.  Each subsequent block is "chained" to the message by XORing the next block of plaintext with the previous block of ciphertext.
CBC mode encryption is illustrated above.  Note that it uses a block cipher as an ingredient.  It also depends on a random per-message nonce called an Initialization Vector, or IV for short.
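
For reference, here is a bare-bones sketch of CBC encryption in Python with the chaining spelled out by hand (illustration only; in real code you would use a library's CBC mode directly and let it handle padding):

    # Sketch only: CBC chaining written out explicitly on top of single-block AES.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def aes_encrypt_block(key, block):
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def cbc_encrypt(key, iv, plaintext):
        assert len(plaintext) % 16 == 0, "assume the message is already padded"
        blocks, prev = [], iv
        for i in range(0, len(plaintext), 16):
            xored = bytes(a ^ b for a, b in zip(plaintext[i:i+16], prev))
            prev = aes_encrypt_block(key, xored)   # this ciphertext chains into the next block
            blocks.append(prev)
        return b"".join(blocks)

    key, iv = os.urandom(16), os.urandom(16)       # fresh random IV for each message
    ct = cbc_encrypt(key, iv, b"sixteen byte blkanother 16 bytes")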

If you asked me to describe the security properties of CBC, I would recite the following mantra: as long as you use a secure block cipher, and as long as you generate a new, random IV for each message, this mode is very secure against eavesdroppers --- even when the adversary can obtain the encryption of chosen plaintexts.  If you doubted me, I would point you here for a formal proof.

You may have noticed that I emphasized the phrase "as long as".  This caveat turns out to be important.

In the implementation of CBC mode, the TLS designers made one, tiny optimization.  Rather than generating a fresh IV for each message, the protocol re-uses the last block of the previous ciphertext message that was sent.  This value becomes the IV for the new message.

In practice, this tiny modification has huge implications.  If we assume that the adversary --- let's call her Sue --- is sniffing the encrypted data sent by the browser, then she can obtain the IV that will be used for the next message.  Now let's say she has also identified a different block of ciphertext C that she wants to decrypt, maybe because it contains the session cookie.  While she's at it, she can easily obtain the block of ciphertext C' that immediately precedes C.

None of this requires more than simple eavesdropping.  Sue doesn't know what message C encrypts --- that value, which we'll call M, is what she wants to learn --- but she does know that C can be expressed using the CBC encryption formula as:

      C = AES-ENC(key, M ⊕ C')

Sue also knows that she can get her Javascript program to transmit any chosen block of data M* that she wants.  If she does that, it will produce a ciphertext C* that she can sniff.

Now let's make one more huge assumption.  Imagine that Sue has a few guesses for what that unknown value of M might be.  It's 16 bytes long, but maybe it only contains a couple of bytes worth of unknown data, such as a short PIN number.  If Sue generates her value M* based on one of those guesses, she can confirm whether C actually encrypts that value.

To cut through a lot of nonsense, let's say that she guesses M correctly.  Then she'll generate M* as:

      M* = IV ⊕ M ⊕ C'

When her Javascript program sends out M* to be encrypted, the TLS layer will encrypt it as:

      C* = AES-ENC(key, IV ⊕ M*) =                      (which we can re-write as...)
               AES-ENC(key, IV ⊕ IV ⊕ M ⊕ C') =         (and XORing cancels...)
               AES-ENC(key, M ⊕ C')

All I've done above is write out the components of M* in long form, and simplify the equation based on the fact that the IV ⊕ IV term cancels out (a nice property of XOR).  The important thing to notice is that if Sue guessed correctly, the ciphertext C* that comes out of the browser will be identical to the captured ciphertext C she wants to decrypt!  If she guessed wrong, it won't.

So assuming that Sue has time on her hands, and that there are only a few guesses, she can repeat this technique over and over again until she figures out the actual value of M.
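
Here is a small Python simulation of that check. The secret block, keys and ciphertexts are all made up for the demonstration; the point is simply that when Sue's guess is right, the ciphertext coming out of the "browser" matches the one she captured.

    # Sketch only: verifying a guess via the predictable-IV trick.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_encrypt_block(key, iv, block):
        # One CBC block: AES-ENC(key, block XOR iv)
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(block) + enc.finalize()

    key = os.urandom(16)

    # What Sue captured off the wire: the target block C and the block before it.
    M = b"PIN=0421;pad=pad"            # the secret 16-byte block (unknown to Sue)
    C_prev = os.urandom(16)            # ciphertext block immediately preceding C
    C = cbc_encrypt_block(key, C_prev, M)

    # TLS 1.0 quirk: the IV of Sue's next record is the last ciphertext block sent,
    # so she knows it in advance (here it's just a random stand-in).
    IV = os.urandom(16)

    guess = b"PIN=0421;pad=pad"        # one of Sue's guesses for M
    M_star = bytes(a ^ b ^ c for a, b, c in zip(IV, guess, C_prev))
    C_star = cbc_encrypt_block(key, IV, M_star)

    print(C_star == C)                 # True exactly when the guess equals M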

But this guessing stuff is crazy!

If you're paying attention, you'll already have twigged to the one huge problem with this attack.  Yes, it works.  But it only works if she has a small number of guesses for the value M.  In practice, though, M is 16 bytes long.  If she somehow knows the value of all but two bytes of that block, she might be able to guess the remaining bytes in about 2^15 (32,768) attempts on average.  But in the more likely case that she doesn't know any of the plaintext, she has to try on average about 2^127 possible values.

In other words, she'll be guessing until the sun goes out.

Let's get aligned

And this is where the cleverness comes in.  I don't know exactly what Rizzo and Duong did, but the paper by Bard hints at the answer.  Recall that when the browser encrypts a CBC-mode message, the first step is to chop the message into equal-length blocks.  If it's using AES, these will each be 16 bytes long.

If Sue needs to guess the full contents of a random 16-byte block, she's hosed --- there are too many possibilities.  The only way this technique works in general is if she knows most of a given block, but not all of it.

But what if Sue could convince the browser to align the TLS messages in a manner that's useful to her?  For example, she could fix things so that when the browser sends the cookie to the server, the cookie would be pre-pended with something that she knows --- say, a fixed 15-byte padding.  That would mean that the stuff she wants to learn --- the start of the cookie --- takes up only the last byte of the block.

If Sue knows the padding, all she has to guess is the last byte.  A mere 256 possible guesses!

Now what if she can repeat this trick, but this time using a 14-byte known padding? The block would now end with two bytes of the cookie, the first of which she already knows. So again, 256 more guesses and she now knows two bytes of the cookie!  Rinse, repeat.
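
Putting the two ideas together, here is a sketch of recovering a single unknown byte with at most 256 guesses. Everything is simulated, and a fresh random value stands in for "the last ciphertext block on the wire", which is the IV Sue would sniff before each guess in a real attack.

    # Sketch only: byte-at-a-time guessing, 15 known bytes plus 1 unknown byte.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_block(key, iv, block):
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(block) + enc.finalize()

    key = os.urandom(16)
    secret_byte = os.urandom(1)                  # the cookie byte Sue wants
    known_prefix = b"GET /a HTTP/1.1"            # 15 bytes Sue arranged and knows
    target_block = known_prefix + secret_byte

    C_prev = os.urandom(16)                      # block preceding the target
    C = cbc_block(key, C_prev, target_block)     # sniffed off the wire

    recovered = None
    for b in range(256):
        guess_block = known_prefix + bytes([b])
        IV = os.urandom(16)                      # the predictable IV Sue just sniffed
        M_star = bytes(x ^ y ^ z for x, y, z in zip(IV, guess_block, C_prev))
        C_star = cbc_block(key, IV, M_star)      # the "browser" encrypts Sue's block
        if C_star == C:
            recovered = bytes([b])
            break

    print(recovered == secret_byte)              # True: shift the alignment and repeat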

This technique could be quite effective, but notice that it relies on some strong assumptions about the way Sue can get the browser to format the data it sends.  Does the Rizzo and Duong attack do this?  I have absolutely no idea.  But based on my understanding of the attack and the descriptions I've read, it represents my best speculation.

I guess pretty soon we'll all know how good I am at guessing.

Update 9/22/11: This post by James Yonan offers the first piece of confirmation suggesting that BEAST is based on the Bard attack.  Yonan also points out that OpenSSL has included a "patch" to defend against this issue, all the way back to version 0.9.6d (2002).  In a nutshell, the OpenSSL fix encrypts random message fragments at the start of each new message sent to the server.  This additional message fragment works like an IV, ensuring that M* is no longer structured as I describe above.  Yonan also notes that NSS very recently added a similar patch, which indicates that NSS-based browsers were the problem (this includes Firefox, Chrome).


I'm usually critical of OpenSSL for being a code nightmare.  But (pending verification) I have to give those guys a lot of credit for this one.

Sunday, September 11, 2011

Where Things Fall Apart: Primitives

This is the second installment in a series on cryptographic engineering.  See here for the Introduction.

In the previous post I argued that cryptographic systems often fail, at least in part because there are so many places for them to go wrong.  Over the next several posts I'm going to give a few examples.  This section is intended as an overview rather than a deep examination of the issues.

Primitives: Making a call

The fundamental unit of any cryptographic system is the cryptographic primitive --- codes, ciphers, digital signing algorithms. Traditionally, attacking a cryptographic system meant attacking the primitive itself. History is littered with broken primitives, including many that were believed to be "unbreakable" until, well, they weren't.

This isn't purely a history lesson. Many of these broken primitives are still with us today. To find one, all you need to do is take out your cellphone. The most common cellular technology in use today is GSM (Global System for Mobile), which was developed by a consortium of European telecoms, equipment manufacturers and national governments in the 1980s. In a move that was very advanced for the time, the GSM designers included a built-in cipher called "A5", which was designed to protect the confidentiality of calls transmitted between a phone and the cell tower. Call encryption represented a huge improvement over the prevailing analog cellphone technology of the day, which provided essentially no privacy for voice calls (with sometimes embarrassing results).


The A5/1 cipher (courtesy Wikipedia).

Ross Anderson describes the selection process for A5/1 in his excellent book. By his account, European governments were divided into two factions: those that wanted to strongly protect call confidentiality (mostly those states bordering the Warsaw Pact) and other states, such as France, that wanted to simplify government eavesdropping. The weak cipher faction won the day. Worse, they created an even weaker variant (A5/2) for export purposes.

One of the notable aspects of A5/1 is its simplicity. It was clearly designed with efficiency in mind, particularly efficiency in hardware. As with many hardware-optimized ciphers, it consists of several interlinked Linear Feedback Shift Registers (LFSRs). While it may be possible to build a secure cipher using this technique, LFSR-based ciphers seem to be overrepresented in the ranks of broken ciphers. Unfortunately A5/1 is no exception.

For a number of years there were no public attacks against A5/1, but that's mostly because nobody knew how it worked. The cipher design was a well-guarded secret. This changed in the mid-1990s when the full cipher was published after having been reverse-engineered from a GSM phone.

By 1997 the first attacks had been published in the research literature, and over the next decade these attacks were gradually refined. By 2006 the weakened A5/2 cipher was essentially broken, and academic attacks on A5/1 had reached a level of practicality that put them within reach of even private groups. Finally, in 2009, security researchers published and began distributing via BitTorrent a series of pre-computed tables that allow for the complete decryption of a recorded, A5/1 encrypted phone call in just a few minutes.

To a motivated adversary, calls encrypted with A5/1 are essentially cleartext.

So why has A5/1 stuck with us? There are many answers to that question. Standards tend to be "sticky" once they achieve widespread adoption. Unless there's a viable and cost-effective plan for incrementally renewing security, it can be almost impossible to get the herd to move. Complicating matters, many GSM phones implement A5/1 in hardware. And while software can be updated, hardware is forever.

Thus, with an installed base of millions of GSM phones (and tower equipment) throughout the world, it's unlikely that the primitive will be replaced anytime soon. Instead, we'll gradually drift to a newer technology, such as the UMTS "3G" standard, which employs a new, proprietary block cipher called Kasumi, or A5/3. That is, assuming Kasumi makes it that long.

Misusing Primitives

Sometimes the problem isn't with the primitive per se, but rather in the way it's used. There are many, many examples of this. A classic example is the Wired Equivalent Privacy (WEP) protocol that used to be the standard for securing WiFi access points.

WEP was built around Ron Rivest's RC4 stream cipher, developed in the late 1980s. These days security professionals are deeply suspicious of RC4, mostly because it's non-standard and because there are some theoretical attacks against it. Still, as of today none of these attacks are quite practical... as long as RC4 is used correctly.

RC4 is a stream cipher, which means that once you set it up with a key, it deterministically spits out a stream of pseudo-random bits. To encrypt, you use the exclusive-OR (XOR) function to merge those bits (called a "keystream") with your message. For reasons that we'll discuss later, it's very important that the encryptor never use the same keystream to encrypt two different messages. To avoid this problem, WEP uses a slightly different RC4 key for each data packet it sends.  These keys are derived by simply concatenating the packet's sequence number, also known as an "Initialization Vector", with a fixed 40-bit or 104-bit shared key. Using || to indicate concatenation, the keys look like this:

Packet 1 RC4 encryption key = 00001 || {WEP secret key}
Packet 2 RC4 encryption key = 00002 || {WEP secret key}
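
As a sketch, here is that per-packet key construction in Python with a textbook RC4 (the secret key value is made up, and none of this should be used for real encryption):

    # Sketch only: WEP-style per-packet keys, (IV || secret) fed to RC4.
    def rc4_keystream(key, n):
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA)
        out, i, j = [], 0, 0
        for _ in range(n):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    wep_secret = bytes.fromhex("0badc0ffee")            # made-up 40-bit shared secret

    def encrypt_packet(iv_int, payload):
        iv = iv_int.to_bytes(3, "big")                   # per-packet IV / sequence number
        per_packet_key = iv + wep_secret                 # IV || secret: the fatal design choice
        ks = rc4_keystream(per_packet_key, len(payload))
        return iv + bytes(p ^ k for p, k in zip(payload, ks))

    pkt1 = encrypt_packet(1, b"hello world")
    pkt2 = encrypt_packet(2, b"hello again")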

At the time WEP debuted nobody had seriously considered the implications of setting RC4 up this way. However, in the late 1990s a trio of researchers named Fluhrer, Mantin and Shamir took a closer look. Their result? It turns out RC4 is devastatingly insecure in this configuration. Essentially what they showed is that when RC4 is used to encrypt different messages with keys that are mostly the same (but only differ in, let's say, the first few bytes), the resulting outputs are statistically correlated. Worse --- much worse --- given a large enough collection of such ciphertexts it's actually possible to recover the shared portion of the key. In the case of WEP that means recovering the shared WEP secret key used by all users, essentially breaking WEP altogether.

Many of the vulnerabilities in WEP have been addressed in the WPA2 spec, which replaces RC4 with the much more standard AES cipher, and adds some other important features such as message authentication. For those legacy devices whose hardware couldn't support AES, the WPA designers also proposed a band-aid fix called TKIP. TKIP's an interesting case study in design compromises --- but that's a story for another post.

The Great Hash Function Adventure

Sometimes you do everything right --- and you still wind up in trouble. Consider the sad story of the MD5 hash function. Once upon a time MD5 was king of the world --- it was the standard used for digital signatures, for hashing software packages, and anything else you can think of. Not only was it designed by professionals, it was rigorously tested and validated by no less than the National Institute of Standards and Technology.

Today MD5 is like zucchini. If you have it, you really want to get rid of it.

If you're not familiar with hash functions, you probably don't realize how critical and pervasive they are in cryptographic systems. Briefly, a hash function takes a large file and reduces it to a small, fixed-size "digest" value. If you routinely download software from open source repositories you might be accustomed to checking a 20-byte SHA1 checksum to make sure you're not getting the wrong file.

Hash functions are also used to make digital signatures more efficient --- by hashing down large files so we can sign the resulting small digest, rather than the original file itself. This works because cryptographic hash functions are assumed to be "collision resistant"; namely, it's hard to find two files A and B such that A and B both hash to the same digest. In principle if I give you a signature on the hash of A, it should be just as good as signature on A itself.

But consider what would happen if that guarantee broke down. Say Alice wants to obtain an SSL certificate on her domain name "www.alice.com". A digital certificate is nothing more than a file containing Alice's identity and key, all signed by some trusted authority like Verisign. Imagine now that Alice is evil: she's come up with one file that says she owns "www.alice.com", and a second file that says she owns "www.bankofamerica.com".

Moreover, because Alice has found a collision in the hash function, both of these files hash to the same value.  And Verisign doesn't sign the files directly, it signs the hashes.

Thus if Alice can get Verisign to sign the hash of that harmless "www.alice.com" file, then she's got a signature that also works for her malicious file as well. This is very bad, at least if you're a Bank of America customer.

Thus, when in 2004 a team of researchers announced that they'd discovered collisions in the MD5 hash function, security professionals took it very seriously. Fortunately, the attack described above requires Alice to come up with not just any collision (i.e., two arbitrary files that hash to the same thing), but what we call a "meaningful" collision --- those two values must each contain valid certificate data. Surely it wouldn't be so easy to find such meaningful collisions!

Unfortunately, it turns out that the construction of the MD5 hash function has a particular property that makes it easy to "extend" one collision into many different collisions, some of which can be meaningful. Within a year, researchers had shown how to construct colliding software files, colliding PDF files, and most importantly, colliding digital certificates. The latter attack was particularly significant, due to the attack described above.  A malicious individual could request a signed SSL certificate on "www.my-pony-blog.com" and instead come away with a valid certificate for "www.microsoft.com".  Or worse, a certificate designating them as a Certificate Authority, with the right to create new certificates on any domain!


 The "supercomputer" used to find colliding digital certificates
(yes, it's a bunch of Sony PS3s).  From the link above.

In the short term, the solution is to remove MD5 wherever it's used. That would almost be enough, but it seems that the next candidate for collision finding is the widely-used SHA1 hash function. Fortunately NIST, the US organization responsible for certifying cryptography, is literally two steps ahead of this one.  They've already begun a competition to choose a new hash function that will someday be designated SHA3.

So what if you're using all the right primitives?  In the next sections we'll discuss another way that security systems can go wrong: bad cryptographic protocols.

Introduction

If you used an electronic system today, you almost certainly depended on a cryptographic protocol to keep you out of harm's way. It might have been a credit card transaction, a purchase at Amazon.com, or even just a mobile phone call. Or it could have been any one of a hundred other types of digital activity that depends in some way on cryptography.

Cryptography (literally, "hidden writing") isn't a new science. It dates back almost as far as the written word. Through most of that history it wasn't exactly a science at all; the history of designing and breaking ciphers could be best described as a rare and underappreciated art form.

All of this changed somewhere between World War II and the disco era. With the advent of the digital computer and the communications network, cryptography was forced, awkwardly, into becoming a science.  But as we'll see in these posts, successfully using cryptography is still very much an art.

Getting the message

Historically, if you wanted to send a confidential message -- military orders during a war, let's say -- your best bet was to keep the message out of enemy hands. If you couldn't do that, your only hope was that your enemy wouldn't be able to make sense of what he had.

To make this work in practice, both you and your intended recipient needed to share some kind of understanding (or "key") that would allow your recipient to decode the intended meaning. Furthermore, you needed to be reasonably sure that your enemy wouldn't share that understanding. More has been written about this problem than is worth talking about here. Suffice it to say that it wasn't the easiest problem, but it was a tractable one.

Fast forward to today. Instead of a soldier passing through enemy territory we have messages traveling over wired and wireless networks, in full view of eavesdroppers with sophisticated equipment. To make matters more complicated, we want to communicate with parties with whom we don't share a prior relationship; for example, a website we've never visited before.  Even more interestingly, we may not fully trust the recipient of the message.

And as the icing on the cake, we're doing this on the most insecure device ever invented: the modern general purpose computer. Even with the best modern cryptographic algorithms, building secure systems is a problem we've barely begun to solve. If you don't believe this, all you need to do is take a glance through the list of successful breaks we've seen over the past few years!

So what's this blog about?

This blog isn't about the science of cryptography, or the strides that cryptographers have made over the past decades. We'll mention those, since they're important to the matter at hand.  But mostly we're going to talk about failure. The kind of failures that occur on a routine basis when people actually try to implement modern cryptosystems.

Those failures are many and varied. To understand them, we'll discuss some of the basic concepts of encryption and authentication. We'll move on to discuss software vulnerabilities and physical security, including how side-channel attacks affect cryptographic implementations. In every case we'll give real-world examples of systems that succeeded or failed because of a specific issue.

The content of this blog is an attempt to compress a semester-long course that I give from time to time into something that stands on its own and doesn't run to textbook length.  It isn't intended to be a primary source on cryptographic algorithms --- for that there are many other references, which I'll provide as time goes by.  Instead, the purpose of this blog is to provide, via examples, an understanding of all the ways that cryptographic systems can fail.  At the very least you'll develop an appreciation of how hard it is to develop these systems yourself.

And with that brief introduction...  away we go!