Thursday, April 24, 2014

Attack of the Week: Triple Handshakes (3Shake)

3Shake logo designed by @Raed667
The other day Apple released a major security update that fixes a number of terrifying things that can happen to your OS X and iOS devices. You should install it. Not only does this fix a possible remote code execution vulnerability in the JPEG parser (!), it also patches a TLS/SSL protocol bug known as the "Triple Handshake" vulnerability. And this is great timing, since Triple Handshakes are something I've been meaning (and failing) to write about for over a month now.

But before we get there: a few points of order.

First, if Heartbleed taught us one thing, it's that when it comes to TLS vulnerabilities, branding is key. Henceforth, and with apologies to Bhargavan, Delignat-Lavaud, Pironti, Fournet and Strub (who actually discovered the attack*), for the rest of this post I will be referring to the vulnerability simply as "3Shake". I've also taken the liberty of commissioning a logo. I hope you like it.

On a more serious note, 3Shake is not Heartbleed. That's both good and bad. It's good because Heartbleed was nasty and 3Shake really isn't anywhere near as dangerous. It's bad since, awful as it was, Heartbleed was only an implementation vulnerability -- and one in a single TLS library to boot. 3Shake represents a novel and fundamental bug in the TLS protocol.

The final thing you should know about 3Shake is that, according to the cryptographic literature, it shouldn't exist.

You see, in the last few years there have been at least four major crypto papers purporting to prove the TLS protocol secure. The existence of 3Shake doesn't make those results wrong. It may, however, indicate that cryptographers need to think a bit more about what 'secure' and 'TLS' actually mean. For me, that's the most fascinating implication of this new attack.

I'll proceed with the usual 'fun' question-and-answer format I save for this sort of thing.

What is TLS and why should you care?

Since you're reading this blog, you probably already know something about TLS. You might even realize how much of our infrastructure is protected by this crazy protocol.

In case you don't: TLS is a secure transport protocol that's designed to establish communications between two parties, who we'll refer to as the Client and the Server. The protocol consists of two sub-protocols called the handshake protocol and the record protocol. The handshake is intended to authenticate the two communicating parties and establish shared encryption keys between them. The record protocol uses those keys to exchange data securely.

For the purposes of this blog post, we're going to focus primarily on the handshake protocol, which has (at least) two major variants: the RSA handshake and the Diffie-Hellman handshake (ECDHE/DHE). These are illustrated below.

Simplified description of the TLS handshakes. Top: RSA handshake. Bottom: Diffie-Hellman handshake.
As much as I love TLS, the protocol is a hot mess. For one thing, it inherits a lot of awful cryptography from its ancient predecessors (SSLv1-3). For another, it's only really beginning to be subjected to rigorous, formal analysis.

All this means we're just now starting to uncover some of the bugs that have been present in the protocol since it was first designed. And we're likely to discover more! That's partly because this analysis is at a very early stage. It's also partly because, from an analyst's point of view, we're still trying to figure out exactly what the TLS handshake is supposed to do.

Well, what is the TLS handshake supposed to do?

Up until this result, we thought we had a reasonable understanding of the purpose of the TLS handshake. It was intended to authenticate one or both sides of the connection, then establish a shared cryptographic secret (called the Master Secret) that could be used to derive cryptographic keys for encrypting application data.

The first problem with this understanding is that it's a bit too simple. There isn't just one TLS handshake, there are several variants of it. Worse, multiple different handshake types can be used within a single connection.

The standard handshake flow is illustrated -- without crypto -- in the diagram below. In virtually every TLS connection, the server authenticates to the client by sending a public key embedded in a certificate. The client, for its part, can optionally authenticate itself by sending a corresponding certificate and proving it has the signing key. However, this client authentication is by no means common. Many TLS connections are authenticated only in one direction.

Common TLS handshakes. Left: only server authenticates.
Right: client and server both authenticate with certificates.
TLS also supports a "renegotiation" handshake that can switch an open connection from one mode to the other. This is typically used to change a connection that was authenticated only in one direction (Server->Client) into a connection that's authenticated in both directions. The server usually initiates renegotiation when the client has, for example, asked for a protected resource.

Renegotiating a session. A new handshake causes the existing connection to be mutually authenticated.
Renegotiation has had problems before. Back in 2009, Ray and Dispensa showed that a man-in-the-middle attacker could actually establish a (non-authenticated) connection with some server; inject some data; and when the server asks for authentication, the attacker could then "splice" on a real connection with an authorized client by simply forwarding the new handshake messages to the legitimate client. From the server's point of view, both communications would seem to be coming from the same (now authenticated) person:

Ray/Dispensa attack from 2009. The attacker first establishes an unauthenticated connection and injects some traffic ("drop table *"). When the server initiates a renegotiation for client authentication, the attacker forwards the new handshake messages to an honest client Alice who then sends real traffic. Since the handshakes are not bound together, Bob views this as a single connection to one party.
To fix this, a "secure renegotiation" band-aid to TLS was proposed. The rough idea of this extension was to 'bind' the renegotiation handshake to the previous handshake, by having the client present the "Finished" message of the previous handshake. Since the Finished value is (essentially) a hash of the Master Secret and (a hash of) the previous handshake messages, this allows the client to prove that it -- not an attacker -- truly negotiated the previous connection.
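
To make that binding concrete, here's a rough sketch (in Python, simplified from RFC 5246, assuming a SHA-256-based cipher suite) of how a TLS 1.2 Finished value is computed. Treat it as an illustration rather than a drop-in implementation:

import hashlib, hmac

def prf_sha256(secret, label, seed, nbytes):
    # TLS 1.2 PRF (the P_SHA256 expansion from RFC 5246)
    out, a = b"", label + seed
    while len(out) < nbytes:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + label + seed, hashlib.sha256).digest()
    return out[:nbytes]

def finished_value(master_secret, handshake_messages, client=True):
    # Finished = PRF(MS, label, Hash(all prior handshake messages)), truncated
    label = b"client finished" if client else b"server finished"
    transcript_hash = hashlib.sha256(handshake_messages).digest()
    return prf_sha256(master_secret, label, transcript_hash, 12)

The thing to notice is that the value depends on exactly two inputs: the Master Secret and the handshake transcript. Secure renegotiation is sound precisely as long as an attacker can't force both of those to collide across two different connections.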

All of this brings us back to the question of what the TLS handshake is supposed to do.

You see, the renegotiation band-aid now adds some pretty interesting new requirements to the TLS handshake. For one thing, the security of this extension depends on the idea that (1) no two distinct handshakes will happen to use the same Master Secret, and (2) that no two handshakes will have the same handshake messages, ergo (3) no two handshakes will have the same Finished message.

Intuitively, this seemed like a pretty safe thing to assume -- and indeed, many other systems that do 'channel binding' on TLS connections also make this assumption. The 3Shake attack shows that this is not safe to assume at all.

So what's the problem here?

It turns out that TLS does a pretty good job of establishing keys with people you've authenticated. Unfortunately there's a caveat. It doesn't truly guarantee the established key will be unique to your connection. This is a pretty big violation of the assumptions that underlie the "secure renegotiation" fix described above.

For example: imagine that Alice is (knowingly) establishing a TLS connection to a server Mallory. It turns out that Mallory can simultaneously -- and unknown to Alice -- establish a different connection to a second server Bob. Moreover, if Mallory is clever, she can force both connections to use the same "Master Secret" (MS).

Mallory creates two connections that use the same Master Secret.
The first observation of the 3Shake attack is that this trick can be played if Alice supports either the RSA or the DHE handshake -- or both (it does not seem to work on ECDHE). Here's the RSA version:**

RSA protocol flow from the triple handshake attack (source). The attacker is in the middle, while the client and server are on the left/right respectively. MS is computed as a function of (pms, cr, sr) which are identical in both handshakes. 
So already we have a flaw in the logic undergirding secure renegotiation. The Master Secret (MS) values are not necessarily distinct between different handshakes.
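
To see why, here's the classic Master Secret derivation in the same simplified Python sketch (reusing the prf_sha256 helper from above). It mixes in only the pre-master secret and the two random nonces -- all values that Mallory can force to be equal on her two connections:

def master_secret(pms, client_random, server_random):
    # classic TLS derivation: MS depends only on (pms, cr, sr)
    return prf_sha256(pms, b"master secret", client_random + server_random, 48)

Notice what's missing: nothing about the server's certificate, or any other handshake message, ever enters the computation.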

Fortunately, the above attack does not let us resurrect the Ray/Dispensa injection attack. While the attacker has tricked the client into using a specific MS value, the handshake Finished messages -- which the client will attach to the renegotiation handshake -- will not be the same in both handshakes. That's because (among other things) the certificates sent on each connection were very different, hence the handshake hashes are not identical. In theory we're safe.

But here is where TLS gets awesome.

You see, there is yet another handshake I haven't told you about. It's called the "session resumption handshake", and it allows two parties who've previously established a master secret (and still remember it) to resume their session with new encryption keys. The advantage of resumption is that it uses no public-key cryptography or certificates at all, which is supposed to make it faster.



It turns out that if an attacker knows the previous MS and has caused it to be the same on both sides, it can now wait until the client initiates a session resumption. Then it can replay messages between the client and server in order to update both connections with new keys:

An attacker replays the session resumption handshake to ensure the same key on both sides. Note that the handshake messages are identical in both connections. (authors of source)
Which brings us to the neat thing about this handshake. Not only is the MS the same on both connections, but both connections now see exactly the same (resumption) handshake messages. Hence the hash of these handshakes will be identical, which means in turn that their "Finished" message will be identical.

By combining all of these tricks, a clever attacker can pull off the following -- and absolutely insane -- "triple handshake" injection attack:

Triple handshake attack. The attacker mediates two handshakes that give the same MS on both sides, but two different handshake hashes. The resumption handshake leaves the same MS and an identical handshake hash on both sides. This means that the Finished message from the resumption handshake will be the same for the connections on either side of the attacker. Now he can hook up the two without anyone noticing that he previously injected traffic.
In the above scenario, an attacker first runs a (standard) handshake to force both sides of the connection to use the same MS. It then causes both sides to perform session resumption, which results in both sides using the same MS and seeing identical handshake hashes and Finished messages. When the server initiates renegotiation, the attacker can forward the third (renegotiation) handshake on to the legitimate client as in the Ray/Dispensa attack -- secure in the knowledge that both client and server will expect the same Finished token.

And that's the ballgame.

What's the fix?

There are several, and you can read about them here.

One proposed fix is to change the derivation of the Master Secret such that it includes the handshake hash. This should wipe out most of the attacks above. Another fix is to bind the "session resumption" handshake to the original handshake that led to it.
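
For a rough sense of the first fix, here's a sketch (again reusing the prf_sha256 helper above) of a derivation that folds a hash of the handshake messages -- a 'session hash' -- into the Master Secret. This is the idea behind the proposed 'extended master secret' extension; the label and details below are illustrative, not normative:

import hashlib

def master_secret_extended(pms, handshake_messages):
    # fold the full handshake transcript into the derivation, so two
    # handshakes with different certificates can't share a Master Secret
    session_hash = hashlib.sha256(handshake_messages).digest()
    return prf_sha256(pms, b"extended master secret", session_hash, 48)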

Wait, why should I care about injection attacks?

You probably don't, unless you happen to be one of the critical applications that relies on the client authentication and renegotiation features of TLS. In that case, like most applications, you probably assumed that a TLS connection opened with a remote user was actually from that user the whole time, and not from two different users.

If you -- like most applications -- made that assumption, you might also forget to treat the early part of the connection (prior to client authentication) as a completely untrusted bunch of crap. And then you'd be in a world of hurt.

But don't take my word for it. There's video! (See here for the source, background and details):

video


What does this have to do with the provable security of TLS?

Of all the questions 3Shake raises, this one is the most interesting. As I mentioned earlier, there have been several recent works that purport to prove things about the security of TLS. They're all quite good, so don't take any of this as criticism.

However, they (with one exception, the miTLS project) didn't find this attack. Why is that?

The first reason is simple: many of these works analyze only the basic TLS handshake, or they omit at least one of the possible handshakes (e.g., resumption). This means they don't catch the subtle interactions between the resumption handshake, the renegotiation handshake, and extensions -- all of which are the exact ingredients that make most TLS attacks possible.

The second problem is that we don't quite know what standard we're holding TLS to. For example, the common definition of security for TLS is called "Authenticated and Confidential Channel Establishment" (ACCE). Roughly speaking this ensures that two parties can establish a channel and that nobody will be able to determine what data is being sent over said channel.

The problem with ACCE is that it's a definition that was developed specifically so that TLS could satisfy it. As a result, it's necessarily weak. For example, ACCE does not actually require that each handshake produces a unique Master Secret -- one of the flaws that enables this attack -- because such a definition was not possible to achieve with the existing TLS protocol. In general this is what happens when you design a protocol first and prove things about it later.

What's the future for TLS? Can't we throw the whole thing out and start over again?

Sure, go ahead and make TLS Rev 2. It can strip out all of this nonsense and start fresh.

But before you get cocky, remember -- all these crazy features in TLS were put there for a reason. Someone wanted and demanded them. And sadly, this is the difference between a successful, widely-used protocol and your protocol.

Your new replacement for TLS might be simple and wonderful today, but that's only because nobody uses it. Get it out into the wild and before long it too will be every bit as crazy as TLS.

Notes:

* An earlier version of this post incorrectly identified the researchers who discovered the attack.

** The Diffie-Hellman (DHE) version is somewhat more clever. It relies on the attacker manipulating the D-H parameters such that they will force the client to use a particular key. Since DHE parameters sent down from the server are usually 'trusted' by TLS implementations, this trick is relatively easy to pull off.

Tuesday, April 8, 2014

Attack of the week: OpenSSL Heartbleed

Ouch. (Logo from heartbleed.com)
I start every lecture in my security class by asking the students to give us any interesting security or crypto news they've seen recently, preferably with a focus on vulnerabilities. The start of my last class was pretty lame, which meant either (1) we'd finally learned how to make our crypto software secure, or (2) something terrible was about to happen.

I'll let you take a guess which one it was.

The news today is all about the Heartbleed bug in OpenSSL (nb: a truly awful name that makes me glad there are no other vital organs referenced in the TLS code). Unlike all the fancy crypto-related attacks we've seen recently -- BEAST, CRIME, Lucky13, etc. -- Heartbleed doesn't require any interesting crypto. In fact it's the result of a relatively mundane coding error. And predictably, this makes it more devastating than all of those fancy attacks put together.

I'm going to talk about Heartbleed using the 'fun' question/answer format I save for this kind of post.

What's Heartbleed and why should I care about OpenSSL?

In case you haven't read the Heartbleed website, go do that. Here I'll just give a quick overview.

Heartbleed is a surprisingly small bug in a piece of logic that relates to OpenSSL's implementation of the TLS 'heartbeat' mechanism. The bug is present in OpenSSL versions 1.0.1 through 1.0.1f (and not in other versions). Sadly, these versions have seen a great deal of adoption lately, because security professionals have been urging developers to implement more recent versions of TLS (1.1 and 1.2). Deployment of TLS 1.2 currently stands at about 30% of the SSL Pulse data set. Many of those servers are likely vulnerable.

The problem is fairly simple: there's a tiny vulnerability -- a simple missing bounds check -- in the code that handles TLS 'heartbeat' messages. By abusing this mechanism, an attacker can request that a running TLS server hand over a relatively large slice (up to 64KB) of its private memory space. Since this is the same memory space where OpenSSL also stores the server's private key material, an attacker can potentially obtain (a) long-term server private keys, (b) TLS session keys, (c) confidential data like passwords, (d) session ticket keys.

Alleged Yahoo user credentials visible due to Heartbleed (source: Mark Loman).
Any of the above may allow an attacker to decrypt ongoing TLS sessions or steal useful information. However item (a) above is by far the worst, since an attacker who obtains the server's main private keys can potentially decrypt past sessions (if made using the non-PFS RSA handshake) or impersonate the server going forward. Worst of all, the exploit leaves no trace.

You should care about this because -- whether you realize it or not -- a hell of a lot of the security infrastructure you rely on is dependent in some way on OpenSSL. This includes many of the websites that store your personal information. And for better or for worse, industry's reliance on OpenSSL is only increasing.

What's the remedy?

Unfortunately it's pretty nasty.

You can test if a given server is vulnerable using one of these tools (note: use at your own risk). Having identified a problem, the first step is to patch OpenSSL. Fortunately this is relatively easy. The 1.0.1g version is not vulnerable, and Debian has a patch. You can also recompile OpenSSL with the -DOPENSSL_NO_HEARTBEATS option.

Sadly, this is only the beginning. Since there's no way to tell whether a server has been exploited (and exploit code is now in the wild) you need to assume that it is. This means the safe move is to revoke your certificate and get a new one. Have fun.

What's the bug?

The TLS Heartbeat mechanism is designed to keep connections alive even when no data is being transmitted. Heartbeat messages sent by one peer contain random data and a payload length. The other peer is supposed to respond with a mirror of exactly the same data.

   When a HeartbeatRequest message is received and sending a
   HeartbeatResponse is not prohibited as described elsewhere in this
   document, the receiver MUST send a corresponding HeartbeatResponse
   message carrying an exact copy of the payload of the received
   HeartbeatRequest.

The data structure of the request looks like this. Note the two-byte payload length:

   struct {
      HeartbeatMessageType type;
      uint16 payload_length;
      opaque payload[HeartbeatMessage.payload_length];
      opaque padding[padding_length]; 
   } HeartbeatMessage;

Which brings us to the bug. The original bug was introduced in this Git commit. The code appears in different files (for DTLS and TLS). Here we'll look at the file t1_lib.c, starting around line 2412:

        /* Read type and payload length first */
        hbtype = *p++;
        n2s(p, payload);
        pl = p;

        if (s->msg_callback)
                s->msg_callback(0, s->version, TLS1_RT_HEARTBEAT,
                        &s->s3->rrec.data[0], s->s3->rrec.length,
                        s, s->msg_callback_arg);

        if (hbtype == TLS1_HB_REQUEST)
                {
                unsigned char *buffer, *bp;
                int r;

                /* Allocate memory for the response, size is 1 bytes
                 * message type, plus 2 bytes payload length, plus
                 * payload, plus padding
                 */
                buffer = OPENSSL_malloc(1 + 2 + payload + padding);
                bp = buffer;

                /* Enter response type, length and copy payload */
                *bp++ = TLS1_HB_RESPONSE;
                s2n(payload, bp);
                memcpy(bp, pl, payload);

                r = ssl3_write_bytes(s, TLS1_RT_HEARTBEAT, buffer, 3 + payload + padding);

As you can see, the incoming (adversarially-generated) data contains a payload length ("payload") which is trusted without any bounds check. OpenSSL then allocates a buffer for its response, and copies "payload" bytes from the pointer "pl" into it. Unfortunately, there's no check that the record actually contains "payload" bytes of data, so the copy runs off the end of the request and into adjacent heap memory. Hence the attacker gets a slice of data from main memory -- one that's up to 64KB in length.
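
If you'd like to see the shape of the flaw without wading through OpenSSL internals, here's a toy Python simulation. It is emphatically not OpenSSL's code, and the 'memory' layout is contrived, but it shows how trusting the attacker-supplied length field lets the response run past the real payload:

import struct

SECRET_NEIGHBORING_MEMORY = b"...private keys, passwords, session tickets..."

def handle_heartbeat(record):
    # record = 1-byte type || 2-byte payload_length || payload || padding
    hbtype = record[0]
    (payload_length,) = struct.unpack(">H", record[1:3])
    # Pretend the request sits on the heap right next to unrelated secrets.
    memory = record[3:] + SECRET_NEIGHBORING_MEMORY
    # Vulnerable behavior: no check that the record really contains
    # payload_length bytes before copying.
    return memory[:payload_length]

# A heartbeat that claims 4000 bytes of payload but actually carries 4.
evil = bytes([0x01]) + struct.pack(">H", 4000) + b"ping"
print(handle_heartbeat(evil))  # echoes "ping" plus whatever sits next to it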

The fix is equally simple. Just add a bounds check:

+    /* Read type and payload length first */
+    if (1 + 2 + 16 > s->s3->rrec.length)
+        return 0; /* silently discard */
+    hbtype = *p++;
+    n2s(p, payload);
+    if (1 + 2 + payload + 16 > s->s3->rrec.length)
+        return 0; /* silently discard per RFC 6520 sec. 4 */
+    pl = p;
+

Wasn't that dull?

Come on guys, next time please: use a cache timing weakness in AES-GCM encryption. This whole 'effectively breaking OpenSSL by finding simple C coding bugs' isn't even challenging.

Should we rail against the OpenSSL developers for this?

Don't even think about it. The OpenSSL team, which is surprisingly small, has been given the task of maintaining the world's most popular TLS library. It's a hard job with essentially no pay. It involves taking other folks' code (as in the case of Heartbeat) and doing a best-possible job of reviewing it. Then you hope others will notice any remaining bugs and disclose them responsibly before disasters happen.

The OpenSSL developers have a pretty amazing record considering the amount of use this library gets, the quantity of legacy cruft, and the number of platforms (over eighty!) they have to support. Maybe in the midst of patching their servers, some of the big companies that use OpenSSL will think of tossing them some real no-strings-attached funding so they can keep doing their job.

Wednesday, March 19, 2014

How do you know if an RNG is working?

No matter how much cryptographers accomplish, we're always
building on a questionable foundation. (illustration: Marc S. Rousseau)
Last week, Edward Snowden spoke to a packed crowd at SXSW about the many problems (and limited solutions) facing those of us who want to keep our communications private. Snowden said a number of things -- including a shout out to Moxie's company Whisper Systems, who certainly deserve it. But instead of talking about that, I wanted to focus on (in my opinion) one of Snowden's most important quotes:
We need all those brilliant Belgian cryptographers to go "alright we know that these encryption algorithms we are using today work, typically it is the random number generators that are attacked as opposed to the encryption algorithms themselves. How can we make them [secure], how can we test them?"
Now it's possible I'm a little biased, but it seems to me this cuts to the core of our problems with building secure systems in an increasingly hostile world. Namely: most encryption relies on some source of "random" numbers, either to generate keys or (particularly in the case of public key encryption) to provide semantic security for our ciphertexts.

What this means is that an attacker who can predict the output of your RNG -- perhaps by taking advantage of a bug, or even compromising it at a design level -- can often completely decrypt your communications. The Debian project learned this firsthand, as have many others. This certainly hasn't escaped NSA's notice, if the allegations regarding its Dual EC random number generator are true.

All of this brings us back to Snowden's quote above, and the question he throws open for us. How do you know that an RNG is working? What kind of tests can we run on our code to avoid flaws ranging from the idiotic to the highly malicious? Unfortunately this question does not have an easy answer. In the rest of this post I'm going to try to explain why.

Background: Random and Pseudorandom Number Generation

I've written quite a bit about random number generation on this blog, but before we go forward it's worth summarizing a few basic facts about random number generation.

First off, the 'random' numbers we use in most deployed cryptographic systems actually come from two different systems:
  1. A 'true' random number generator (or entropy generator) that collects entropy from the physical world. This can include entropy collected from low-level physical effects like thermal noise and shot noise, or it can include goofy stuff like mouse movements and hard disk seek times.
  2. An algorithmic 'pseudorandom number generator' (PRNG) that typically processes the output of (1) to both stretch the output to provide more bits and, in some cases, provide additional security protections in case the output of (1) proves to be biased.
It's important to note that pseudorandom number generators aren't "random number generators" at all. These generators typically use cryptographic algorithms (e.g., block ciphers or hash functions) to process a seed value from the RNG into many apparently random looking and unpredictable bytes.
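
As a concrete -- and deliberately toy -- illustration of that stretching step, here's a hash-counter construction in Python. Real designs (Yarrow, Fortuna, the NIST DRBGs) are far more careful about reseeding and state compromise, so treat this purely as a sketch:

import hashlib

def toy_prng(seed, nbytes):
    # stretch one seed from the entropy source into many output bytes
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

stream = toy_prng(b"entropy collected from the physical world", 1024)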

In most cases, it's quite rare for your application to ever see the raw output of a true random number generator.* Even the low-level entropy collector within Linux's RNG uses cryptographic constructs like hash functions in order to 'mix' the output of various entropy sources. To produce the bits provided by /dev/random or /dev/urandom, Linux then seeds a PRNG like Yarrow or Fortuna.**

Another similar pattern occurs inside of the Intel "secure key" random number generator included in Intel Ivy Bridge processors. When you buy one of these processors, you're getting (free!) a hardware '1-shot' circuit that collects high-entropy electronic noise, which is then measured and processed into useful RNG output. The design looks like this:

Hardware random number generator used on Intel Ivy Bridge processors. Left: the '1-shot' circuit used to collect physical entropy. Right: the data flow from generator to output, including health checks and
PRNG computation. (source).
Once again, with Intel's design you (i.e., the application developer) don't get access to this raw randomness. It's first used to seed a PRNG based on AES (CTR-DRBG from NIST SP800-90A). What you actually get as an application developer is the processed output of that algorithm.

In practice this typical design has some implications. On the positive side, the presence of a PRNG means that the underlying RNG circuit can get pretty borked (e.g., biased) without the results being detectable by your application. On the negative side, the underlying RNG circuit can get pretty borked without the results being detectable in your application.

In other words, with only a few ugly glitches -- things that can happen in real life -- you can easily get a broken random number generator that nobody notices until it's way too late. And that's without deliberate tampering, which makes things way, way worse.

Which brings us back to our fundamental question: how do systems know that their RNG is working? This turns out to be a question without a perfect answer.

Statistical Tests

If you look at the literature on random number generators, you'll find a lot of references to statistical randomness testing suites like Diehard or NIST's SP 800-22. The gist of these systems is that they look at the output of an RNG and run tests to determine whether the output is, from a statistical perspective, "good enough" for government work (very literally, in the case of the NIST suite).

The nature of these tests varies. Some look at simple factors like bias (the number of 1s and 0s) while others look for more sophisticated features such as the distribution of numbers when mapped into 3-D space.
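
For a flavor of the simplest kind of check, here's a monobit (frequency) test in the spirit of those suites: count the ones and zeros and flag badly biased output. The threshold below is made up for illustration; the real suites compute proper p-values:

def monobit_ok(data, tolerance=0.01):
    # flag output whose fraction of 1 bits strays too far from 1/2
    bits = len(data) * 8
    ones = sum(bin(byte).count("1") for byte in data)
    return abs(ones / bits - 0.5) < tolerance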

Now I don't want to knock these tests. They're a perfectly valid way to detect serious flaws in a (true) RNG -- I can attest to this, since I've built one that failed the tests miserably -- but they probably won't detect flaws in your system. That's because, as I said above, most deployed systems include a combination of RNG and PRNG, or even RNG plus "conditioning" via cryptographic hash functions or ciphers. The nature of these cryptographic, algorithmic processes is such that virtually every processed output will pass statistical tests with flying colors -- even if the PRNG is initialized with 'garbage' input.

This means, unfortunately, that it can be very hard to use statistical tests to detect a broken RNG unless you apply them directly to the raw, low-level generator. And even there you won't rule out intentional backdoors -- as I'll discuss in a moment.

Known Answer Tests (KATs)

Assuming that you've tested your true RNG properly and it's passing all tests, it's still important to test your PRNG. One approach to doing this is to use Known Answer Tests (KATs) that are essentially test vectors. These contain some input seed material as well as a set of output bytes that should be the algorithmic result of running the PRNG on that seed.
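
In code, a KAT is about as simple as testing gets: run the algorithm on a fixed input and compare against a hard-coded expected value. The sketch below does this for a hash primitive, using the well-known SHA-256 digest of the empty string; a PRNG KAT looks the same, just with a fixed seed and a stored output vector:

import hashlib

# well-known test vector: SHA-256 of the empty string
SHA256_EMPTY = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def primitive_self_test():
    return hashlib.sha256(b"").hexdigest() == SHA256_EMPTY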

Since PRNGs are purely algorithmic, the theory here is that you can test them like algorithms. While this approach is valid, it raises two potential issues (both of which I've seen in practice).

First, you can only test your PRNG on so many points. Thus it's quite possible that your PRNG will succeed on one particular test vector (i.e., it'll output just so many valid bytes) but go completely off the reservation on some other input. This is unlikely, but not impossible in normal conditions. It's very possible if someone is trying to build a malicious backdoor into your PRNG implementation.

Second, the process of instrumenting your PRNG implementation for testing can actually introduce vulnerabilities in your deployed system! Think about this for a second. Normal PRNGs take in real random seeds from your RNG. The last thing you'd ever want to do is run your PRNG on some predictable seed -- if you did, everyone would be able to predict the PRNG's outputs. Yet adding a test harness to your system means building in logic to re-seed your RNG to something predictable!

This is like adding an ejection seat to your car. Might make you safer -- unless it goes off while you're driving to work.

A quick glance through, e.g., the OpenSSL code shows that indeed, exactly this sort of code exists and ships in some versions of the library. Of course, experienced developers will note that surely such features could be surrounded by pre-processor directives (or the equivalent in your language of choice), ensuring that they'll never be activated in production code. Sadly, at least in the case of FIPS, this is not possible -- for reasons I'll explain next.

Runtime Health Checks

Another approach to testing RNGs is to test them while the system is running. This isn't intended to rule out design-level flaws (as the above statistical and KAT tests are) but it is intended to catch situations where the RNG becomes broken during normal operation. This can occur for a variety of reasons, e.g., manufacturing defects, system damage, and even exposure to outside radiation.

Health checks can take different forms. FIPS 140, for example, mandates that all approved RNGs be tested at startup time using KATs. (This is why you can't make your test harness conditional on compilation flags -- it must ship in your production code!) It subsequently mandates a runtime health check that verifies the generator has not become 'stuck' -- i.e., started spitting out the same bytes over and over again.
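
Here's roughly what such a 'stuck output' check looks like, modeled loosely on the FIPS 140-2 continuous test: remember the last block and fail hard if the next one repeats it.

class ContinuousTestRNG:
    def __init__(self, read_block):
        self.read_block = read_block   # callable returning fixed-size blocks
        self.last = read_block()       # first block is held back for comparison

    def next_block(self):
        block = self.read_block()
        if block == self.last:
            raise RuntimeError("RNG stuck: repeated output block")
        self.last = block
        return block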

While I'm sure this last test may have saved someone, somewhere, it seems totally inappropriate and useless when applied to the output of an RNG/PRNG pair, which is how NIST recommends it be used. This is because even the most broken algorithmic PRNGs will almost never spit out duplicate values -- even if the underlying RNG fails completely.

The upshot of this decision is that NIST (FIPS) recommends a check that will almost never succeed in catching anything useful from a PRNG, but does introduce a whole bunch of extra logic that can suffer from flaws and/or malicious circumvention. I'm sure the good folks at NIST realize this, but they recommend it anyway -- after all, what else are they going to do?

Tampering

Which brings us to the $10 million question. What happens if an attacker is deliberately tampering with our RNG/PRNG in order to make it fail? Note that this is not an academic question. We have excellent reason to believe it's happened in some real systems.

Just for fun, let's go back to the Intel Ivy Bridge RNG described above. We'll take a look specifically at the PRNG portion of the design, which uses the NIST CTR-DRBG random number generator with AES:

Portion of the Intel Ivy Bridge design, with a few annotations added by yours truly. (original source)
The CTR-DRBG design relies on two features. First, an AES key is selected at random along with some input seed. This pair goes into the AES cipher, where it is processed to derive a new key and data. The result should be unpredictable to most attackers.

But if you were able to change the way keys were updated (in the key_in_mux highlighted above) so that instead of updating the key and/or using an unpredictable one, it chose a fixed key known to the attacker, you would now have a very powerful backdoor. Specifically, the output would still look statistically perfectly random. But an attacker who knows this key could simply decrypt one block of RNG output to obtain all future and past outputs of the generator until the next time it was reseeded.
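
To see how devastating that would be, here's a stripped-down sketch (using the third-party 'cryptography' package, and nothing like Intel's actual circuit) of a CTR-DRBG-style generator whose key has been fixed. One observed output block is enough for the attacker to recover the counter and predict every later block:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ecb(key, block, decrypt=False):
    cipher = Cipher(algorithms.AES(key), modes.ECB())
    op = cipher.decryptor() if decrypt else cipher.encryptor()
    return op.update(block) + op.finalize()

def drbg_block(key, counter):
    # one CTR-DRBG-style output block: encrypt the incremented counter
    return aes_ecb(key, counter.to_bytes(16, "big"))

BACKDOOR_KEY = b"\x00" * 16               # hypothetical fixed key known to the attacker
v = int.from_bytes(os.urandom(15), "big") # internal counter, never exposed

out1 = drbg_block(BACKDOOR_KEY, v + 1)    # an output the victim hands out (say, a nonce)
out2 = drbg_block(BACKDOOR_KEY, v + 2)    # the 'unpredictable' next output

# Attacker: decrypt one observed block, recover the counter, run it forward.
recovered = int.from_bytes(aes_ecb(BACKDOOR_KEY, out1, decrypt=True), "big")
assert drbg_block(BACKDOOR_KEY, recovered + 1) == out2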

Note that I am not saying the Intel system has a backdoor in it -- far from it. I'm only considering how easily it might be made to have one if you were an attacker with control of Intel's fabrication plants (or their microcode updates). And this is hardly Intel's fault. It's just the nature of this particular RNG design. Others could be just as vulnerable.

Actually using this knowledge to attack applications would be more complex, since many system-level RNGs (including the Linux Kernel RNG) combine the output of the RNG with other system entropy (through XOR, unfortunately, not hashing). But Intel has pushed hard to see their RNG output used directly, and there exist plugins for OpenSSL that allow you to use it similarly. If you used such a method, these hypothetical flaws could easily make their way all the way into your cryptography.

Designing against these issues

Unfortunately, so far all I've done is call out the challenges with building trustworthy RNGs. And there's a reason for this: the challenges are easy to identify, while the solutions themselves are hard. And unfortunately at this time, they're quite manual.

Building secure RNG/PRNGs still requires a combination of design expertise, careful low-level (true) RNG testing -- using expert design and statistical tests -- and the use of certified algorithms with proper tests. All of the techniques above contribute to building a secure RNG, but none of them are quite sufficient.

Solving this problem, at least in software, so we can ensure that code is correct and does not contain hidden 'easter eggs', represents one of the more significant research challenges facing those of us who depend on secure cryptographic primitives. I do hope some enterprising graduate students will give these issues the attention they deserve.

Notes:

* Though there are some exceptions. See, for example, this FIPS certified smart card that included a bad RNG which was used to generate cryptographic secrets. In general FIPS disallows this except for a very small number of approved RNGs. Perhaps this was one.

** The original version of this post claimed that /dev/random seeds /dev/urandom. This is actually a mistake -- both /dev/random and /dev/urandom use the same PRNG, but /dev/random simply keeps track of how much 'entropy' is in the pool and blocks when you have drawn too many bits. Thanks to Brendan Long and Thomas Ptacek for setting me straight.

Thursday, February 20, 2014

Cryptographic obfuscation and 'unhackable' software

I have a thing for over-the-top cryptography headlines -- mostly because I enjoy watching steam come out of researchers' ears when their work gets totally misrepresented. And although I've seen quite a few good ones, last week WIRED managed a doozy.

The headline in question, Cryptography Breakthrough Could Make Software Unhackable, managed to accomplish something that few cryptography headlines do. It sent its own protagonist, Amit Sahai, into the comments section to perform intellectual garbage pickup.

In contrast to the headline, which is quite bad, the article is actually pretty decent. Still, the discussion around it has definitely led to some confusion. As a result, many now think an amazing breakthrough has taken place -- one that will finally make software secure. They're probably right about the first part. They may be disappointed about the rest.

The truth, as usual, is complicated. There is, indeed, something very neat going on with the new obfuscation results. They're just not likely to make software 'unhackable' anytime soon. They might, however, radically expand what we can do with cryptography. Someday. When they're ready. And in ways we don't fully understand yet.

But before I go into all that, it's probably helpful to give some background.

Program obfuscation

The Wired article deals with the subject of 'program obfuscation', which is a term that software developers and cryptographers have long been interested in. The motivation here is pretty simple: find a way that we can give people programs they can run -- without letting them figure out how the programs work.

Note that the last part necessarily covers a lot of ground. In principle it includes aspects ranging from the nature of specific secret algorithms used -- which may be proprietary and thus worth money -- to secret information like passwords and cryptographic keys that might be hardcoded into the program.

For a simple example, consider the following routine:

// Check a password and print out top secret information if it's correct
//
SuperSecretPasswordProtectedStuff(string passwd) {
  if (passwd == "0xt438fh27266629zn28366492923aai3jnqobbyc4t!") {
    print("Congratulations. Here's some super secret private information: ....\n");
  } else {
    print("Wrong password, fool.\n");
  }
}

If you're like me, you probably wrote a program like this at some point in your life. You may have eventually realized how ineffective it would be -- against a smart attacker who could dump the program code. This is because most programs (and even compiled binaries) are pretty easy to read. An attacker could just look at this program to recover the secret password.

Program obfuscation is motivated by the idea that many useful programs would benefit if we could somehow 'stop' people from doing this, while still letting them possess and run the code on their own computers.

In real world software systems, 'obfuscation' usually refers to a collection of ad-hoc techniques that turn nice, sensible programs into a morass of GOTOs and spaghetti code. Sometimes important constants are chopped up and distributed around the code. Some portions of the code may even be encrypted -- though only temporarily, since decryption keys must ship with the program so it can actually be run. Malware authors and DRM folks love this kind of obfuscation.
A chunk of 'birken', one of the winning entries in the 2013 obfuscated C contest.
While these techniques are nice, they're not really 'unhackable' or even secure against reverse-engineering in the strong sense we'd like. Given enough time, brains and tools, you can get past most common software obfuscation techniques. If you have, say, Charlie Miller or Dion Blazakis on your side, you can probably do it quickly. This isn't just because those guys are smart (though they are), it's because the techniques are not mathematically rigorous. Most practical obfuscation amounts to roadblocks -- designed to slow reverse-engineers down and make them give up.

So what does it mean to securely 'obfuscate' a program?

The poor quality of existing software obfuscation set cryptographers up with a neat problem. Specifically, they asked, could we define a strong definition of program obfuscation that would improve on, to put it politely, the crap people were actually using? Given such an obfuscator I could hand you my obfuscated program to run while provably protecting all partial information -- except for the legitimate inputs and outputs.

If this could be done, it would have an incredible number of applications. Stock traders could obfuscate their proprietary trading algorithms and send them to the cloud or to end-customers. The resulting programs would still work -- in the sense that they produced the right results -- but customers would never learn anything about 'how they worked'. The internals of the program would be secret sauce.

Unfortunately, when the first researchers started looking at this problem, they ran into a very serious problem: nobody had any idea what it meant to securely 'obfuscate' a program in the first place.

Do think about it for a second before you roll your eyes. What do you think it means to obfuscate a program? You're probably thinking something like 'people shouldn't learn stuff' about the program. But can you explain what stuff? And does 'stuff' depend on the program? What about programs where you can efficiently learn the whole program from sending it inputs and seeing the results? What does it mean to obfuscate those?

Clearly before progress could begin on solving the problem, cryptographers needed to sit down and figure out what they were trying to do. And indeed, several cryptographers immediately began to do exactly this.

Black-box cryptographic obfuscation

The first definitions cryptographers came up with addressed a very powerful type of obfuscation called 'virtual black box obfuscation'. Roughly speaking, it starts from the following intuitive thought experiment.

Imagine you have a program P, as well as some code obfuscation technique you've developed. It's important that the obfuscation be efficient, meaning it doesn't slow the program down too much. To determine whether the obfuscation is successful, you can conduct the following experiment.
  1. Give Alice a copy of the obfuscated program code Obf(P), in a form that she can look at and run on her own computer.               
  2. Take the original (unobfuscated) program P, and seal it in a special computer located inside an armored 'black box'. Let a user Sam interact with the program by sending it inputs and receiving outputs. But don't let him access the code.
Roughly speaking, what we want from secure obfuscation is that Alice should learn 'no more' information after seeing the obfuscated program code, than could some corresponding user Sam who interacts with the same program locked up in the black box.

What's nice here is that we have the beginnings of an intuitive definition. In some sense the obfuscation should render the program itself basically unintelligible -- Alice should not get any more information from seeing the obfuscated program than Sam could get simply by interacting with its input/output interface. If Sam can 'learn' how the program works just by talking to it, even that's ok. What's not ok is for Alice to learn more than Sam.

The problem with this intuition is, of course, that it's hard to formulate. One thing you may have noticed is that the user Alice who views the obfuscated program truly does learn something more than the user Sam, who only interacts with it via the black box. When the experiment is over, the user who got the obfuscated program still has a copy of the obfuscated program. The user who interacted with the black box does not.

To give a practical example, let's say this program P is a certificate signing program that has a digital signing key hard-coded inside of it -- say, Trustwave's -- that will happily attach valid digital signatures to certificates (CSRs) of the user's devising. Both Alice and Sam can formulate certificate signing requests and send them into the program (either the obfuscated copy or the version in the 'black box') and get valid, signed certificates out the other end.

But here's the thing: when the experiment is over and we shut down Sam's access to the black box, he won't be able to sign any more certificates. On the other hand, Alice, who actually learned the obfuscated program Obf(P), will still have the program! This means she can keep signing certificates forevermore. Knowing a program that can sign arbitrary certificates is a pretty darn significant 'piece' of information for Alice to have!

Worse, there's no way the 'black box' user Sam can ever learn a similar piece of information -- at least not if the signature scheme is any good. If Sam could ask the black box for a bunch of signatures, then somehow learn a program that could make more signatures, this would imply a fundamental weakness in the digital signature scheme. This cuts against a basic property we expect of secure signatures.

Barak et al. proposed a clever way to get around this problem -- rather than outputting an arbitrary piece of information, Alice and Sam would be restricted to outputting a single bit at the end of their experiment (i.e., computing a predicate function). This helps to avoid the problems above and seems generally like the "right" definition.

An impossibility result

Having proposed this nice definition, Barak et al. went on to do an irritating thing that cryptographers sometimes do: they proved that even their definition doesn't work. Specifically, they showed that there exist programs that simply can't be obfuscated under this definition.

The reason is a little wonky, but I'm going to try to give the flavor for it below.

Imagine that you have two programs A and B, where A is similar to our password program above. That is, it contains a hard-coded, cryptographic secret which we'll denote by password. When you run A(x), the program checks whether (x == password) and if so, it outputs a second cryptographically strong password, which we'll cleverly denote by password_two. If you run A on any other input, it doesn't output anything.

Now imagine the second program B works similarly to the first one. It contains both password and password_two hardcoded within it. A major difference is that B doesn't take a string as input. It takes another computer program. You feed a program in as input to B, which then:
  1. Executes the given program on input password to get a result r
  2. If (r == password_two), it outputs a secret bit.
It should be apparent to you that if you run the program B on input the program A -- that is, you compute B(A) -- it will always output the secret bit.* It doesn't even matter if either B or A has been obfuscated, since obfuscation doesn't change the behavior of the programs.
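
Here's a toy rendering of that argument in Python (with laughably weak stand-in 'passwords', purely for illustration). Anyone holding the code for A can hand it to B and learn the secret bit; a black-box user would first have to guess password:

PASSWORD = "stand-in-for-a-strong-secret"
PASSWORD_TWO = "stand-in-for-another-strong-secret"
SECRET_BIT = 1

def A(x):
    return PASSWORD_TWO if x == PASSWORD else None

def B(program):
    # run the supplied program on 'password' and check what comes back
    return SECRET_BIT if program(PASSWORD) == PASSWORD_TWO else None

print(B(A))  # always prints the secret bit, obfuscated or not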

At the same time, Barak et al. pointed out that this trick only works if you actually have code for the program A. If I give you access to a black box that contains the programs A, B and will simply let you query them on chosen inputs, you're screwed. As long as password and password_two are cryptographically strong, you won't be able to get A to output anything in any reasonable amount of time, and hence won't be able to get B to output the secret bit.

What this means is that Alice, who gets the obfuscated copies of both programs, will always learn the secret bit. But if Sam is a reasonable user (that is, he can't run long enough to brute-force the passwords) he won't be able to learn the secret bit. Fundamentally, the obfuscated pair of programs always gives more information than black-box access to them.

It remains only to show that the two programs can be combined into one program that still can't be obfuscated. And this is what Barak et al. did, completing their work and showing that general programs can't be obfuscated using their definition.

Now you can be forgiven for looking askance at this. You might, for example, point out that the example above only shows that it's hard to obfuscate a specific and mildly ridiculous program. Or that this program is some kind of weird exception. But in general it's not. There are many other useful programs that also can't be obfuscated, particularly when you try to use them in real systems. These limitations were quickly pointed out by Barak et al., and later, in other practical settings, by Goldwasser and Kalai, among others.

You might also think that obfuscation is plain impossible. But here cryptography strikes again -- it's never quite that simple.

We can obfuscate some things!

Before we completely write off black box obfuscation, let's take a moment to go back to the password checking program I showed at the beginning of this post.

Here's a slightly simpler version:

// Check a password and return true if it's correct, false otherwise
//
bool SuperSecretPasswordProtectedStuff(string passwd) {
  if (passwd == "0xt438fh27266629zn28366492923aai3jnqobbyc4t!") {
    return true;
  } else {
    return false;
  }
}

This function is an example of a 'point function': that is, a function that returns false on most inputs, but returns true at exactly one point. As you can see, there is exactly one point (the correct password) that makes this routine happy.

Point functions are interesting for two simple reasons: the first is that we use them all the time in real systems -- password checking programs being the obvious example. The second is that it turns out we can obfuscate them, at least under some strong assumptions. Even better, we can do it in a way that should be pretty familiar to real system designers.

Let H be a secure hash function -- we'll get back to what that means in a second -- and consider the following obfuscated password checking routine:

// Check a password and return 'true' if it's correct, 'false' otherwise
// But do it all *obfuscated* and stuff
//
bool ObfuscatedSuperSecretPasswordProtectedStuff(string passwd) {
  static string HARDCODED_SALT          = 0x......; // this is a salt value
  static string HARDCODED_PASSWORD_HASH = 0x......; // this value is H(salt + target password)

  // First, hash the input password with the salt
  hashedPass = H(HARDCODED_SALT + passwd);

  if (hashedPass == HARDCODED_PASSWORD_HASH) {
    return true;
  } else {
    return false;
  }
}

Note that our 'obfuscated' program no longer stores the plaintext of the password. Instead we store its hash only (and salt), and wrap it inside a program that simply compares this hash to a hash of the user's input. This should be familiar, since it's the way you should be storing password files today.**

A number of cryptographers looked at formulations like this and showed the following: if the password is very hard to guess (for example, it's secret and drawn at random from an exponentially-sized space of passwords) and the hash function is 'strong' enough, then the hash-checking program counts as a secure 'obfuscation' of the basic password comparison program above it.

The intuition for this is fairly simple: imagine Alice and Sam don't know the password. If the password is strong, i.e., it's drawn from a large enough space, then their probability of ever getting the program to output 'true' is negligible. If we assume an ideal hash function -- in the simplest case, a random oracle -- then the hard-coded hash value that Alice learns is basically just a useless random string, and hides all partial information about the real password. This means, in practice, that any general question Alice can answer at the end of the experiment, Sam can answer with about the same probability.***

Finding better definitions

More than a decade after the definition was formulated, there are basically two kinds of results about 'strong' virtual-black-box obfuscation. The first set shows that it's impossible to do it for general programs, and moreover, that many of the interesting functions we want to obfuscate (like some signatures and pseudorandom functions) can't be obfuscated in this powerful sense.

The second class of results shows that we can black-box obfuscate certain functions, but only very limited ones like, say, point functions and re-encryption. These results are neat, but they're hardly going to set the world on fire.

What cryptographers have been trying to achieve since then is a different -- and necessarily weaker -- definition that could capture interesting things we wanted from obfuscating general programs, but without all the nasty impossibility results.

And that, finally, brings us to the advances described in the WIRED article.

Indistinguishability Obfuscation

In shooting down their own proposed definitions, Barak et al. also left some breadcrumbs towards a type of obfuscation that might actually work for general programs. They called their definition "indistinguishability obfuscation" (IO) and roughly speaking it says the following:

Imagine we have two programs C1, C2 -- which we'll describe as similarly-sized circuits -- that compute the same function. More concretely, let's say they have exactly the same input/output behavior, although they may be implemented very differently inside. The definition of indistinguishability obfuscation states that it should be possible to obfuscate the two circuits C1, C2 such that no efficient algorithm will be able to tell Obf(C1) from Obf(C2).
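
A trivial example of what 'same function, different implementation' means here (in Python rather than circuits):

def C1(x):
    return 2 * x

def C2(x):
    return x + x

# C1 and C2 have identical input/output behavior. Indistinguishability
# obfuscation promises that Obf(C1) and Obf(C2) can't be told apart by any
# efficient algorithm, even though the two programs look different inside.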

While this idea was proposed years ago, nobody actually knew how to build any such thing, and it was left as one of those 'open problems' that cryptographers love to tear their hair out over. This remained the case until just last year, when a group of authors from UCLA, UT Austin and IBM Research proposed a 'candidate construction' for building such obfuscators, based on the new area of multilinear map-based cryptography.

Another interesting variant of this notion is called extractability obfuscation (EO), which implies not only that you can't distinguish between Obf(C1) and Obf(C2), but moreover, that if you could distinguish the two, then you could necessarily find an input value on which C1 and C2 produce different outputs. Other work indicates that IO and EO give essentially the 'best possible' obfuscation you can provide for general programs.

The question you're probably asking is: so what? What can we do with indistinguishability obfuscation?

And this is where the WIRED article differs substantially from the reality. The truth is that IO will probably bring us major cryptographic and software advances -- it's already been shown to bring about succinct functional encryption for all circuits, as well as advances in deniable encryption. Moreover it can be used to build known forms of encryption such as public-key encryption based on new techniques that differ from what we use today -- for example, by obfuscating 'symmetric' primitives like pseudorandom functions.****

And if this doesn't seem quite as exciting as the WIRED article would imply, that's because we're still learning what the possibilities are. The new techniques could, for example, allow us to build exciting new types of security systems -- or else show us radical new ways to build systems that we already have. Even the latter is very important, in case advances in our field result in 'breaks' to our existing constructions.

What IO and EO will probably not do is make programs unhackable, since that implies something a whole lot stronger than either of these techniques currently provide. In fact, it's not clear what the heck you'd need to make programs unhackable.

And the reason for that is quite simple: even obfuscated software can still suck.

Notes:

Thanks to Susan Hohenberger and Zooko for help with this post. 

* The bit in the Barak et al. proof doesn't have to be secret.

** With a pretty damn big caveat, which is that in real life (as opposed to cryptographic examples) users pick terrible passwords, which makes your password hashes vulnerable to dictionary attacks. We have a variety of countermeasures to slow down these attacks, including salt and computationally intensive password hashing functions.

*** God I hate footnotes. But it's worth unpacking this. Imagine that Alice learns 'some information' from seeing an average program containing the hash of a strong password. If the password is strong, then Alice can only guess the right input password with negligible probability. Then Sam can 'use' Alice to get the same information as follows. He formulates a 'fake' program that contains a completely bogus password hash and mails it to Alice. Whatever information she learns from his program, he takes and outputs as the information he 'learned'. It remains to show that if the password hash is strong (a random oracle), Alice isn't going to be able to learn more from a random hash than she would from the hash of a strong password.

**** The basic idea here is that symmetric encryption (like AES, say) can't be used for public key encryption, since anyone who learns your encryption key also learns your decryption key. But if you can obfuscate an encryption program that contains your symmetric encryption key, then the new program becomes like a public key. You can hand it out to people to encrypt with.

Saturday, January 25, 2014

A letter from US security researchers

This week a group of more than fifty prominent security and cryptography researchers signed a letter protesting the mass surveillance efforts of the NSA, and attempts by NSA to weaken cryptography and privacy protections on the Internet. The full letter can be found here.

Most of you have already formed your own opinions on the issue over the past several months, and it's unlikely that one letter is going to change that. Nonetheless, I'd like a chance to explain why this statement matters.

For academic professionals in the information security field, the relationship with NSA has always been a bit complicated. However, for the most part the public side of that relationship has been generally positive. Up until 2013 if you'd asked most US security researchers for their opinions on NSA, you would, of course, have heard a range of views. But you also might have heard notes of (perhaps grudging) respect. This is because many of the NSA's public activities have been obviously in everyone's interest -- helping to fund research and secure our information systems.

Even where evidence indicated the possibility of unfair dealing, most researchers were content to dismiss these allegations as conspiracy theories. We believed the NSA would stay between the lines. Putting backdoors into US information standards was possible, of course. But would they do it? We thought nobody would be that foolish. We were wrong.

In my opinion this letter represents more than just an appeal to conscience. It measures the priceless trust and goodwill the NSA has lost -- and continues to lose while our country fails to make serious reforms to this agency.

While I'm certain the NSA itself will survive this loss of faith in the short term, in the long term our economic and electronic security depend very much on the cooperation of academia, industry and private citizens. The NSA's actions have destroyed this trust. And ironically, that makes us all less safe.