Friday, September 28, 2012

On the (provable) security of TLS: Part 2

This is the second post in a series on the provable security of TLS. If you haven't read the first part, you really should!

The goal of this series is to try to answer an age-old question that is often asked and rarely answered. Namely: is the TLS protocol provably secure?

While I find the question interesting in its own right, I hope to convince you that it's of more than academic interest. TLS is one of the fundamental security protocols on the Internet, and if it breaks lots of other things will too. Worse, it has broken -- repeatedly. Rather than simply patch and hope for the best, it would be fantastic if we could actually prove that the current specification is the right one.

Unfortunately this is easier said than done. In the first part of this series I gave an overview of the issues that crop up when you try to prove TLS secure. They come at you from all different directions, but most stem from TLS's use of ancient, archaic cryptography; gems like, for example, the ongoing use of RSA-PKCS#1v1.5 encryption fourteen years after it was shown to be insecure.

Despite these challenges, cryptographers have managed to come up with a handful of nice security results on portions of the protocol. In the previous post I discussed Jonsson and Kaliski's proof of security for the RSA-based TLS handshake. This is an important and confidence-inspiring result, given that the RSA handshake is used in almost all TLS connections.

In this post we're going to focus on a similarly reassuring finding related to the TLS record encryption protocol -- and the 'mandatory' ciphersuites used by the record protocol in TLS 1.1 and 1.2 (nb: TLS 1.0 is broken beyond redemption). What this proof tells us is that TLS's CBC mode ciphersuites are secure, assuming... well, a whole bunch of things, really.

The bad news is that the result is extremely fragile, and owes its existence more to a series of happy accidents than to any careful security design. In other words, it's just like TLS itself.

Records and handshakes 

Let's warm up with a quick refresher.

TLS is a layered protocol, with different components that each do a different job. In the previous post I mostly focused on the handshake, which is a beefed-up authenticated key agreement protocol. Although the handshake does several things, its main purpose is to negotiate a shared encryption key between a client and a server -- parties who up until this point may be complete strangers.

The handshake gets lots of attention from cryptographers because it's exciting. Public key crypto! Certificates! But really, this portion of the protocol only lasts for a moment. Once it's done, control heads over to the unglamorous record encryption layer which handles the real business of the protocol: securing application data.

Most kids don't grow up dreaming about a chance to work on the TLS record encryption layer, and that's fine -- they shouldn't have to. All the record encryption layer does is, well, encrypt stuff. In 2012 that should be about as exciting as mailing a package.

And yet TLS record encryption still manages to be a source of endless excitement! In the past year alone we've seen three critical (and exploitable!) vulnerabilities in this part of TLS. Clearly, before we can even talk about the security of record encryption, we have to figure out what's wrong with it.

Welcome to 1995
Development of the SSLv1 record encryption layer

The problem (again) is TLS's penchant for using prehistoric cryptography, usually justified on some pretty shaky 'backwards compatibility' grounds. This excuse is somewhat bogus, since the designers have actually changed the algorithms in ways that break compatibility with previous versions -- and yet retained many of the worst features of the originals.

The most widely-used ciphersuites employ a block cipher configured in CBC mode, along with a MAC to ensure record authenticity. This mode can be used with various ciphers/MAC algorithms, but encryption always involves the following steps:
  1. If both sides support TLS compression, first compress the plaintext.
  2. Next compute a MAC over the plaintext, record type, sequence number and record length. Tack the MAC onto the end of the plaintext.
  3. Pad the result with up to 256 bytes of padding, such that the padded length is a multiple of the cipher's block size. The last byte of the padding should contain the padding length (excluding this byte), and all padding bytes must also contain the same value. A padded example (with AES) might look like:

                0x MM MM MM MM MM MM MM MM MM 06 06 06 06 06 06 06
     
  4. Encrypt the padded message using CBC mode. In TLS 1.0 the last block of the previous ciphertext (called the 'residue') is used as the Initialization Vector. Both TLS 1.1 and 1.2 generate a fresh random IV for each record.
Upon decryption the receiver verifies that the padding is correctly structured, checks the MAC, and outputs an error if either condition fails. 
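To make the ordering concrete, here's a minimal Python sketch of the MAC-then-pad formatting (steps 2-3) and the receiver's padding check. This is a toy that uses HMAC-SHA256 as a stand-in MAC; the function names and exact header layout are mine, not from any real TLS implementation:

```python
import hmac
import hashlib
import struct

BLOCK = 16  # AES block size

def tls_cbc_format(mac_key, seq, rec_type, version, plaintext):
    """Apply steps 2 and 3: MAC the record data, then pad (MAC-then-pad).

    Note the MAC covers the sequence number, type, version and length,
    but NOT the padding -- that's the design wart discussed below.
    """
    header = struct.pack("!QB2sH", seq, rec_type, version, len(plaintext))
    mac = hmac.new(mac_key, header + plaintext, hashlib.sha256).digest()
    body = plaintext + mac
    # Minimal TLS-style padding: pad_len+1 bytes, each holding pad_len.
    pad_len = (BLOCK - (len(body) + 1) % BLOCK) % BLOCK
    return body + bytes([pad_len] * (pad_len + 1))

def padding_ok(padded):
    """The decryptor's check: every padding byte must equal the last byte."""
    pad_len = padded[-1]
    return (len(padded) > pad_len
            and padded[-(pad_len + 1):] == bytes([pad_len] * (pad_len + 1)))
```

Everything `tls_cbc_format` returns is then encrypted as one CBC plaintext (step 4) -- and, again, the MAC never protects those trailing padding bytes.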

To get an idea of what's wrong with the CBC ciphersuite, you can start by looking at the appropriate section of the TLS 1.2 spec -- which reads more like the warning label on a bottle of nitroglycerin than a cryptographic spec. Allow me to sum up the problems.

First, there's the compression. It's long been known that compression can leak information about the contents of a plaintext, simply by allowing the adversary to see how well it compresses. The CRIME attack recently showed how nasty this can get, but the problem is not really news. Any analysis of TLS encryption begins with the assumption that compression is turned off.

Next there's the issue of the Initialization Vectors. TLS 1.0 uses the last block of the previous ciphertext, which is absurd, insecure and -- worse -- actively exploitable by the BEAST attack. Once again, the issue has been known for years yet was mostly ignored until it was exploited.

So ok: no TLS 1.0, no compression. Is that all?

Well, we still haven't discussed the TLS MAC, which turns out to be in the wrong place -- it's applied before the message is padded and encrypted. This placement can make the protocol vulnerable to padding oracle attacks, which (amazingly) will even work across handshakes. This last fact is significant, since TLS will abort the connection (and initiate a new handshake) whenever a decryption error occurs in the record layer. It turns out that this countermeasure is not sufficient.

To deal with this, recent versions of TLS have added the following patch: they require implementers to hide the cause of each decryption failure -- i.e., make MAC errors indistinguishable from padding failures. And this isn't just a question of changing your error codes, since clever attackers can learn this information by measuring the time it takes to receive an error. From the TLS 1.2 spec:
In general, the best way to do this is to compute the MAC even if the padding is incorrect, and only then reject the packet. For instance, if the pad appears to be incorrect, the implementation might assume a zero-length pad and then compute the MAC. This leaves a small timing channel, since MAC performance depends to some extent on the size of the data fragment, but it is not believed to be large enough to be exploitable.
To sum up: TLS is insecure if your implementation leaks the cause of a decryption error, but careful implementations can avoid leaking much, although admittedly they probably will leak some -- but hopefully not enough to be exploited. Gagh!
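What the spec's advice looks like in code: a sketch of a uniform-error decryption check, using the same toy record format as above (HMAC-SHA256, invented names). Even when the padding is bad, a MAC is computed anyway and both failure modes collapse into a single error:

```python
import hmac
import hashlib

MAC_LEN = 32  # HMAC-SHA256 output size

def decrypt_record(mac_key, padded, header):
    """Return the record on success, None on ANY failure.

    Crucially, when the pad looks wrong we assume a zero-length pad and
    compute the MAC anyway (the timing countermeasure from the spec), so
    padding failures and MAC failures take roughly the same work.
    """
    pad_len = padded[-1]
    pad_ok = padded[-(pad_len + 1):] == bytes([pad_len] * (pad_len + 1))
    body = padded[:-(pad_len + 1)] if pad_ok else padded[:-1]
    record, tag = body[:-MAC_LEN], body[-MAC_LEN:]
    expected = hmac.new(mac_key, header + record, hashlib.sha256).digest()
    mac_ok = hmac.compare_digest(tag, expected)
    return record if (pad_ok and mac_ok) else None  # one error path for both
```

As the spec admits, even this leaves a small timing channel, since the MAC is computed over slightly different lengths in the two cases.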

At this point, just take a deep breath and say 'all horses are spherical' three times fast, cause that's the only way we're going to get through this.

Accentuating the positive

Having been through the negatives, we're almost ready to say nice things about TLS. Before we do, let's just take a second to catch our breath and restate some of our basic assumptions:
  1. We're not using TLS 1.0 because it's broken.
  2. We're not using compression because it's broken.
  3. Our TLS implementation is perfect -- i.e., doesn't leak any information about why a decryption failed. This is probably bogus, yet we've decided to look the other way.
  4. Oh yeah: we're using a secure block cipher and MAC (in the PRP and PRF sense respectively).**
And now we can say nice things. In fact, thanks to a recent paper by Kenny Paterson, Thomas Ristenpart and Thomas Shrimpton, we can say a few surprisingly positive things about TLS record encryption.

What Paterson/Ristenpart/Shrimpton show is that TLS record encryption satisfies a notion they call 'length-hiding authenticated encryption', or LHAE. This new (and admittedly made up) notion not only guarantees the confidentiality and authenticity of records, but ensures that the attacker can't tell how long they are. The last point seems a bit extraneous, but it's important in the case of certain TLS libraries like GnuTLS, which actually add random amounts of padding to messages in order to disguise their length.

There's one caveat to this proof: it only works in cases where the MAC has an output size that's greater than or equal to the cipher's block size. This is, needless to say, a totally bizarre and fragile condition for the security of a major protocol to hang on. And while the condition does hold for all of the real TLS ciphersuites we use -- yay! -- this is more a happy accident than the result of careful design on anyone's part. It could easily have gone the other way.

So how does the proof work?

Good question. Obviously the best way to understand the proof is to read the paper itself. But I'd like to try to give an intuition.

First of all, we can save a lot of time by starting with the fact that CBC-mode encryption is already known to be IND-CPA secure if implemented with a secure block cipher (PRP). This result tells us only that CBC is secure against passive attackers who can request the encryption of chosen messages. (In fact, a properly-formed CBC mode ciphertext should be indistinguishable from a string of random bits.)

The problem with plain CBC-mode is that these security results don't hold in cases where the attacker can ask for the decryption of chosen ciphertexts.

This limitation is due to CBC's malleability -- specifically, the fact that an attacker can tamper with a ciphertext, then gain useful information by sending the result to be decrypted. To show that TLS record encryption is secure, what we really want to prove is that tampering gives no useful results. More concretely, we want to show that asking for the decryption of a tampered ciphertext will always produce an error.

We have a few things working in our favor. First, remember that the underlying TLS record has a MAC on it. If the MAC is (PRF) secure, then any ciphertext tampering that results in a change to this record data or its MAC will be immediately detected (and rejected) by the decryptor. This is good.

Unfortunately the TLS MAC doesn't cover the padding. To continue our argument, we need to show that no attacker can produce a legitimate ciphertext, and that includes tampering that messes with the padding section of the message. Here again things look intuitively good for TLS. During decryption, the decryptor checks the last byte of the padded message to see how much padding there is, then verifies that all padding bytes contain the same numeric value. Any tampering that affects this section of the plaintext should either:
  1. Produce inconsistencies in some padding bytes, resulting in a padding error, or
  2. Cause the wrong amount of padding to be stripped off, resulting in a MAC error.
Both of these error conditions will appear the same to the attacker, thanks to the requirement that TLS implementations hide the reason for a decryption failure. Once again, the attacker should learn nothing useful.

This all seems perfectly intuitive, and you can imagine the TLS developers making exactly this argument as they wrote up the spec. However there's one small exception to the rule above, one that can trip up TLS implementations that add an unnecessarily large amount of padding to the plaintext. (For example, GnuTLS.)

To give an example, let's say the unpadded record + MAC is 15 bytes. If we're using AES, then this plaintext can be padded with a single byte. Of course, if we're inclined to add extra padding, it could also be padded with seventeen bytes -- both are valid padding strings. The two possible paddings look like this:

                0x MM MM MM MM MM MM MM MM MM MM MM MM MM MM MM 00
                0x MM MM MM MM MM MM MM MM MM MM MM MM MM MM MM 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10

If the extra-long padding is used, the attacker could maul the longer ciphertext by truncating it -- throwing away the last 16-byte ciphertext block. Truncation is a viable way to maul CBC ciphertexts, since it has the same effect on the underlying plaintext. The CBC decryption of the truncated ciphertext would yield:

                0x MM MM MM MM MM MM MM MM MM MM MM MM MM MM MM 10

This is not very useful by itself, since the invalid padding would lead to a decryption error. However, the attacker can fix it -- this time using the fact that CBC plaintexts can be mauled by flipping bits in the last byte of the Initialization Vector. Such a tweak allows the attacker to convert that trailing 0x10 into an 0x00:

                0x MM MM MM MM MM MM MM MM MM MM MM MM MM MM MM 00

The result is now a valid ciphertext that will not produce an error on decryption.

So what has the attacker learned from this attack? In practice, not very much. Mostly what they learn is the true length of the original plaintext -- so much for GnuTLS's attempt to disguise it. But more fundamentally: this ability to 'maul' the ciphertext totally screws up the nice argument we were going for in our proof.
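We can model this truncate-and-tweak mauling at the plaintext level with a few lines of Python. (The real attack operates on the ciphertext and the IV, but CBC faithfully translates those changes into exactly the plaintext edits shown here; the padding check mirrors the one in the spec.)

```python
def padding_ok(pt):
    n = pt[-1]
    return len(pt) > n and pt[-(n + 1):] == bytes([n] * (n + 1))

record_and_mac = b"M" * 15                         # 15 bytes of record + MAC
long_padded = record_and_mac + bytes([0x10] * 17)  # extra padding: two AES blocks

# Truncating the ciphertext drops the second plaintext block entirely:
after_truncation = long_padded[:16]                # b"M"*15 + b"\x10"
assert not padding_ok(after_truncation)            # a lone 0x10 is invalid padding

# Flipping the last IV byte XORs straight into the last plaintext byte of
# the first block, turning 0x10 into 0x00 -- a valid zero-length pad:
after_iv_tweak = after_truncation[:-1] + bytes([after_truncation[-1] ^ 0x10])
assert padding_ok(after_iv_tweak)                  # accepted: true length leaks
```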

So the question is: can this attack be used against real TLS? And this is where the funny restriction about MAC size comes into play.

You see, if the TLS MAC is always at least as large as a cipher block, then all messages obey a strict rule: no padding will ever appear in the first block of the CBC plaintext.

Since the padding is now guaranteed to start in the second (or later) block of the CBC ciphertext, the attacker cannot 'tweak' it by modifying the IV (this attack only works against the first block of the plaintext). Instead, they would have to tamper with a ciphertext block. And in CBC mode, tampering with ciphertext blocks has consequences! Such a tweak will allow the attacker to change padding bytes, but as a side effect it will cause one entire block of the record or MAC to be randomized when decrypted. And what Paterson/Ristenpart/Shrimpton prove is that this 'damage' will inevitably lead to a MAC error.

This 'lucky break' means that an attacker can't successfully tamper with a CBC-mode TLS ciphertext. And that allows us to push our way to a true proof of the CBC-mode TLS ciphersuites. By contrast, if the MAC were only 80 bits (as it is in some IPSEC configurations), the proof would not be possible. So it goes.

Now I realize this has all been pretty wonky, and that's kind of the point! The moral to the story is that we shouldn't need this proof in the first place! What it illustrates is how fragile and messy the TLS design really is, and how (once again) it achieves security by luck and the skin of its teeth, rather than secure design.

What about stream ciphers?

The good news -- to some extent -- is that none of the above problems apply to stream ciphers, which don't attempt to hide the record length, and don't use padding in the first place. So the security of these modes is much 'easier' to argue.

Of course, this is only 'good news' if you believe that the stream ciphers included with TLS are good in the first place. The problem, of course, is that the major supported stream cipher is RC4. I will leave it to the reader to decide if that's a worthwhile tradeoff.

In conclusion

There's probably a lot more that can be said about TLS record encryption, but really... I think this post is probably more than anyone (outside of the academic community and a few TLS obsessives) has ever wanted to read on the subject. 

In the next posts I'm going to come back to the much more exciting Diffie-Hellman handshake and then try to address the $1 million and $10 million questions respectively. First: does anyone really implement TLS securely? And second: when can we replace this thing?

Notes:

* One thing I don't mention in this post is the TLS 1.0 'empty fragment' defense, which actually works against BEAST and has been deployed in OpenSSL for several years. The basic idea is to encrypt an empty record of length 0 before each record goes over the wire. In practice, this results in a full record structure with a MAC, and prevents attackers from exploiting the residue bug. Although nobody I know of has ever proven it secure, the proof is relatively simple and can be arrived at using standard techniques.

** The typical security definition for a MAC is SUF-CMA (strongly unforgeable under chosen message attack). This result uses the stronger -- but also reasonable -- assumption that the MAC is actually a PRF.  

Thursday, September 6, 2012

On the (provable) security of TLS: Part 1

If you sit a group of cryptographers down and ask them whether TLS is provably secure, you're liable to get a whole variety of answers. Some will just giggle. Others will give a long explanation that hinges on the definitions of 'prove' and 'secure'. What you will probably not get is a clear, straight answer.

In fairness, this is because there is no clear, straight answer. Unfortunately, like all the things you really need to know in life, it's complicated.

Still, just because something's complicated doesn't mean that we get to ignore it. And the security of TLS is something we really shouldn't ignore --  because TLS has quietly become the most important and trusted security protocol on the Internet. While plenty of security experts will point out the danger of improperly configured TLS setups, most tend to see (properly configured) TLS as a kind of gold standard -- and it's now used in all kinds of applications that its designers would probably never have envisioned.

And this is a problem, because (as Marsh points out) TLS (or rather, SSL) was originally designed to secure $50 credit card transactions, something it didn't always do very well. Yes, it's improved over the years. On the other hand, gradual improvement is no substitute for correct and provable security design.

All of which brings us to the subject of this -- and subsequent -- posts: even though TLS wasn't designed to be a provably-secure protocol, what can we say about it today? More specifically, can we prove that the current version of TLS is cryptographically secure? Or is there something fundamental that's holding it back?

The world's briefest overview of TLS

If you're brand new to TLS, this may not be the right post for you. Still, if you're looking for a brief refresher, here it is:
TLS is a protocol for establishing secure (Transport layer) communications between two parties, usually denoted as a Client and a Server.
The protocol consists of several phases. The first is a negotiation, in which the two peers agree on which cryptographic capabilities they mutually support -- and try to decide whether a connection is even possible. Next, the parties engage in a Key Establishment protocol that (if successful) authenticates one or both of the parties, typically using certificates, and allows the pair to establish a shared Master Secret. This is done using a handful of key agreement protocols, including various flavors of Diffie-Hellman key agreement and an RSA-based protocol.

Finally, the parties use this secret to derive encryption and/or authentication keys for secure communication. The rest of the protocol is very boring -- all data gets encrypted and sent over the wire in a (preferably) secure and authenticated form. This image briefly sums it up:

Overview of TLS (source)
So why can't we prove it secure?

TLS sounds very simple. However, it turns out that you run into a few serious problems when you try to assemble a security proof for the protocol. Probably the most serious holdup stems from the age of the cryptography used in TLS. To borrow a term from Eric Rescorla: TLS's crypto is downright prehistoric.

This can mostly be blamed on TLS's predecessor SSL, which was designed in the mid-90s -- a period better known as 'the dark ages of industrial crypto'. All sorts of bad stuff went down during this time, much of which we've (thankfully) forgotten about. But TLS is the exception! Thanks to years of backwards compatibility requirements, it's become a sort of time capsule for all sorts of embarrassing practices that should have died a long slow death in a moonlit graveyard.

For example:
  1. The vast majority of TLS handshakes use an RSA padding scheme known as PKCS#1v1.5. PKCS#1v1.5 is awesome -- if you're teaching a class on how to attack cryptographic protocols. In all other circumstances it sucks. The scheme was first broken all the way back in 1998. The attacks have gotten better since.
     
  2. The (AES)-CBC encryption used by SSL and TLS 1.0 is just plain broken, a fact that was recently exploited by the BEAST attack. TLS 1.1 and 1.2 fix the glaring bugs, but still use non-standard message authentication techniques that have themselves been attacked.
     
  3. Today -- in the year 2012 -- the 'recommended' patch for the above issue is to switch to RC4. Really, the less said about this the better.
Although each of these issues has been exploited in the past, I should be clear that current implementations do address them. Not by fixing the bugs, mind you -- but rather by applying band-aid patches to thwart the known attacks. This mostly works in practice, but it makes security proofs an exercise in frustration.

The second obstacle to proving things about TLS is the sheer complexity of the protocol. In fact, TLS is less a 'protocol' than it is a Frankenstein monster of options and sub-protocols, all of which provide a breeding ground for bugs and attacks. Unfortunately, the vast majority of academic security analyses just ignore all of the 'extra junk' in favor of addressing the core crypto. Which is good! But also too bad -- since these options are where practically every real attack on TLS has taken place.

Lastly, it's not clear quite which definition of security we should even use in our analysis. In part this is because the TLS specification doesn't exactly scream 'I want to be analyzed'. In part it's because definitions have been evolving over the years.

So you're saying TLS is unprovable?

No. I'm just lowering expectations.

The truth is there's a lot we do know about the security of TLS, some of it surprisingly positive. Several academic works have even given us formal 'proofs' of certain aspects of the protocol. The devil here lies mainly in the definitions of 'proof' and -- worryingly -- 'TLS'.

For those who don't live for the details, I'm going to start off by listing a few of the major known results here. In no particular order, these are:
  1. If properly implemented with a secure block cipher, the TLS 1.1/1.2 record encryption protocol meets a strong definition of (chosen ciphertext attack) security. Incidentally, the mechanism is also capable of hiding the length of the encrypted records. (Nobody even bothers to claim this for TLS 1.0 -- which everybody agrees is busted.)
     
  2. A shiny new result from CRYPTO 2012 tells us that (a stripped-down version of) the Diffie-Hellman handshake (DHE) is provably secure in the 'standard model' of computation. Moreover, combined with result (1), the authors prove that TLS provides a secure channel for exchanging messages. Yay!

    This result is important -- or would be, if more people actually used DHE. Unfortunately, at this point more people bike to work with their cat than use TLS-DHE.
     
  3. At least two excellent works have tackled the RSA-based handshake, which is the one most people actually use. Both works succeed in proving it secure, under one condition: you don't actually use it! To be more explicit: both proofs quietly replace the PKCS#1v1.5 padding (in actual use) with something better. If only the real world worked this way.
     
  4. All is not lost, however: back in 2003 Jonsson and Kaliski were able to prove security for the real RSA handshake without changing the protocol. Their proof is in the random oracle model (no biggie), but unfortunately it requires a new number-theoretic hardness assumption that nobody has talked about or revisited since. In practice this may be fine! Or it may not be. Nobody's been able to investigate it further, since every researcher who tried to look into it... wound up dead. (No, I'm just kidding. Although that would be cool.) 
     
  5. A handful of works have attempted to analyze the entirety of SSL or TLS using machine-assisted proof techniques. This is incredibly ambitious, and moreover it's probably the only real way to tackle the problem. Unfortunately, the proofs hugely simplify the underlying cryptography, and thus don't cover the full range of attacks. Moreover, only computers can read them.
While I'm sure there are some important results I'm missing here, this is probably enough to take us 95% of the way to the 'provable' results on TLS. What remains is the details.

And oh boy, are there details. More than I can fit in one post. So I'm going to take this one chunk at a time, and see how many it takes.

An aside: what are we trying to prove, anyway?

One thing I've been a bit fast-and-loose on is: what are we actually proving about these protocols in the first place, anyway? The question deserves at least a few words -- since it's received thousands in the academic literature.

One obvious definition of security can be summed up as 'an attacker on the wire can't intercept or modify data', or otherwise learn the key being established. But simply keeping the data secret isn't enough; there's also a matter of freshness. Consider the following attack:
  1. I record communications between a legitimate client and her bank's server, in which the client requests $5,000 to be transferred from her account to mine.
  2. Having done this, I replay the client's messages to the server a hundred times. If all goes well, I'm a whole lot richer. 
Now this is just one example, but it shows that the protocol does require a bit of thinking. Taking this a step further, we might also want to ensure that the master secret is random, meaning even if (one of) the Client or Server are dishonest, they can't force the key to a chosen value. And beyond that, what we really care about is that the protocol data exchange is secure.
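In real TLS this particular replay fails because the record MAC covers an implicit 64-bit sequence number, so the same bytes can't be accepted twice. A toy sketch (HMAC-SHA256 standing in for the MAC; the names are mine):

```python
import hmac
import hashlib

def record_mac(key, seq, data):
    # The sequence number is MACed along with the record data, so identical
    # bytes produce a different tag at each position in the stream.
    return hmac.new(key, seq.to_bytes(8, "big") + data, hashlib.sha256).digest()

key = b"k" * 32
transfer = b"transfer $5000 to account 31337"
tag = record_mac(key, 7, transfer)  # legitimately sent as record #7

# The receiver verifies against its own counter; a replayed copy arrives
# at a later sequence number, so the old tag no longer matches:
assert hmac.compare_digest(tag, record_mac(key, 7, transfer))
assert not hmac.compare_digest(tag, record_mac(key, 8, transfer))
```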

Various works take different approaches to defining security for TLS. This is not surprising, given that the 'security analysis' provided in the TLS spec itself is so incomplete that we don't quite know what the protocol was intended to do in the first place. We'll come back to these definitional issues in a bit.

The RSA handshake

TLS supports a number of different ciphersuites and handshake protocols. As I said above, the RSA-based handshake is the most popular one -- by an almost absurd margin. Sure, a few sites (notably Google) have recently begun supporting DHE and ECDH across the board. For everyone else there's RSA.

So RSA seems as good a place as any to start. Even better, if you ignore client authentication and strip the handshake down to its core, it's a pretty simple protocol:

    Client                                                  Server
      | ----------- ClientHello (ClientRandom) -----------> |
      | <---- ServerHello (ServerRandom) + Certificate ---- |
      | ----- RSA-Encrypt(server public key, 48-byte PMS) > |
      |    (both sides derive the Master Secret from the    |
      |     PMS plus ClientRandom and ServerRandom)         |
      | <------------ Finished (check) messages ----------> |

Clearly the 'engine' of this protocol is the third step, where the Client picks a random 48-byte pre-master secret (PMS) and encrypts it under the server's key. Since the Master Secret is derived from this (plus the Client and Server Randoms), an attacker who can't 'break' this encryption shouldn't be able to derive the Master Secret and thus produce the correct check messages at the bottom.
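For the curious, the derivation from PMS to Master Secret is just a PRF computation. Here's a sketch in the shape of the TLS 1.2 PRF (an HMAC-SHA256 expansion, per my reading of RFC 5246 -- treat the details as illustrative rather than normative):

```python
import hmac
import hashlib

def p_sha256(secret, seed, length):
    """P_SHA256 expansion: A(i) = HMAC(secret, A(i-1)), A(0) = seed;
    output blocks are HMAC(secret, A(i) || seed)."""
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def master_secret(pms, client_random, server_random):
    """48-byte Master Secret from the PMS and both handshake Randoms."""
    return p_sha256(pms, b"master secret" + client_random + server_random, 48)
```

Without the PMS, an attacker sees only the two public Randoms -- which (assuming the PRF is secure) tells them nothing about the Master Secret.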

So now for the $10,000,000 question: can we prove that the RSA encryption is secure? And the simple answer is no, we can't. This is because the encryption scheme used by TLS is RSA-PKCS#1v1.5, which is not -- by itself -- provably secure.

Indeed, it's worse than that. Not only is PKCS#1v1.5 not provably secure, but it's actually provably insecure. All the way back in 1998 Daniel Bleichenbacher demonstrated that PKCS#1v1.5 ciphertexts can be gradually decrypted, by repeatedly modifying them and sending the modified versions to be decrypted. If the decryptor (the server in this case) reveals whether the ciphertext is correctly formatted, the attacker can actually recover the encrypted PMS.

              0x 00 02 { at least 8 non-zero random bytes } 00 { 48-byte PMS, beginning with the two-byte client version }
RSA-PKCS #1v1.5 encryption padding for TLS

Modern SSL/TLS servers are wise to this attack, and they deal with it in a simple way. Following decryption of the RSA ciphertext they check the padding. If it does not have the form above, they select a random PMS and go forward with the calculation of the Master Secret using this bogus replacement value. In principle, this means that the server calculates a basically random key -- and the protocol doesn't fail until the very end.
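Here's a sketch of that countermeasure in Python, starting from the already-decrypted RSA block (the parsing details and function names are mine, simplified from what real servers do):

```python
import os

PMS_LEN = 48

def recover_pms(decrypted):
    """Handle a freshly RSA-decrypted block per the Bleichenbacher defense.

    If the PKCS#1v1.5 structure is wrong, DON'T report an error: quietly
    substitute a random PMS and let the handshake fail much later, giving
    the attacker no padding oracle to query.
    """
    ok = (len(decrypted) >= 11 + PMS_LEN
          and decrypted[0] == 0x00 and decrypted[1] == 0x02
          and 0x00 not in decrypted[2:-(PMS_LEN + 1)]  # nonzero padding bytes
          and decrypted[-(PMS_LEN + 1)] == 0x00)       # separator before PMS
    return bytes(decrypted[-PMS_LEN:]) if ok else os.urandom(PMS_LEN)
```

A real implementation also has to make this check constant-time, since a timing difference between the two branches would reopen the oracle.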

In practice this seems to work, but proving it is tricky! For one thing, it's not enough to treat the RSA encryption as a black box -- all of these extra steps and the subsequent calculation of the Master Secret are now intimately bound up with the security of the protocol, in a not-necessarily-straightforward way.

There are basically two ways to deal with this. Approach #1, taken by Morrissey, Smart and Warinschi and Gajek et al., follows what I call the 'wishful thinking' paradigm. Both show that if you modify the encryption scheme used in TLS -- for example, by eliminating the 'random' encryption padding, or swapping in a CCA-secure scheme like RSA-OAEP -- the handshake is secure under a reasonable definition. This is very interesting from an academic point of view; it just doesn't tell us much about TLS.

Fortunately there's also the 'realist' approach. This is embodied by an older CRYPTO paper by Jonsson and Kaliski. These authors considered the entirety of the TLS handshake, and showed that (1) if you model the PRF (or part of it) as a random oracle, and (2) assume some very non-standard number-theoretic assumptions, and (more importantly) (3) the TLS implementation is perfect, then you can actually prove that the RSA handshake is secure.

This is a big deal!

Unfortunately it has a few problems. First, it's highly inelegant, and basically depends on all the parts working in tandem. If any part fails to work properly -- if, for example, the server leaks any information that could indicate whether the encryption padding is valid or not -- then the entire thing crashes down. That the combined protocol works is, in fact, more an accident of fate than the result of any real security engineering.

Secondly, the reduction is extremely 'loose'. This means that a literal interpretation would imply that we should be using ginormous RSA keys, much bigger than we do today. We obviously don't pay attention to this result, and we're probably right not to. Finally, it requires that we adopt an odd number-theoretic assumption involving a 'partial-RSA decision oracle', something that, quite frankly, feels kind of funny. While we're all guilty of making up an assumption from time to time, I've never seen this assumption either before or since, and it's significant that the scheme has no straightforward reduction to the RSA problem (even in the random oracle model), something we would get if we used RSA-OAEP.

If your response is: who cares -- well, you may be right. But this is where we stand.

So where are we?

Good question. We've learned quite a lot actually. When I started this post my assumption was that TLS was going to be a giant unprovable mess. Now that I'm halfway through this summary, my conclusion is that TLS is, well, still mostly a giant mess -- but one that smacks of potential.

Of course so far I've only covered a small amount of ground. We still have yet to cover the big results, which deal with the Diffie-Hellman protocol, the actual encryption of data (yes, important!) and all the other crud that's in the real protocol.

But those results will have to wait until the next post: I've hit my limit for today.

Click here for Part 2.

Sunday, September 2, 2012

Hey Amazon: Banning Security Researchers Isn't Making Us Safer

Readers of this blog may recall that I'm a big fan of the RSA-key 'cracking' research of Nadia Heninger, Zakir Durumeric, Eric Wustrow and Alex Halderman. To briefly sum it up: these researchers scanned the entire Internet, discovering nearly 30,000 weak RSA keys installed on real devices -- which they then factored.
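The core observation behind the attack is simple: if two devices with bad randomness generate RSA moduli that happen to share a prime factor, a single GCD computation factors both keys. Here's a toy sketch of that idea using deliberately tiny, made-up primes (the real attack compared millions of 1024/2048-bit moduli at once using an efficient batch-GCD algorithm, not naive pairwise GCDs):

```python
import math

# Hypothetical toy values: two devices with bad entropy both picked
# the prime p, then picked distinct second primes q1 and q2.
p = 10007                 # shared prime (the result of repeated randomness)
q1, q2 = 10009, 10037     # distinct second primes

n1 = p * q1               # device 1's public modulus
n2 = p * q2               # device 2's public modulus

# If two moduli share a prime, gcd reveals it instantly --
# no general-purpose factoring required.
shared = math.gcd(n1, n2)
assert shared == p

# With the shared prime in hand, both keys are fully factored.
print(n1 // shared, n2 // shared)   # recovers q1 and q2
```

Against properly generated keys this attack learns nothing, since two random 512-bit primes essentially never collide; it only works because so many embedded devices seeded their RNGs from the same impoverished entropy sources.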

In the fast-paced world of security, this is already yesterday's news. The problems have been responsibly disclosed and repaired, and the manufacturers have promised not to make, well, this particular set of mistakes again. The research even received the Best Paper award at Usenix Security.** So you might ask why I'm writing about it now. And the answer is: I'm not.

What I'm writing about today is not the research itself, but rather: the blowback from the research. You see, Heninger et al. were able to conduct their work mostly thanks to resources rented from Amazon's Elastic Compute Cloud (EC2). And in response, Amazon has booted them off the service.

This is a real drag, and not just for the researchers in question.

Cloud services like EC2 are a huge resource for ethical security researchers. They help us to learn things about the Internet on a scale that we could never accomplish with the limited resources in most university labs. Cloud services also give us access to software and hardware that would be nigh on impossible to justify to a grant committee -- stuff like GPU cluster instances, which are invaluable to cryptographers who want to run specialized cracking tasks.

But more importantly: the rise of cloud computing has created a whole new class of security threats, things we never had to worry about before, like side-channel and covert channel attacks between co-located VMs. Securing the cloud itself requires real-world analysis, and this means that researchers have to be trusted to do some careful, non-malicious work on actual platforms like EC2. Unfortunately this is just the kind of research that the Heninger et al. ban could serve to discourage.

Now, I don't pretend that I know all the details of this particular case. I haven't spoken to the researchers about it, and although the paper makes their scan seem pretty benign, it's always possible that it was more aggressive than it should have been.*

Moreover, I can't challenge Amazon's right to execute this ban. In fact their Acceptable Use Policy explicitly proscribes security scans under a section titled 'No Security Violations':

  • Unauthorized Access. Accessing or using any System without permission, including attempting to probe, scan, or test the vulnerability of a System or to breach any security or authentication measures used by a System.

The question here is not whether Amazon can do this. It's whether their -- or anyone else's -- interests are being served by actually going through with such a ban. The tangible result of this one particular research effort is that thousands of vulnerable systems became secure. The potential result of Amazon's ban is that millions of systems may remain insecure.

Am I saying that Amazon should let researchers run amok on their network? Absolutely not. But there has to be a balance between unfettered access and an outright ban. I think we'll all be better off if Amazon can clearly articulate where that balance is, and provide us with a way to find it.

Update (9/3): Kenn White points me to this nice analysis of the public EC2 image-set. The authors mention that they worked closely with Amazon Security. So maybe this is a starting point.

Notes:

* Admittedly, this part is a little bit ambiguous in their paper. NMAP host discovery can be anywhere between a gentle poke and an 'active scrub', depending on the options you've set.

** In case you haven't seen it, you may also want to check out Nadia's (NSFW?) Usenix/CRYPTO rump session talk.