Friday, January 25, 2013

In defense of Provable Security

It's been a long time with no blogging, mostly thanks to travel and deadlines. In fact I'm just coming back from a workshop in Tenerife, where I learned a lot. Some of which I can't write about yet, but am really looking forward to sharing with you soon.

During the workshop I had some time to talk to Dan Bernstein (djb), and to hear his views on the relevance of provable security. This is a nice coincidence, since I notice that Dan's slides have been making the rounds on Twitter -- to the general approval of some who, I suspect, agree with Dan because they think that security proofs are hard.

The problem here is that this isn't what Dan's saying. Part of the issue is that his presentation is short, so it's easy to misinterpret his position as a call to just start designing cryptosystems the way we design software. That's not right, or if it is: get ready for a lot of broken crypto.

This post is my attempt to explain what Dan's saying, and then (hopefully) convince you he's not recommending the crazy things above.

There's no such thing as a "security proof"

Dan's first point is that we're using the wrong nomenclature. The term 'security proof' is misleading in that it gives you the impression that a scheme is, well... provably secure. There aren't many schemes that can make this claim (aside from the One-Time Pad). Most security proofs don't say this, and that can lead to misunderstandings.

The proofs that we see in day-to-day life are more accurately referred to as security reductions. These take something (like a cryptographic scheme) and reduce its security to the hardness of some other problem -- typically a mathematical problem, but sometimes even another cryptosystem.

A classic example of this is something like the RSA-PSS signature scheme, which is unforgeable if the RSA problem is hard, or the Chaum-van Heijst-Pfitzmann hash, which reduces to the hardness of the Discrete Logarithm problem. But there are more complex examples, like block cipher modes of operation, whose security can often be reduced to the pseudorandom permutation (PRP) security of the underlying block cipher.
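
To give a flavor of what such a reduction looks like, here's a toy Python sketch of the Chaum-van Heijst-Pfitzmann hash. The parameters below are tiny, made-up values chosen purely for illustration (never use them for anything real), but the structure of the argument is genuine: any collision in the hash immediately yields the discrete logarithm of h with respect to g, so a collision-finder is also a Discrete Log solver.

```python
# Toy sketch of the Chaum-van Heijst-Pfitzmann hash and its reduction.
# Parameters are deliberately tiny and insecure; p = 2q + 1 is a safe
# prime, and g, h are squares, so they generate the order-q subgroup.
p = 1019
q = (p - 1) // 2          # 509, prime
g = pow(3, 2, p)
h = pow(7, 2, p)          # h = g^x for some x nobody is supposed to know

def chp_hash(m1, m2):
    """Hash a two-block message (m1, m2) with 0 <= m1, m2 < q."""
    return (pow(g, m1, p) * pow(h, m2, p)) % p

def dlog_from_collision(m1, m2, m1p, m2p):
    """Given a genuine collision chp_hash(m1, m2) == chp_hash(m1p, m2p)
    with (m1, m2) != (m1p, m2p), recover x = log_g(h) mod q.
    (A genuine collision always has m2 != m2p, so the inverse exists.)"""
    num = (m1 - m1p) % q
    den = (m2p - m2) % q
    return (num * pow(den, -1, q)) % q
```

That's the entire content of the 'proof': if you believe Discrete Log is hard in that group, you're forced to believe the hash is collision-resistant -- and nothing more.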

So the point here is that these proofs don't actually prove security -- since the RSA problem or Discrete Log or block cipher could still be broken. What they do is allow us to generalize: instead of analyzing many different schemes, we can focus our attention on one or a small number of hard problems. In other words, it's a different -- and probably much better -- way to allocate our (limited) cryptanalytic effort.

But we don't study those problems well, and when we do, we break them

Dan argues that this approach is superficially appealing, but concretely it can be harmful. Take the Chaum et al. hash function listed above. Nobody should ever use this thing: it's disastrously slow and there's no solid evidence that it's truly more secure than something like SHA-3 or even SHA-3's lamest competitors.

And here (unfortunately) he's got some evidence on his side: we've been amazingly unsuccessful at cryptanalyzing complex new cipher/hash primitives like AES, BLAKE and Keccak, despite the fact that these functions don't have [real] security proofs. Where we make cryptanalytic progress, it's almost always on first-round competition proposals, or on truly ancient functions like MD5. Moreover, if you take a look at 'provably-secure' number theoretic systems from the same era, you'll find that they're equally broken -- thanks to bad assumptions about key and parameter sizes.

We've also gotten pretty good at chipping away at classic problems like the Discrete Logarithm. The charitable interpretation is that this is a feature, not a bug -- we're focusing cryptanalytic effort on those problems, and we're making progress, whereas nobody's giving enough attention to all these new ciphers. The less charitable interpretation is that the Discrete Logarithm problem is a bad problem to begin with. Maybe we're safer with unprovable schemes that we can't break than with provable schemes that seem to be slowly failing.

You need a cryptanalyst...

This is by far the fuzziest part (for me) of what Dan's saying. Dan argues that security proofs are a useful tool, but they're no substitute for human cryptanalysis. None of which I would argue with at all. But the question is: cryptanalysis of what?

The whole point of a security reduction is to reduce the amount of cryptanalysis we have to do. Instead of separately analyzing a signature scheme and an encryption scheme, we can design both so that they reduce to the RSA problem, and then cryptanalyze that one problem. Instead of analyzing a hundred different authenticated cipher modes, we can simply analyze AES -- and know that OCB and GCM and CBC and CTR will all be secure (for appropriate definitions of 'secure').
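
To make that layering concrete, here's a minimal CTR-mode sketch in Python (using the third-party 'cryptography' package, with an 8-byte nonce / 8-byte counter split -- both choices are mine, purely for illustration). The mode only ever touches AES as an opaque 16-byte permutation, which is exactly why its proof reduces to AES's PRP security instead of requiring fresh cryptanalysis of the mode itself. This is a sketch, not a vetted implementation.

```python
# Minimal CTR-mode sketch: the mode treats AES as an opaque block
# permutation, so its security argument reduces to the PRP security of
# that permutation. Illustration only -- not a vetted implementation.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_block(key, block16):
    """One application of the raw AES permutation to a 16-byte block
    (single-block ECB is just the raw block cipher call)."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block16) + enc.finalize()

def ctr_crypt(key, nonce8, data):
    """XOR data with the keystream AES_k(nonce || counter).
    The same function encrypts and decrypts."""
    out = bytearray()
    for i in range(0, len(data), 16):
        counter_block = nonce8 + (i // 16).to_bytes(8, "big")
        keystream = aes_block(key, counter_block)
        out += bytes(a ^ b for a, b in zip(data[i:i + 16], keystream))
    return bytes(out)

key, nonce = os.urandom(16), os.urandom(8)
ct = ctr_crypt(key, nonce, b"attack at dawn")
assert ctr_crypt(key, nonce, ct) == b"attack at dawn"
```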

This is good, and it's why we should be using security proofs. Not to mislead people, but to help us better allocate our very scarce resources -- of smart people who can do this work (and haven't sold out to the NSA).

...because people make mistakes

One last point: errors in security proofs are pretty common, but this isn't quite what Dan is getting at. We both agree that this problem can be fixed, hopefully with the help of computer-aided proof techniques. Rather, he's concerned that security proofs only prove that something is secure within a given model. There are many examples of provably-secure schemes that admit attacks because those attacks were completely outside of that threat model.

As an example, Dan points to some older EC key agreement protocols that did not explicitly include group membership tests in their description. Briefly, these schemes are secure if the attacker submits valid elements of an elliptic curve group. But of course, a real-life attacker might not. The result can be disastrously insecure.

So where's the problem here? Technically the proof is correct -- as long as the attacker submits group elements, everything's fine. What the protocol doesn't model is the fact that an attacker can cheat -- it just assumes honesty. Or as Dan puts it: 'the attack can't even be modeled in the language of the proof'.
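
For concreteness, the missing check is roughly the following (sketched in Python for NIST P-256, chosen here just as an example curve): before a received point goes anywhere near a secret key, verify that it actually lies on the intended curve -- and, for curves with a cofactor greater than one, in the right subgroup. In practice you'd rely on a vetted library; the sketch just shows how little the proof's 'honest attacker' model actually demands.

```python
# Sketch of the group-membership test the protocols in question omitted.
# Curve parameters are NIST P-256; (x, y) is the received point, already
# decoded to integers. Illustration only -- use a vetted library in practice.
P = 2**256 - 2**224 + 2**192 + 2**96 - 1
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def is_valid_public_point(x, y):
    """Reject out-of-range coordinates and any (x, y) that does not
    satisfy the curve equation y^2 = x^3 + Ax + B (mod P). P-256 has
    cofactor 1, so for this curve the equation check (plus rejecting the
    point at infinity) is the whole membership test."""
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0
```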

What Dan's not saying

The one thing you should not take away from this discussion is the idea that security proofs have no value. What Dan is saying is that security proofs are one element of the design process, but not 100% of it. And I can live with that.

The risk is that some will see Dan's talk as a justification for using goofy, unprovable protocols like PKCS#1 v1.5 signatures or the SRP password protocol. It's not. We have better protocols that are just as well analyzed, but actually have a justification behind them.

Put it this way: if you have a choice between driving on a suspension bridge that was designed using scientific engineering techniques, and one that simply hasn't fallen down yet, which one are you going to take? Me, I'll take the scientific techniques. But I admit that scientifically-designed bridges sometimes do fall down.

In conclusion

While I've done my best to sum up Dan's position, what I've written above is probably still a bit inaccurate. In fact, it's entirely possible that I've just constructed a 'strawman djb' to argue with here. If so, please don't blame me -- it's a whole lot easier to argue with a straw djb than the real thing.

Friday, January 4, 2013

Surveillance works! Let's have more of it.

If you care about these things, you might have heard that Google recently detected and revoked a bogus Google certificate. Good work, and huge kudos to the folks at Google who lost their holidays to this nonsense.

From what I've read, this particular incident got started in late 2011 when Turkish Certificate Authority TURKTRUST accidentally handed out intermediate CA certificates to two of their customers. Intermediate CA certs are like normal SSL certs, with one tiny difference: they can be used to generate other SSL certificates. Oops.

One of the recipients noticed the error and reported it to the CA. The other customer took a different approach and installed it into an intercepting Checkpoint firewall to sniff SSL-secured connections. Because, you know, why not.

So this is bad but not exactly surprising -- in fact, it's all stuff we've seen before. Our CA system has far too many trust points, and it requires too many people to act collectively when someone proves untrustworthy. Unless we do something, we're going to see lots more of this.

What's interesting about this case -- and what leads to the title above -- is not so much what went wrong, but rather, what went right. You see, this bogus certificate was detected, and likely not because some good Samaritan reported the violation. Rather, it was (probably) detected by Google's unwavering surveillance.

The surveillance in question is conducted by the Chrome browser, which actively looks out for attacks and reports them. Here's the money quote from their privacy policy:
"If you attempt to connect to a Google website using a secure
connection, and the browser blocks the connection due to information
that indicates you are being actively attacked by someone on the
network (a “man in the middle attack”), Chrome may send information
about that connection to Google for the purpose of helping to
determine the extent of the attack and how the attack functions."
The specific technical mechanism in Chrome is simple: Chrome ships with a series of 'pins' in its source code (thanks Moxie, thanks Tom). These tell Chrome what valid Google certificates should look like, and help it detect an obviously bogus certificate. When Chrome sees a bogus cert attached to a purported Google site, it doesn't just stop the connection; it rings an alarm at Google HQ.
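
Conceptually the check looks something like the sketch below. To be clear, this is not Chrome's actual code -- the function names, the pin list, and the reporting hook are all made up for illustration. The idea is just to hash each public key in the chain the server presents and require that at least one matches a built-in pin for the site; if none do, refuse the connection and phone home a report.

```python
# Conceptual sketch of key pinning (NOT Chrome's actual code). The
# browser ships with SHA-256 hashes of the public keys it expects to see
# in any valid chain for a pinned site; a chain containing none of them
# is treated as bogus and reported.
import hashlib

# Hypothetical built-in pin list: hostname -> set of expected SPKI hashes.
BUILT_IN_PINS = {
    "google.com": {
        bytes.fromhex("aa" * 32),   # placeholder pin values
        bytes.fromhex("bb" * 32),
    },
}

def spki_hash(spki_der):
    """Hash of a certificate's DER-encoded SubjectPublicKeyInfo."""
    return hashlib.sha256(spki_der).digest()

def chain_matches_pins(hostname, chain_spkis):
    """True if any public key in the presented chain matches a pin."""
    pins = BUILT_IN_PINS.get(hostname)
    if pins is None:
        return True                    # host isn't pinned; nothing to enforce
    return any(spki_hash(spki) in pins for spki in chain_spkis)

def check_connection(hostname, chain_spkis, report):
    if not chain_matches_pins(hostname, chain_spkis):
        report(hostname, chain_spkis)  # 'ring the alarm' at headquarters
        return False                   # and refuse the connection
    return True
```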

And that alarm is a fantastic thing, because in this case it may have led to discovery and revocation before this certificate could be stolen and used for something worse.

Now imagine the same thing happening in many other browsers, and not just for Google sites. Well, you don't have to imagine. This is exactly the approach taken by plugins like Perspectives and Convergence, which monitor users' SSL connections in a distributed fashion to detect bogus certificates. These plugins are great, but they're not deployed widely enough. The technique should be standard in all browsers, perhaps with some kind of opt-in. (I certainly would opt in.)

The simple fact is that our CA system is broken and this is what we've got. Congratulations to Google for taking a major first step in protecting its users. Now let's take some more.