Bram Cohen Corrected

Update 11/24/11: Bram has responded to the criticisms in this post. See the comments section below for his thoughts and my response. Also, see this new post for my more detailed response.

I was innocently Googling the title of my blog (yes, I do this, it’s sad) when I came across this May 2011 post by Bram Cohen entitled “Practical Cryptography Corrected”.

Ostensibly the post is a criticism of Bruce Schneier’s book Practical Cryptography. Instead, it’s proof that inventing the world’s most useful peer-to-peer protocol does not make you a cryptographer.  More worrisome than that, it’s an example of someone with a big megaphone using it to give really bad advice.

Bram’s post has ten points, of which four are (somewhat) problematic.  Let’s take them one at a time.

Bram’s Practical Advice #1: For an RSA exponent, you should always use 2. Technically that’s Rabin-Williams, and requires slightly different implementation, but that actually works in its favor. Rabin-Williams has a reduction to factoring, RSA does not.

No, no, no.  A thousand times no.

If you’re going to encrypt with RSA, you should encrypt with RSA.  That means using a standard, well-tested RSA implementation with the traditional RSA public exponent (e = 0x10001).*
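Concretely, the boring, standard path looks something like this minimal sketch. The library choice (Python’s `cryptography` package) and parameters are my illustration, not anything from Bram’s post:

```python
# Standard RSA done the boring way: a well-tested library,
# e = 65537 (0x10001), and OAEP padding.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"a short symmetric key", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"a short symmetric key"
```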

Unless you’re an expert, do not go switching your exponents around.  Rabin-Williams (public exponent 2) looks like RSA, but it’s a very different cryptosystem — both in how decryption works and in what goes wrong if you screw up your implementation.

For one thing, when you implement Rabin-Williams incorrectly, it becomes vulnerable to a chosen ciphertext attack that completely recovers the scheme’s secret key.  RSA does not have this problem.  This issue can be managed through skillful use of padding**, but your typical from-scratch implementer is not skillful.  Even bad implementations of good padding can lead to problems.  Why chance it?
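To see why this is so dangerous, here’s a toy sketch of the attack (tiny primes of my choosing, purely illustrative): a decryption oracle for unpadded Rabin returns square roots mod N, and the root it returns differs from ±x half the time, which immediately factors N.

```python
# Toy demo of the chosen-ciphertext attack on *unpadded* Rabin:
# an oracle returning square roots mod N leaks the secret factors of N.
import math, random

p, q = 10007, 10039                          # secret primes, both = 3 (mod 4)
N = p * q                                    # public modulus (toy-sized!)

def decryption_oracle(c):
    """Returns one square root of c mod N, using the secret factors."""
    rp = pow(c, (p + 1) // 4, p)             # sqrt mod p (valid since p = 3 mod 4)
    rq = pow(c, (q + 1) // 4, q)             # sqrt mod q
    return (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % N  # CRT (Python 3.8+)

while True:
    x = random.randrange(2, N)               # attacker picks a random "plaintext"
    y = decryption_oracle(pow(x, 2, N))      # asks for a root of x^2 mod N
    if y not in (x, N - x):                  # the oracle's root differs half the time
        print("factor of N:", math.gcd(x - y, N))
        break
```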

Which brings me to my first law: The probability that your RSA ciphertext will be broken via some number-theoretical attack is vanishingly small.  The probability that it will be broken because you screwed up your implementation, on the other hand, is high.  All current approaches to solving the RSA problem work by attacking the factoring problem anyway, so it’s unclear what advantage we’re buying ourselves here.

Bram’s Practical Advice #2: You should always do encryption as a layer outside of authentication.

Cryptographers have been arguing the order of encryption and authentication for nearly thirty years.  Should you encrypt first, then apply authentication (a MAC)? Or should you authenticate first, then encrypt?

By 2000 we had basically put this question to bed.  The recommended best practice? Always encrypt first (with a strong cipher and mode of operation), then apply a MAC to the resulting ciphertext.  This isn’t just easy to remember, it’s provably secure against adaptive chosen ciphertext attacks.  That means you’re protected against very real issues like padding oracle attacks.  Bram’s technique doesn’t give you this, and that can spell disaster.
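For reference, here’s a minimal sketch of the encrypt-then-MAC pattern. It’s my own illustration (AES-CTR plus HMAC-SHA256 with two independent keys), not code from either book:

```python
# Encrypt first, then MAC the nonce and ciphertext; on receipt,
# verify the MAC before doing anything else with the ciphertext.
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_then_mac(enc_key, mac_key, plaintext):
    nonce = os.urandom(16)                             # fresh CTR nonce per message
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def verify_then_decrypt(enc_key, mac_key, blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):         # constant-time comparison
        raise ValueError("authentication failed")
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()
```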

Bram’s Practical Advice #3: For an encryption mode, you should always use CTR, and always use a nonce of zero, and never reuse keys.

Bram’s Practical Advice #4: You should never use the same key for both encryption and authentication. If you need both encryption and authentication, you should use two keys. One for encryption, and one for authentication.

This one’s a twofer, and technically none of it is bad advice.  I’m a fan of CTR mode, and zero IVs are just fine in CTR mode (but never in CBC!), so long as you truly never reuse a key.

I have only two points to make: first, why not use a random IV?  What’s going to happen to you if you hard-code a zero IV, and then the next (idiot) developer switches your mode of operation to CBC?  This shouldn’t happen, but it might.

But more generally, my advice is not to use CTR at all.  Use a provably-secure authenticated mode of operation like CCM, OCB or (best) GCM, in the way recommended by standards.  These modes handle both encryption and authentication for you, and better yet — they all use a single key (by design). You’re much better off using these standards than hand-rolling your own encrypt/MAC approach (especially if, per Bram’s Advice #2, you plan on doing it wrong).
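To illustrate (my example, not a prescription from anyone above), here’s the whole job done with AES-GCM through Python’s `cryptography` package. Note the single key, and that a tampered ciphertext fails loudly:

```python
# Authenticated encryption with AES-GCM: one key covers both
# confidentiality and integrity, by design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                   # 96-bit nonce, unique per message
aead = AESGCM(key)
ct = aead.encrypt(nonce, b"attack at dawn", b"header")  # returns ciphertext || tag
pt = aead.decrypt(nonce, ct, b"header")  # raises InvalidTag on any modification
```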

In Summary

I have a lot of respect for Bram, and if I come off a bit critical in the post above, it isn’t because I dislike the guy.  In fact, many of his remaining points are good.

But the only thing worse than bad advice is bad advice from a voice of authority.  And Bram has a lot of authority, which could lead people to trust him over some goofy, non-famous cryptographer (or even Bruce Schneier, for some crazy reason).

Fixing people’s cryptographic mistakes has given me a good living so far, but that doesn’t mean I like it.  So please, please don’t listen to Bram.  Do things right, follow standards, and most importantly: if you don’t know exactly what you’re doing, don’t do it at all.

Notes:

* I initially wrote “e = 100001” which is totally bogus.  Goes to show you shouldn’t trust a blog post.  Thanks to GSBabil.

** Use of padding, incidentally, completely breaks the reduction to factoring unless you conduct the proof in the Random Oracle Model.  And as I discuss here, there are some serious problems with that.

13 thoughts on “Bram Cohen Corrected”

  1. Hi Matthew,
    I think you meant to say – a more commonly used RSA public exponent (e) is 0x10001 (10001 with 3 zeros, 65537 in Decimal).

    Also, I am not 100% sure that RSA is vulnerable when 'e' is chosen to be a small value. As long as padding is performed correctly (or as you say, skillfully), I think any RSA-compatible 'e' will provide equally strong encryption.

  2. Whew, thanks GSBabil! I corrected the post.

    The major difference is that in any flavor of RSA, the exponent “e” has to be co-prime to the Euler totient (p-1)(q-1). “e = 2” is not co-prime to (p-1)(q-1) and so doesn't meet that requirement — so you need an alternative decryption algorithm. That, and the reduction to factoring, is why Rabin (Rabin-Williams) is a fundamentally different encryption scheme from RSA.
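    A quick sketch of that requirement, with toy primes I picked for illustration:

    ```python
    # e must be invertible mod (p-1)(q-1); e = 2 never is, because
    # p-1 and q-1 are both even for odd primes p and q.
    from math import gcd
    p, q = 10007, 10039            # toy primes, illustration only
    phi = (p - 1) * (q - 1)
    print(gcd(2, phi))             # 2: no inverse, so e = 2 isn't an RSA exponent
    print(gcd(65537, phi))         # 1: the standard e works
    d = pow(65537, -1, phi)        # the matching RSA private exponent (Python 3.8+)
    ```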

    But I agree that if you implement your padding correctly, i.e., re-construct, check, and don't leak any useful information through padding-check errors, you're fine using Rabin-OAEP. The question is: are you sure you want to do this? What advantage are you getting for trading in a carefully-implemented cryptosystem (RSA) for Rabin? In my view, the theoretical benefits are insignificant compared to the practical risks.

  3. Matthew, first of all, you're being extremely dismissive of any practical concerns here. Practical crypto deployments bottleneck on the server doing modular exponentiations, and if you use the standard exponent over plain old 2, then you need *five times* as much computing power, which frequently translates into five times as many real machines consuming five times the power in five times as large of a colo space. To skip out on that advantage you better have a very good reason, and all you give is rank speculation that maybe you're more likely to have a secure implementation if you use a 'standard' exponent. If you don't have a secure implementation, you're screwed anyway, and the techniques necessary for a good implementation – do an all-or-nothing transform, don't check that high byte on decode – are the same for rabin-williams and RSA. You might notice that I made a big point of calling out that implementing RSA is very error-prone and anyone considering doing it needs to learn about it, a subject which Practical Cryptography completely irresponsibly doesn't even touch on.

    Doing encrypt-then-authenticate looks a lot like authenticate-then-encrypt except that the authentication is plaintext, which obviously opens it to attacks which involve guessing the plaintext and verifying it because the authentication is right there, and you can extend those to modifying the ciphertext and replacing the authentication. I simply do not believe that making yourself susceptible to those problems is a win. The other issue with encrypt-then-authenticate is that it makes the code more complicated and just plain uglier, which directly makes you more susceptible to the #1 cause of actual breaks, implementation error.

    The problem with fancy modes is that they require variable-length extra data for the authentication, tend not to force you to pad to a multiple of the block size (which causes implementation headaches), and are very complicated, all of which leads to potential implementation errors, which is the thing I'm trying to avoid.

    As far as the advice of using standards – Anyone reading Practical Cryptography is already going to be ignoring that advice!

    Also, I don't see why you're implying that Schneier is a great person to take advice from. Applied Cryptography has an errata list half the length of the book, and a bunch of the suggestions in Practical Cryptography are completely daft, which was why I made that post.

  4. Oddly, the bit about encrypt then authenticate is one piece of advice I took the most heat for, but it's one where I agree with Practical Cryptography! I just gave the advice up front instead of hemming and hawing for pages first.

    Also, it was probably a bit unfair of me to bash on you about the CPU usage issues in RSA, because I didn't explain that in my post. If you're doing something where the bottleneck isn't CPU on a single side of the transaction, you're best off using ECC, specifically curve25519/ed25519, because those have lower overall CPU load and a bunch of other good features.

  5. Hi Bram,
    First of all, happy Thanksgiving!

    Forgive me if I take your points out of order. I think the most important one is the authenticate-then-encrypt issue. Symmetric encryption is relatively simple. It's hard to screw up. These days, getting the authentication wrong/out of order is one of the most common ways it does go wrong in deployed products. If you don't believe me, see here:

    https://blog.cryptographyengineering.com/2011/10/attack-of-week-xml-encryption.html

    The typical story is that somebody authenticates their plaintext, then they encrypt it. They see these as two logically separate operations, and moreover, they think that the point of authentication is just to keep attackers from submitting invalid messages. They don't even know about active attacks.

    Inevitably the encryption process embeds some sort of plaintext padding or encoding, and developers don't pay explicit attention to this. And really, why should they!? In a perfect world, your authentication procedure should be self-contained and modular — it shouldn't depend on the nuts and bolts of how your encryption scheme works.

    The end result is that people get it wrong. They authenticate the message but not the padding. Then the entire scheme becomes vulnerable to a padding oracle attack. This stuff is pretty embarrassing.
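    To make the failure mode concrete, here's a sketch of that broken pattern (my reconstruction, not any particular product): the MAC covers the message, the CBC padding does not, and the receiver has to unpad before it can verify.

    ```python
    # Broken authenticate-then-encrypt: MAC the message, then CBC-encrypt
    # message||tag with PKCS#7 padding. The receiver must unpad *before*
    # it can check the MAC, so padding failures and MAC failures are two
    # distinguishable events: exactly the oracle a padding-oracle attack needs.
    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import padding

    enc_key, mac_key = os.urandom(32), os.urandom(32)

    def mac_then_encrypt(m):                  # the pattern to avoid
        tag = hmac.new(mac_key, m, hashlib.sha256).digest()
        padder = padding.PKCS7(128).padder()
        padded = padder.update(m + tag) + padder.finalize()
        iv = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
        return iv + enc.update(padded) + enc.finalize()

    def receive(blob):
        iv, ct = blob[:16], blob[16:]
        dec = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).decryptor()
        padded = dec.update(ct) + dec.finalize()
        unpadder = padding.PKCS7(128).unpadder()
        data = unpadder.update(padded) + unpadder.finalize()  # failure #1: bad padding
        m, tag = data[:-32], data[-32:]
        if not hmac.compare_digest(tag, hmac.new(mac_key, m, hashlib.sha256).digest()):
            raise ValueError("bad MAC")                       # failure #2: bad MAC
        return m

    blob = bytearray(mac_then_encrypt(b"attack at dawn"))
    blob[-17] ^= 0x01            # tamper: this flips the padding-length byte
    try:
        receive(bytes(blob))
    except ValueError as e:
        print(e)                 # *which* error fired is the leak
    ```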

    On the other hand, if you simply encrypt first, then authenticate your ciphertext, all of these issues go away. You don't have to worry about how your encryption scheme works. You can /replace/ your encryption scheme if you decide to upgrade. It's totally modular. And it's secure against adaptive chosen ciphertext attacks — in fact, it's provably IND-CCA secure.

    So while it's possible to get authenticate-then-encrypt right /if you're very careful/, why should we promulgate an approach that clearly leads to problems when people make (honest, simple) mistakes? Encrypt-then-authenticate is more secure, /and/ it's easier to remember and implement.

  6. As to the second point: RSA vs. Rabin-Williams. I've given this a lot of thought since I wrote the above post.

    None of it has made me side with the Rabin point of view.

    Don't get me wrong — I love the Rabin scheme. I love that it has a security proof based on factoring. I think it's fun to explain on a whiteboard. But to implement? I would have reservations.

    First of all, there's the theoretical. Or I should say, the relative lack of it. I was able to find one paper demonstrating that OAEP is secure for the Rabin function, but it's only one paper and it came at a late date. This is perfectly reasonable (and if you don't believe it, use OAEP+), but it indicates how little interest there's been in the crypto community when it comes to practical issues related to Rabin. (Dan Boneh has a specialized padding scheme for Rabin called SAEP, however.)

    But the practical is what interests me more.

    First of all, Rabin-Williams is really, hugely vulnerable to chosen ciphertext attacks. This is a feature of the security proof, not a bug in the scheme. You can fix it with padding, but (as I've written elsewhere), padding is /really, really/ easy to screw up. All it takes is a little bit of leakage from an OAEP implementation (possibly through a timing channel) and your attacker can decrypt a 1024-bit ciphertext in about 1100 queries.

    In the case of RSA, this is bad — an annoyance. People can decrypt messages with some effort. In the case of Rabin-Williams it can be a disaster — it means you can recover the scheme's secret key. That's a big difference.

    Are you confident in your padding implementation? Go spend some time poking through the NSS implementation of PKCS#1 v1.5 or OAEP. Come back and tell me that this library — written by professionals and widely used — is 100% free of padding flaws. If you do, I have a bridge to sell you.

    Now, imagine what you're going to get if people start rolling their own implementations of things. Maybe I'm just paranoid, but I see things being much worse. And I'm not saying this as: “I imagine there might be serious problems”, but rather as “I've done this for a few years and I've /seen/ serious problems”.

    So the remaining issue is: should you use Rabin because it's more efficient? That's a tough one. CPU cycles are pretty cheap these days. Before you go doing something risky and non-standard, my advice is to be sure that you need to do it.

    RSA encryption with a standard modulus is /fast/. Are you really implementing a system so CPU-constrained that you need a different cryptographic scheme? Well, in that case I'd be inclined to take your advice, Bram — use ECC with a standard implementation. You'll also get smaller ciphertexts. Don't go off writing your own Rabin-Williams implementation. Please!

  7. Anyway, I hope I've covered the basics.

    These are real issues and I take them seriously. On the one hand, I love it when people do non-standard things. It means that I, as the crypto guy, get to be relevant. Usually it's just the software guys who get to break things.

    On the other hand, it's relatively straightforward to build solid crypto implementations these days — so much so that it amazes me when people get it wrong. Maybe I'll write a post about it.

    Last point: sorry if my post came off rude. Maybe it was over the top. I certainly never meant to bash you Bram, only the advice. If some of it came from Practical Cryptography, then you can consider Ferguson and Schneier officially bashed too.

    Have a great holiday!

  8. Thanks for your replies Matthew, and sorry for hassling you publicly – I assumed that since you hadn't replied after a while you were never going to.

    Fair enough with the potential for attacks on Rabin-Williams resulting in recovery of the key, rather than just doing a decryption. It *should* be possible to avoid that with a suitable implementation, and it's still a factor of 2 CPU usage difference against using RSA with an exponent of 3, so in cases where the bottleneck is CPU on the server I'm still going to recommend Rabin-Williams, but with the caveat that you should be very careful about how you do things. In the more common instance of there being either plenty of CPU around or the CPU load being more evenly distributed, ECC clearly wins. Perhaps advice to n00bs should emphasize that part.

    I have run the numbers on CPU usage, by the way, and there's about a factor of 5 difference between reasonable security parameters for rabin vs. ecc. Partially that was me reducing the security parameter a bit on rabin and leaving curve25519 unchanged, but there's definitely a big difference, although it disappears completely if you're comparing the standard exponent RSA to ecc; in that case ecc just wins even for CPU on the server, in addition to having smaller data sizes and less concern about side channel attacks.

    That said, I see no benefit of the 'standard' RSA exponent over using 3. In fact, the paper on OAEP+ – http://www.shoup.net/papers/oaep.pdf – is able to show a reduction for RSA-OAEP with an exponent of 3 but not with a higher exponent. I take this more as a commentary on the limits of proof schemes, though. Just because you can prove something about one scheme and not another does not mean that the first one is more secure; it might in fact be much less secure, it's just that your proof is able to say something about one but not the other. Given how weak the tools of computational complexity are, I take all that stuff with a grain of salt. Of course, in the absence of any reason to believe there's something wrong with it, one should use OAEP+ over OAEP, but I don't actually believe that there is a difference.

    (I'm a little bit aggro about this subject in general, because it doesn't take much of a conspiracy theorist to think that the advice for the larger exponent was because the people originally giving it were selling crypto accelerators, which gave them more than a little bit of a conflict of interest, and the fact that RSA was patented and Rabin-Williams was not probably didn't help either.)

    As far as padding goes, the only thing which should be signed/encrypted is symmetric key information or a hash; anything else is simply asking for trouble. Directly encrypting a PKCS#1, XML, or ASN.1 message is simply insane. This speaks to a bit of a difference in perspectives we have here – I'm assuming free rein to ignore any standard advice which is simply awful, and that the protocol will be mostly solidified once I'm done with it, while you're assuming that some standards must be followed for political reasons and that later developers will make random changes on the scale of changing the encryption mode. That's a level of out-of-control insanity I don't care to contemplate, so I won't, but it does bring up the points about order of encryption versus authentication, which I'll address in a separate comment because this is getting kind of long.

  9. Argh, it looks like my earlier comment didn't post. Quick summary: e=3 is always better than 65537. Yes the key recovery is a concern. Rabin is still faster than ECC for the server but for anything with distributed CPU or not bottlenecked on CPU ECC is better. Also, the only thing which should be encrypted is bootstrap info for symmetric crypto, using PKCS#1, ASN.1 or XML is insane.

    Anyhow, after a break for thanksgiving feast, on to order of operations. I've given this quite a bit more thought, and believe that the intuition behind encrypt then authenticate is that if you do authenticate then encrypt, then a chosen ciphertext attack is implicitly much stronger, because it busts through both layers. Given that chosen ciphertext attacks really do happen, this is a legitimate cause for concern. The counterargument is that doing authentication second puts the authentication in the clear, so an attacker can guess what the data being authenticated is, and proceed to do further attacks from there.

    It all boils down to implementation, and what's more likely to work. Based on your later comments, it's pretty clear that we're coming at this from different assumptions. I never, ever, use block chaining, and always use counter mode, so the whole swath of issues around padding simply aren't a problem. If I had a manager who had heard that cipher block chaining was better and required that I use it, it's likely that I would throw up my hands at trying to analyze it and simply take your advice, hold my nose, and hope that I hadn't screwed anything up, because I doubt I could ever convince myself that I'd analyzed everything properly.

    People thinking that cipher block chaining actually buys you something is one of the horrible legacies of Applied Cryptography, and the section in that book comparing encryption modes as if they're apples-to-apples and not even mentioning counter is one of the most boneheaded parts of the entire thing.

    If you assume counter mode, then implementing authentication first really is a completely separate layer, with code inside it nicely modularized out for the higher layer which encrypts everything. The whole thing is an order of magnitude prettier than it would be with a mode which required padding though, and it wouldn't be all *that* bad to switch to encrypt then authenticate, so perhaps the mode is a much bigger issue. I'm pretty sure that authenticate first wins in terms of lines of code, but I will happily do best practice things which I don't believe make any differences, such as OAEP+ instead of OAEP, as long as I don't think they're causing any problems, and they don't make me vomit, like XML does. (Yes I know security people these days recommend against XML, but I avoided it even for non-security applications back when I took a huge amount of heat for being not 'standards-compliant'. It's also still recommended by Practical Cryptography.)

  10. Bram, you are right to be concerned that a clear-text MAC tag might allow the adversary to guess what the message was. However, in the encrypt-then-authenticate model, the MAC tag is computed over the encrypted data. The attacker already has this data, so they don't need to guess about it. Since the MAC only ever saw the encrypted data, the tag it generates can't leak anything the ciphertext itself couldn't.

    The problem you describe actually applies to encrypt-and-MAC systems, where the ciphertext is enc(m)||mac(m). This is hugely problematic and should never be used.
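    A tiny sketch of that leak (my illustration): in encrypt-and-MAC the tag is a deterministic function of the plaintext, so repeated messages are visible to an eavesdropper even though the ciphertexts differ.

    ```python
    # In enc(m)||mac(m) the tag depends only on the plaintext: send the
    # same message twice and the attacker sees the same tag twice.
    import hmac, hashlib, os
    mac_key = os.urandom(32)
    tag1 = hmac.new(mac_key, b"BUY", hashlib.sha256).digest()
    tag2 = hmac.new(mac_key, b"BUY", hashlib.sha256).digest()
    assert tag1 == tag2          # deterministic: repeated plaintexts leak
    ```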

  11. I don't get the big emphasis on the value of e. Yes, small values are faster for public key operations. But isn't the bottleneck in most servers the private key operation? AFAIK the private key operation is much slower than the public key operation, and isn't faster for small values of e.

  12. Could someone please explain Bram's advice #3? How does the recipient of a CTR-mode ciphertext know which key to use? Is it expected that some out-of-band (OOB) mechanism is used? This seems impractical except in interactive scenarios.
