About a year ago I got into a discussion on Twitter with a couple of other cryptographers. The subject: why do so many software developers use lazy cryptography?

The instigation for this discussion was actually a piece of malware – a popular, widespread botnet that forgot to use digital signatures to sign its control messages. Though I know it’s wrong to complain about broken malware, at the same time: these idiots had money on the line! If they couldn’t be bothered to properly secure their software, how can we possibly expect, say, Evernote to get it right?

And indeed, the more you look at software, the more you realize this problem cuts across all types of software development. People suddenly find themselves in a position where they could really use strong cryptography (be it ciphers, signatures, collision-resistant hash functions) but instead of buckling down and doing it right, the developers inevitably opt for something terrible.

Terrible can mean a lot of things in this context. Mostly it means using no crypto at all. Sometimes it’s grabbing an RC4 implementation off of Wikipedia. Or copying something from Applied Cryptography. Or in a few borderline cases, grabbing the MS Crypto API and thus getting stuck with a basket of legacy algorithms thanks to ancient Windows XP compatibility issues.

Anyway, the problem — it seemed to us at the time — wasn’t just a lack of knowledge, or even an absence of good crypto libraries. Rather, the problem was that you had to use libraries. If your developer has hit the point where s/he’s willing to copy and paste RC4 from Wikipedia, you’re already in a kind of Fifth Dimension of laziness. Nobody’s going to pull in NaCl or OpenSSL just to encrypt one little blob of text.

The output of all this thinking was a grand vision. If we couldn’t bring the developers to the crypto, we would bring the crypto to the developers. Specifically, we’d develop an easy to use crypto library small enough that you could copy and paste it right into your code. Since we were already using Twitter, we had a metric for complexity right there: the entire library would have to fit in a small number of Tweets. Jean-Philippe Aumasson (developer of the SHA3 finalist BLAKE) even laid the cornerstone by developing TweetCipher, an entirely new authenticated cipher that fits in about six Tweets.

It was an exciting time. We dreamed big! We made big plans! Then – this being Twitter – we got distracted by something shiny and forgot all about it.

Fortunately others have risen to carry on our great work. Tonight marks the publication of TweetNaCl, a re-implementation of the NaCl crypto library that fits in exactly 100 Tweets. TweetNaCl is brought to you by the same people who wrote NaCl, which means that it’s probably quite good. More to the point, it’s actually compatible with NaCl, which means it uses standard and well-vetted algorithms. (Here’s the code in indented, non-Tweet form.)

Would I recommend you use TweetNaCl? Well, no. I’d recommend you use the real NaCl. But if you do find yourself with a crypto problem, and it looks like you’re just about to copy code from Wikipedia… then maybe TweetNaCl could be just the thing.

The anatomy of a bad idea

I’ve been running a fever all day and am only coming back to my senses now, thanks to a huge (and possibly toxic) dose of Motrin. Sometimes when you’re under the weather, things kind of blur together. This is probably why I can only recall two things I read this afternoon.

The first was a news story concerning a series of tests in which the US government dropped nuclear bombs on a variety of canned beverages, apparently to see how well they held up (but possibly just because they had some nukes). The second was the W3C Web Cryptography API, a draft Javascript crypto API that will ultimately power a lot of the crypto we use on the web.

Even in my addled state, I can tell that only one of these things is a good idea.

(In case you’re still guessing, here’s a hint: we’ll still need beer after the bombs fall.)

Before I go further, let me be clear: I think the web absolutely should have a secure browser-based crypto API! My problem is that I can’t bear to watch the W3C screw it up. Because screw it up they absolutely will. The W3C, bless their sweet little hearts, has left a trail of wreckage behind every crypto project they’ve ever taken on. And these were small projects!

The Web Crypto API is far more ambitious, which means it will blow up that much more spectacularly when it goes. And unfortunately, the W3C contributors seem immune to all attempts to dissuade them from this course.

So what is it?

The Web Cryptography API (henceforth WC-API) is a perfectly lovely idea: to build a standard Javascript API for doing crypto on the web. In theory it’ll be supported by lots of browsers and will include all the logic you need to manage keys securely and perform common cryptographic operations. Things like encryption, signing, etc.

This can’t be deployed soon enough — if it’s done right. The problem with current Javascript implementations is that they’re really bad at managing keys securely, and the crypto implementations are frequently ad-hoc, slow, and vulnerable to side-channel attacks. A common API should allow us to do much, much better.

But at the same time we need to recognize that cryptography is hard (seriously: this should be the name of my blog.) Moreover, Javascript development is — no offense — the last place we want people making hard cryptographic decisions. Decisions like, for example, ‘should I use the obsolete and very broken PKCS#1v1.5 encryption padding scheme in my protocol?’ These are things that very few people need to think about, and most people who are thinking about them are going to make the wrong choice.

Two visions

This is not a new argument. For two competing visions of how a cryptography library should work, consider two existing libraries: OpenSSL and NaCl. OpenSSL is the grand old lady of cryptography, ‘beloved’ by millions. Actually OpenSSL isn’t a crypto library except by accident — it’s an SSL library that happens to expose a bunch of underlying crypto routines. NaCl is much newer and basically tries to repair the damage wrought by OpenSSL.

If we must resort to analogies, let’s try these: OpenSSL is the space shuttle of crypto libraries. It will get you to space, provided you have a team of people to push the ten thousand buttons required to do so. NaCl is more like an elevator — you just press a button and it takes you there. No frills or options.

I like elevators.

Now obviously there are different reasons to take the space shuttle. Sometimes you need to go to places that aren’t supported by the buttons on an elevator. But you’d better know exactly what you’re doing, because making a mistake on the space shuttle is not something you get to do twice.

More concretely, NaCl takes the position that most users don’t need their crypto to be backwards compatible with other systems, and instead provides a simplified abstraction in the form of the ‘crypto_box‘ operation. This abstracts all the messy details of public and secret key cryptography into a single secure function call. This approach may seem a bit extreme, but it’s also sufficient for a huge amount of what most people need to get done when they’re encrypting things.

WC-API does not adhere to the NaCl vision. Instead, it follows in the OpenSSL tradition. Here the basic unit is the crypto primitive, which can be selected from a menu provided by the designers. This menu includes some pretty solid primitives like RSA-OAEP, ECDH and AES-GCM, but it also includes ridiculous stuff that should be banned from the universe — things like the aforementioned RSA-PKCS#1v1.5 (signature and encryption) padding as well as several unauthenticated symmetric modes of operation. Worse, some of the bad primitives are also the recommended ones.

The user has to pick the right ones, generate the right keys, and hopefully (god bless) tie them all together without running afoul of the various practical attacks that they’ll be victim to when they make the wrong choice.

Some will do this. Some will not.

But maybe you’re just being too sensitive…?

This is possible, but gosh, don’t take my word for it. If you think I’m just blowing smoke, please at least listen to IETF Crypto Forum Research Group representatives Kenny Paterson, Tibor Jager and Juraj Somorovsky when they say exactly the same things as me but in scarier language:

We noted that the standard contains some legacy crypto algorithms, which have well-known serious weaknesses but are (unfortunately) still widely used. These weaknesses frequently lead to attacks on systems that naively use these algorithms.

For instance, several works [Ble98, BFKST12, JSS12, …] demonstrate the vulnerability of RSA PKCS#1 v1.5 to variants of Bleichenbacher’s attack in various (practical and widely-used) applications. Using PKCS#1 v1.5 in a secure way is highly non-trivial and often impossible, in particular in Web-based applications. Moreover, it is known that the pure availability of PKCS#1 v1.5 encryption may also spoil the security of other algorithms, like RSA-OAEP or RSASSA-PKCS1-v1_5 signatures.

Similarly, unauthenticated block-cipher modes of operation, like CTR and CBC, have in the past been shown to enable “padding-oracle”-like attacks, for instance in [Vau02, DR’10, DR’11, JS’11, AP’12, …]. These modes of operation should not be used without additional security measures. An appropriate measure is a message authentication code, which should be computed over the ciphertext and must be verified by the receiver.

Actually we would like to recommend that these legacy ciphers are removed from the standard. While in some special cases countermeasures against the known attacks are possible, implementing these is highly non-trivial and error-prone, thus much harder than implementing new libraries supporting secure algorithms. Even worse, in many examples there exists no countermeasure.

What about backwards compatibility?

Yes, I concede that there may occasionally be times when your Javascript needs to decrypt a message that was encrypted by some other, archaic piece of crypto software. And for those times there really is a need to support some of the obsolete crypto schemes provided in the current W3C specification. But “support” is not the same as “provide in your mainline API”.

Unfortunately, several proposals to separate the older algorithms into a different namespace have been shot down, as have proposals to strongly warn users against the bad algorithms. I can’t see any justification for these decisions, except that possibly there is a God and he wants cryptographers/pentesters to stay busy for the next few years.

So what does it all mean?

A while ago on Twitter somebody asked why I spend so much time criticizing things that are old and broken, rather than making things new and shiny. When I finished sputtering, I realized that the answer is simple: I’m lazy.

But that’s ok, because the truth is, most software developers are too. Not in a bad way, but simply in the sense that we want to get from point A to point B without being distracted by a lot of useless garbage. Give us a way to get between those two points and we’ll do it with alacrity. Make us understand that PKCS#1v1.5 is universally insecure unless you use it in a very particular way (that’s documented almost nowhere), and that even this relies on some pretty unusual assumptions — and you’ll tend to find a lot of people just making mistakes.

And in case this is all too doom-and-gloom for you, here’s a parting thought to cheer you up. No matter how bad this API turns out to be, at least we don’t have to drink it.

Update (12/30): A few people have criticized this post on the grounds that I should join the WebCrypto WG rather than complaining about it. Usually the implication is that I’m after blog hits.

The short answer is: if I really wanted blog hits, I wouldn’t be bitching about an obscure web standard! I’d be complaining about Apple. I speak from experience here.

The longer response is that many bright folks in and outside of the WG have made these points (see the CFRG note above). Their input has been respectfully considered and rejected. My impression is that we’re not dealing with a lack of knowledge here, but rather a more fundamental difference in approach. The WG has made an institutional decision to emphasize backwards compatibility over security. This works if you’re designing a standard and hoping to get it adopted quickly. It just doesn’t lead to a good standard.

I think there’s a certain kind of person who has the time and political skills to fix this kind of organizational problem, and I sincerely hope that person gets involved. But sadly, that person is not me.