Friday, October 26, 2012

Attack of the week: Cross-VM side-channel attacks

It's been a busy week for crypto flaws. So busy in fact, that I'm totally overwhelmed and paralyzed trying to write about them. It's made blogging almost impossible.

So let's just blast through it, then we can get to the subject I actually want to talk about.

First, at the start of the week we received news that many Android applications and a few (critical!) Java libraries simply don't validate certificates on TLS/SSL connections. This is disastrous, embarrassing and stupid, since lack of certificate verification makes TLS trivially vulnerable to man-in-the-middle attacks. I was going to say something shrill and over-the-top about all of this, but turns out that Threatpost has already totally beaten me there with this synopsis:
The death knell for SSL is getting louder.
Yes indeed, Threatpost. Thank you for making me feel better.

In other news, we learned that major email providers -- who should remain nameless, but are actually Microsoft, Google, Apple, Yahoo and everyone else -- have been competing to see who can deploy the shortest RSA key for DomainKeys Identified Mail (DKIM). I'm told that Yahoo was ahead with 384 bits, but Microsoft had a research team working on a 22-bit key and Apple had abandoned keys altogether, opting for a simple note from Tim Cook asserting that any invalid email messages were probably the recipient's fault. (Ba-dum-tschch.)

So that's the headline news, and I don't want to write about any of it.

What I do want to write about is a result that's gotten a lot less attention -- mostly because it's subtle, falls into the category of 'things we thought we knew about, but never fully explored', and because it involves Hidden Markov models -- which are to tech reporters as raw garlic and silver are to vampires.

This new result is by Zhang, Juels, Reiter and Ristenpart, and it appeared at the ACM CCS conference just a few weeks ago. It deals with something very relevant to the way we build modern systems. Specifically: if we're going to go and stick all of our cryptographic services in cloud-based VMs, how secure can we possibly expect them to be?

The answer is: unfortunately, not very. To get into the details I'm going to use the standard 'fun' question/answer format I usually save for these kinds of attacks.
Why would I put my cryptography in a VM anyway?
In case you missed it, the cloud is our future. The days of running our own cranky hardware and dealing with problems like power-supply faults are long gone. If you're deploying a new service, there's a good chance it'll be at least partly cloud-based. There's a decent chance it will be entirely cloud-based.

Take Instagram, for instance. Their entire $1bn service runs on a few hundred cleverly-managed EC2 instances. While Instagram itself isn't exactly a security product, they do use TLS and SSH (presumably on the instances themselves), and this implies public key crypto and the use of secret keys.

Now this shouldn't be a problem, but VM instances often share physical hardware with other instances, and since EC2 is a public service, those co-resident VMs may not be entirely friendly. The major threat here is, of course, software vulnerabilities -- things that let an attacker break out of one VM and into another. But even if you perfect the software, there's another more insidious threat: namely, that the attacking VM instance might be able to run a side-channel attack on a co-resident VM.

This threat has long been discussed, and security people generally agree that it's a concern. But actually implementing such an attack has proven surprisingly difficult. This is because real hypervisors put a lot of fluff between the attacking process and the bare metal of the server. Different VMs often run on different cores. Moreover, since each VM has an entire OS running inside of it, there's tons of noise.

In fact, there's been plenty of reason to wonder whether these attacks are simply the product of security researchers' fevered imaginations, or something we genuinely need to worry about. What Zhang, Juels, Reiter and Ristenpart tell us is: yes, we should worry. And oh boy, do they do it in a nifty way.
So what is it and how does it work?
The new result focuses specifically on the Xen Hypervisor, which is the one actually used by services like Amazon EC2. Although the attack was not implemented in EC2 itself, it focuses on similar hardware: multi-core servers with SMT turned off. The threat model assumes that the attacker and victim VM are co-resident on the machine, and that the victim is decrypting an Elgamal ciphertext using libgcrypt v.1.5.0.

Now, Elgamal decryption is a great target for side-channel attacks, since it's implemented by taking a portion of the ciphertext, which we'll call x, and computing x^e mod N, where e is the secret key and N is (typically) a prime number. This exponentiation is implemented via the 'square and multiply' algorithm, shown in the figure below:

Square and multiply algorithm (source: Zhang et al.)
The first thing you notice about square-and-multiply is that its operation depends fundamentally on the bits of the secret key. If the ith bit of e is 1, the steps labeled (M) and (R) are conducted. If that bit is 0, they aren't. The bits of the key thus result in a distinctive sequence of computations, one that can be detected if the attacking VM is able to precisely monitor the hardware state.
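
To make that concrete, here's a minimal Python sketch of left-to-right square-and-multiply -- the textbook pattern the paper targets. (This is an illustration of the algorithm, not libgcrypt's actual code.)

    # Left-to-right square-and-multiply for x^e mod N. The (M) and (R)
    # steps inside the 'if' execute only when the current key bit is 1 --
    # exactly the instruction-level difference the attacking VM watches for.
    def mod_exp(x, e, N):
        y = 1
        for bit in bin(e)[2:]:          # bits of the secret key, MSB first
            y = (y * y) % N             # (S)quare, then (R)educe: always
            if bit == '1':
                y = (y * x) % N         # (M)ultiply, then (R)educe: key bit 1 only
        return y

    assert mod_exp(7, 11, 33) == pow(7, 11, 33)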

Now, side-channel attacks on square-and-multiply are not new. They date back at least to Paul Kocher's observations in the mid-to-late 90s using power and operating time as a channel, and they've been repeatedly optimized as technology advances. More recent attacks have exploited cache misses in a shared processor cache (typical in hyper-threading environments) as a means by which a single process can monitor the execution of another one.

However, while these attacks have worked from one process to another, they've never been applied to the full Xen VM setting. This is a pretty challenging problem for a variety of reasons, including:
  • The difficulty of getting the attacking process to run frequently enough to take precise measurements.
  • The problem that VCPUs can be assigned to different cores, or irrelevant VCPUs can be assigned to the same core.
  • Noisy measurements that give only probabilistic answers about which operations occurred on the target process.
And so on and so forth. The challenge in this paper is to overcome all that nonsense and still recover useful information from the attacked VM. 
So what's the basic idea?
At a fundamental level, the attack in this paper is quite similar to previous attacks that worked only across processes. The attacking VM first 'primes' the L1 instruction cache by allocating contiguous memory pages, then executing a series of instructions designed to load the cache with cache-line-sized blocks it controls.

The attacker then gives up execution and hopes that the target VM will run next on the same core -- and moreover, that the target is in the process of running the square-and-multiply operation. If it is, the target will cause a few cache-line-sized blocks of the attacker's instructions to be evicted from the cache. Which blocks get evicted depends strongly on the operations that the target conducts.

To see what happened, the attacking VM must recover control as quickly as possible. It then 'probes' to see which blocks have been evicted from the cache set. (This is done by executing the same instructions and timing the results. If a given block has been evicted from the cache, execution will result in a cache miss and a measurable delay.) By compiling a list of which blocks were missing, the attacker gains insight into which instructions may have been executed while the target VM was running.
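
If that's hard to picture, here's a toy Python model of a single prime+probe round on one cache set. Everything here is hypothetical (an 8-way set, perfect LRU replacement, a victim that touches a single line), and on real hardware the probe is a timing measurement rather than a lookup -- but the logic is the same.

    # Toy PRIME+PROBE against one L1 cache set. All parameters made up.
    class CacheSet:
        def __init__(self, ways=8):
            self.ways = ways
            self.lines = []                  # most recently used at the end
        def touch(self, tag):
            if tag in self.lines:
                self.lines.remove(tag)
            elif len(self.lines) == self.ways:
                self.lines.pop(0)            # evict the least recently used line
            self.lines.append(tag)

    def prime(s):
        for i in range(s.ways):              # fill every way with attacker lines
            s.touch(('attacker', i))

    def probe(s):
        # On real hardware: re-run the priming instructions and time them;
        # a slow (missing) line means the victim executed code mapping to
        # this set. Here we just count the evictions directly.
        return sum(1 for i in range(s.ways) if ('attacker', i) not in s.lines)

    s = CacheSet()
    prime(s)
    s.touch(('victim', 'square-code'))       # victim runs in this set...
    print(probe(s))                          # ...so the probe sees 1 eviction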

A big challenge for the attacker is the need to regain control quickly. Wait too long and other activity will churn through the cache, leaving a state that gives no useful information.

Normally Xen doesn't allow a virtual CPU (VCPU) to rapidly regain control, but there are a few exceptions: in particular, Xen gives high priority to VCPUs that receive an interrupt. The authors were able to exploit this by running a 2-VCPU VM, where the second VCPU's only job is to issue Inter-Processor Interrupts (IPIs) in an effort to get the first VCPU back in control as quickly as possible. Using this approach they were able to get back in the saddle within about 16 microseconds -- an eternity in processor time, but short enough that the cache still held useful information.
But isn't that data noisy as hell? And fragmented?
Yes. The challenge here is that the attacking VM has no control over where in the computation it will jump in. It could get just a small fragment of the square-and-multiply operation (which runs for hundreds or thousands of operations), it could jump into the OS kernel, or it could even land on the wrong VM, since VCPUs can be scheduled onto any core. Plus the data itself can be pretty noisy.

The solution to these problems is what's so nifty about this paper. First, they don't just monitor one execution -- they assume that the device is constantly decrypting different ciphertexts, all with the same key. This is a pretty reasonable assumption for something like an SSL web server.

Next, they use machine learning techniques to identify which of the many possible instruction sequences are associated with particular cache measurements. This requires the researchers to train the classifier beforehand on the target hardware, having the target VCPU conduct a series of square, multiply and modular reduction operations in order to build a training model. During the attack, the data was further processed using a Hidden Markov Model to eliminate errors and bogus measurements from non-cryptographic processes.
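
For the curious, that cleanup step is classic HMM territory: given a noisy sequence of observations, find the most likely sequence of hidden operations. Here's a generic Viterbi decoder in Python with completely made-up probabilities -- the paper's actual models are trained on real measurements and are far richer than this.

    # Generic Viterbi decoding: recover the most likely hidden sequence
    # (e.g. Square vs. Multiply operations) from noisy cache observations.
    def viterbi(obs, states, start_p, trans_p, emit_p):
        V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
        path = {s: [s] for s in states}
        for o in obs[1:]:
            V.append({})
            new_path = {}
            for s in states:
                prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                                 for p in states)
                V[-1][s] = prob
                new_path[s] = path[prev] + [s]
            path = new_path
        best = max(states, key=lambda s: V[-1][s])
        return path[best]

    # Hypothetical toy: 'hi'/'lo' cache signals hinting at (S)quare or (M)ultiply.
    states = ('S', 'M')
    start  = {'S': 0.6, 'M': 0.4}
    trans  = {'S': {'S': 0.5, 'M': 0.5}, 'M': {'S': 0.9, 'M': 0.1}}
    emit   = {'S': {'hi': 0.2, 'lo': 0.8}, 'M': {'hi': 0.7, 'lo': 0.3}}
    print(viterbi(['lo', 'hi', 'lo'], states, start, trans, emit))  # ['S', 'M', 'S']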

Even after all this work, the attacker winds up with thousands of fragments, some of which contain errors or low-confidence results. These can be compared against each other to reduce errors, then stitched together to recover the key itself. Fortunately this is a problem that's been solved in many other domains (most famously: DNA sequencing), and the techniques used here are quite similar.

A good way to illustrate the process is to present a totally made-up example, in which six fragments are reconstructed to form a single spanning sequence:

S1: SRSRMRSRMRSRSRSMR
S2:        MRSRSRSRMR**SRMRSR

S3:      SRMRSRSR                    
S4:                      MRSRSRSR**SRMRSR
S5:                                  MR*RSRMRSRMRSR
S6:                MRSRSRMRSRSRSRMR
    ------------------------------------------------
    SRSRMRSRMRSRSRSMRSRSRMRSRSRSRMRSRMRSRSRMRSRMRSR

This is obviously a huge simplification of a very neat (and complex) process that's very well described in the paper. And if all this technical stuff is too much for you: it's basically like the scene in Argo where the little kids reconstructed the shredded photos of the embassy workers. Just without the kids. Or the photos.
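
If you'd rather see the idea in code than in shredded photos, here's a small greedy overlap-assembly sketch in Python. It captures the flavor only -- the real reconstruction is far more careful about errors and confidence scores. A '*' stands for a low-confidence symbol that matches anything.

    # Greedy assembly: repeatedly merge the two fragments with the longest
    # suffix/prefix overlap, treating '*' as a wildcard.
    def overlap(a, b):
        for k in range(min(len(a), len(b)), 0, -1):
            if all(x == y or '*' in (x, y) for x, y in zip(a[-k:], b[:k])):
                return k
        return 0

    def merge(a, b, k):
        mid = ''.join(y if x == '*' else x for x, y in zip(a[-k:], b[:k]))
        return a[:-k] + mid + b[k:]

    def assemble(frags, min_overlap=4):
        frags = list(frags)
        while len(frags) > 1:
            k, i, j = max((overlap(a, b), i, j)
                          for i, a in enumerate(frags)
                          for j, b in enumerate(frags) if i != j)
            if k < min_overlap:
                break                        # nothing left that plausibly joins
            merged = merge(frags[i], frags[j], k)
            frags = [f for n, f in enumerate(frags) if n not in (i, j)]
            frags.append(merged)
        return frags

    print(assemble(['SRSRMRSRMR', 'MRSRMRSRSRSR', 'SRSRSRMRSR']))
    # -> ['SRSRMRSRMRSRSRSRMRSR']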
So does it actually work?
It would be a hell of a bad paper if it didn't.

With everything in place, the researchers applied their attack against a 4096-bit Elgamal public key, which (due to an optimization in libgcrypt) actually has a 457-bit private key e. After several hours of data collection, they were able to obtain about 1000 key-related fragments, of which 330 turned out to be long enough to be useful for key reconstruction. These allowed the attackers to reconstruct the full key with only a few missing bits, and those they were able to guess using brute force.

And that, as they say, is the ballgame.
Left: Fragment size vs. number of fragments recovered, Right: sequence accuracy as a function of fragments in a batch. (source: Zhang et al.)
Oh my god, we're all going to die.
I would note that this isn't actually a question. But before you start freaking out and pulling down your cloud VMs, a few points of order.

First: there's a reason these researchers did this with libgcrypt and Elgamal, and not, say, OpenSSL and RSA (which would be a whole lot more useful). That's because libgcrypt's Elgamal implementation is the cryptographic equivalent of a 1984 Stanley lawnmower engine -- it uses textbook square-and-multiply with no ugly optimizations to get in the way. OpenSSL RSA decryption, on the other hand, is more like a 2012 Audi turbo-diesel: it uses windowing and CRT and blinding and two different types of multiplication, all of which make it a real pain in the butt to deal with.
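
(As an aside, here's roughly what one of those defenses -- base blinding -- looks like, in a deliberately simplified Python sketch. The secret-key exponentiation runs on a randomized value, so an attacker can't correlate side-channel measurements across decryptions. Real implementations combine this with CRT and windowed multiplication, which is exactly what makes them such a pain to attack.)

    # RSA decryption with base blinding (simplified sketch, toy numbers).
    import secrets
    from math import gcd

    def blinded_rsa_decrypt(c, d, e, N):
        while True:
            r = secrets.randbelow(N - 2) + 2
            if gcd(r, N) == 1:                # blind must be invertible mod N
                break
        c_blind = (c * pow(r, e, N)) % N      # randomize the ciphertext
        m_blind = pow(c_blind, d, N)          # secret exponent sees only c * r^e
        return (m_blind * pow(r, -1, N)) % N  # unblind: (m * r) / r = m

    # Textbook toy parameters (never use!): N = 3233 = 61 * 53, e = 17, d = 2753
    assert blinded_rsa_decrypt(pow(42, 17, 3233), 2753, 17, 3233) == 42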

Secondly, this attack requires a perfect set of conditions. As proposed, it only works with two co-resident VMs, and it requires specific training on the target hardware. This doesn't mean the attack isn't viable (especially since cloud services probably do use lots of identical hardware), but it does mean that messiness -- the kind you get in real cloud deployments -- is going to be more of an obstacle than it seems.

One last thing worth mentioning is that before you can attack a VM, you have to get your attack VM onto the same physical hardware as your target. This seems like a pretty big challenge. Unfortunately, some slightly older research indicates that it's actually quite feasible in existing cloud deployments. In fact, for only a few dollars, researchers were able to co-locate themselves with a given target VM with about 40% probability.

In the short term, you certainly shouldn't panic about this, especially given how elaborate the attack is. But it does indicate that we should be thinking very hard about side-channel attacks, and considering how we can harden our systems and VMMs to be sure they aren't going to happen to us.

Tuesday, October 16, 2012

The crypto dream

(Credit: Matt Blaze/cc)
Arvind Narayanan just gave a fascinating talk at Princeton's Center for Information Technology Policy entitled 'What Happened to the Crypto Dream?'. That link is to the video, which unfortunately you'll actually have to watch -- I have yet to find a transcript.

From the moment I heard the title of Arvind's talk I was interested, since it asks an important question that I wanted a chance to answer. Specifically: what happened to the golden future of crypto? You know, the future that folks like Phil Zimmermann offered us -- the one that would have powered the utopias of Neal Stephenson and the dystopias of William Gibson (or do I have that backwards?) This was the future where cryptography fundamentally altered the nature of society and communications and set us free in new and exciting ways.

That future never quite arrived. Oh, mind you, the technology did -- right on schedule. We're living in a world where it's possible to visit Tokyo without ever leaving your bed, and where governments go to war with software rather than tanks. Yet in some ways the real future is more Stephen King than William Gibson. The plane landed; nobody was on board.

So what did happen to the crypto dream?

Arvind gives us a bunch of great answers, and for the most part I agree with him. But we differ in a few places too. Most importantly, Arvind is a Princeton scholar who has been known to toss out terms like 'technological determinism'. Me, I'm just an engineer. What I want to know is: where did we screw it all up? And how do we make it right?

The premise, explained

Once upon a time most important human transactions were done face-to-face, and in those transactions we enjoyed at least the promise that our communications would be private. Then everything changed. First came the telegraphs and telephones, and then computer networks. As our friends and colleagues spread farther apart geographically, we eagerly moved our personal communications to these new electronic networks. Networks that, for all their many blessings, are anything but private.

People affected by telephonic surveillance (1998-2011). Source: ACLU
Some people didn't like this. They pointed out that our new electronic communications were a double-edged sword, and that they were costing us protections our ancestors had fought for. And a very few decided to do something about it. If technological advances could damage our privacy, they reasoned, then perhaps the same advances could help us gain it back.

Technically, the dream was born from the confluence of three separate technologies. The first was the PC, which brought computing into our living room. The second was the sudden and widespread availability of computer networking: first BBSes, then WANs like GTE Telenet, and then the Internet. Most critically, the dream was fueled by the coincidental rise of scientific, industrial cryptography, starting with the publication of the Data Encryption Standard and continuing through the development of technologies like public-key cryptography.

By the 1990s, the conditions were in place for a privacy renaissance. For the first time in history, the average person had access to encryption technology that was light years beyond what most governments had known before. The flagbearer of this revolution was Philip Zimmermann and his Pretty Good Privacy (PGP), which brought strong encryption to millions. Sure, by modern standards PGP 1.0 was a terrible flaming piece of crap. But it was a miraculous piece of crap. And it quickly got better. If we just hung in there, the dream told us, the future would bring further miracles: things like perfect cryptographic anonymity and untraceable electronic cash.

It's worth pointing out that the 'dream' owes a lot of its power to government itself. Congress and the NSA boosted it by doing what they do best -- freaking out. This was the era of export regulations and 40-bit keys and Clipper chips and proposals for mandatory backdoors in all crypto software. Nothing says 'clueless old dinosaurs' like the image of Phil Zimmermann being searched at the border -- for copies of a program you could download off the Internet!

And so we all held a million key signing parties and overlooked a few glaring problems with the software we were using. After all, these would be resolved. Once we convinced the masses to come along with us, the future would be encrypted and paid for with e-Cash spent via untraceable electronic networks on software that would encrypt itself when you were done using it. The world would never be the same.

Obviously none of this actually happened.

If you sent an email today -- or texted, or made a phone call -- chances are that your communication was just as insecure as it would have been in 1990. Maybe more so. It probably went through a large service provider who snarfed up the cleartext, stuffed it into an advertising algorithm, then dropped it into a long-term data store where it will reside for the next three years. It's hard to be private under these circumstances.

Cryptography is still everywhere, but unfortunately it's grown up and lost its ideals. I don't remember the last time I bothered to send someone a GPG email -- and do people actually have key signing parties anymore?

There are a few amazingly bright spots in this dull landscape -- I'll get to them in due course -- but for the most part crypto just hasn't lived up to its privacy billing. The question is why? What went so terribly wrong? In the rest of this post I'll try to give a few of my answers to this question.
Problem #1: Crypto software is too damned hard to use.
Cryptographers are good at cryptography. Software developers are good at writing code. Very few in either camp are good at making things easy to use. In fact, usability is a surprisingly hard nut to crack across all areas of software design, since it's one of the few places where 99% is just not good enough. This is why Apple and Samsung sell a zillion phones every year, and it's why the 'year of Linux on the Desktop' always seems to be a few years away.

Security products are without a doubt the worst products for usability, mostly because your user is also the enemy. If your user can't work a smartphone, she might not be able to make calls. But if she screws up with a security product, she could get pwned.

Back in 1999 -- in one of the few usability studies we have in this area -- Alma Whitten and J.D. Tygar sat down and tried to get non-experts to use PGP, a program that experts generally thought of as being highly user friendly. Needless to say the results were not impressive. And as fun as it is to chuckle at the people involved (like the guy who revoked his key and left the revocation message on his hard drive) the participants weren't idiots. They were making the same sort of mistakes everyone makes with software, just with potentially more serious consequences.

And no, this isn't just a software design problem. Even if you're a wizard with interfaces, key management turns out to be just plain hard. And worse, your brilliant idea for making it easier will probably also make you more vulnerable. Where products have 'succeeded' in marketing end-to-end encryption, they've usually done so by making radical compromises that undermine the purpose of the entire exercise.

Think Hushmail, where the crypto client was delivered from a (potentially) untrustworthy server. Or S/MIME email certificates, which are typically generated in a way that could expose the private key to the CA. And of course there's Skype, which operates its own user-friendly CA -- a CA that can potentially pwn you in a heartbeat.
Problem #2: Snake-oil cryptography has cluttered the space.
As Arvind points out, most people don't really understand the limitations of cryptography. This goes for people who rely on it for their business (can't tell you how many times I've explained this stuff to DRM vendors.) It goes double for the average user.

The problem is that when cryptography does get used, it's often applied in dangerous, stupid and pointless ways. And yet people don't know this. So bad products get the same billing as good ones (or better), and the market lacks the information needed to sort them out. This is a mess, since cryptography -- when treated as a cure-all with magical properties -- can actually make us less secure than we might otherwise be.

Take VPN services, for example. These propose to secure you from all kinds of threats, up to and including totalitarian governments. But the vast majority of commercial VPN providers do terribly insecure things, like use a fixed shared-secret across all users. Data encryption systems are another big offender. These are largely purchased to satisfy regulatory requirements, and buying one can get you off the hook for all kinds of bad behavior: regulations often excuse breaches as long as you encrypt your data -- in some way -- before you leave it in a taxi. The details are often overlooked.

With so much weak cryptography out there, it's awfully hard to distinguish a good system. Moreover, the good system will probably be harder to use. How do you convince people that there's a difference?
Problem #3: You can't make money selling privacy.
As I'll explain in a minute, this one isn't entirely true. Yet of all the answers in this post, I tend to believe it's also the one with the most explanatory power.

Here's the thing: developing cryptographic technology isn't cheap, and it isn't fast. It takes time, love, and a community of dedicated developers. But more importantly, it requires subject matter experts. These people often have families and kids, and kids can't eat dreams. This means you need a way to pay them. (The parents, that is. You can usually exploit the kids as free labor and call it an 'internship'.)

Across the board, commercialization of privacy technologies has been something of a bust. David Chaum gave it a shot with his anonymous electronic cash company. Didn't work. Hushmail had a good run. These guys are giving it a shot right now -- and I wish them enormous luck. But I'm not sure how they're going to make people pay for it.

In fact, when you look at the most successful privacy technologies -- things like PGP or Tor or Bitcoin  -- you notice that these are the exceptions that prove the rule. Tor was developed with US military funding and continues to exist thanks to generous NGO and government donations. PGP was a labor of love. Bitcoin is... well, I mean, nobody really understands what Bitcoin is. But it's unique and not likely to be repeated.

I can think of at least two privacy technologies that would be wonderful to have right now, yet implementing either would be impossible without millions in seed funding. And how would you recover that money? I can't quite figure it out. Maybe Kickstarter is the answer to this sort of thing, but I'll have to let someone else prove it to me.
Problem #4: It doesn't matter anyway. You're using software, and you're screwed.
Some of the best days of the crypto dream were spent fighting government agencies that legitimately believed that crypto software would render them powerless. We generally pat ourselves on the back for 'winning' this fight (although in point of fact, export regulations still exist). But it's more accurate to say that governments decided to walk away.

With hindsight it's pretty obvious that they got the better end of the deal. It's now legal to obtain strong crypto software, but the proportion of (non-criminal) people who actually do this is quite small. Worse, governments have a trump card that can circumvent the best cryptographic algorithm. No, it's not a giant machine that can crack AES. It's the fact that you're implementing the damned thing in software. And software vulnerabilities will overcome all but the most paranoid users, provided that the number of people worth tracking is small enough.

Arvind points this out in his talk, and refers to a wonderful talk by Jonathan Zittrain called 'The End of Crypto' -- in which Jonathan points out just how serious the problem is. Moreover, he notes that we're increasingly losing control of our devices (thanks to the walled garden model), and argues that such control is a pre-condition for secure communications. This may be true, but let me play devil's advocate: the following chart shows a price list for 0days in commercial software. You tell me which ones the government has the hardest time breaking into.

Estimated price list for 0-days in various software products. (Source: Forbes)

Whatever the details, it seems increasingly unlikely that we're going to live the dream while using the software we use today. And sadly nobody seems to have much of an answer for that.
Problem #5: The whole premise of this post is wrong -- the dream is alive!
Of course, another possibility is that the whole concept is just mistaken. Maybe the dream did arrive and we were all just looking the other way.

Sure, GPG adoption may be negligible, and yes, most crypto products are a disaster. Yet with a few clicks I can get on a user-friendly (and secure!) anonymous communications network, where my web connections will be routed via an onion of encrypted tunnels to a machine on the other side of the planet. Once there I can pay for things using a pseudonymous electronic cash service that bases its currency on nothing more than the price of a GPU.

If secure communications is what I'm after, I can communicate through OTR or RedPhone or one of a dozen encrypted chat programs that've recently arrived on the scene. And as long as I'm not stupid, there's surprisingly little that anyone can do about any of it.

In conclusion

This has been a very non-technical post, and that's ok -- you don't have to get deeply technical in order to answer this particular question. (In fact, this is one place where I slightly disagree with Arvind, who also brings up the efficiency of certain advanced technologies like Secure Multiparty Computation. I truly don't think this is a story about efficiency, because we have lots of efficient privacy protocols, and people still don't use them.)

What I do know is that there's so much we can do now, and there are so many promising technologies that have now reached maturity and are begging to be deployed. These include better techniques for anonymizing networks, adding real privacy to currencies like Bitcoin, and providing user authentication that actually works. The crypto dream can still live. Maybe all we need is a new generation of people to dream it.

Tuesday, October 9, 2012

So you want to use an alternative cipher...

Not this cipher.
Once in a while I run into discussions that hinge on the following dubious proposition: that AES is bad and we need to replace it. These discussions always make me leery, since they begin with facts not in evidence, and rarely inspire any confidence that the solution is going to be any better than the 'problem'.

In fact, this whole point of view is so rarefied that I've debated whether to even write this post, since my opinion is that AES is the last place your system is going to break down -- and that you should be focusing your attention on fixing all the other broken things first.

Moreover, the simple advice on AES mirrors the ancient wisdom about IBM: nobody ever got fired for choosing it. Not only is AES the NIST standard (specified in FIPS 197 and approved in NSA's Suite B), but there are hundreds of solid implementations to choose from. If that's not enough for you, many processors now support AES operations natively, meaning that your application can offload most of the work to hardware without the help of an expensive co-processor.

So why not just stick with AES? People who have these discussions generally give a variety of reasons, some of which are more valid than others. First, there's what I call the 'slight paranoia' viewpoint, which holds that AES has been around for a long time and could (soon/someday) fail. In just the last few years we've seen a few impractical attacks on the construction, which could be the beginning of a trend. Or maybe not.

The second (less paranoid) objection is that AES is somewhat troublesome to implement in software. To make it run fast you have to expand the key and pre-compute a series of tables -- all of which increase key setup time and potentially make you vulnerable to cache timing attacks. Good implementations take this into account, but even the best ones aren't perfect. And in a few cases your performance constraints are so tight that AES just isn't fast enough.

Now I'm not saying that any of these (except possibly for the last reason) are good reasons to ditch AES. In fact, ditching AES would be the opposite of my recommendation. But let's say that you've already made the decision to explore more recent, modern alternatives. What are they? Should you trust them? And most importantly: what will they buy you?

Salsa20

Based on an informal (and totally unscientific) poll, the consensus among advanced AES-switchers is that Salsa20 has a lot going for it. This is mostly due to Salsa20's performance characteristics, but also because people are growing increasingly confident in its security.

Now, just for the record, Salsa20 is a stream cipher, while AES is a block cipher. This distinction is important: stream ciphers produce a string of pseudo-random output bits which are XORed with the message to be encrypted. Block ciphers can be configured to do the same thing (e.g., by running them in CTR or OFB mode), but they can also process plaintext blocks in other ways.

One important difference -- and a reason implementers have historically preferred block ciphers -- is that many block cipher modes allow you to randomly access just a portion of an encrypted message, without wasting time decrypting the whole thing. A second advantage is that block ciphers can be used to construct both encryption and message authentication (MACs), which makes them a wonderful building block for constructing authenticated encryption modes.

Salsa20 takes care of the first issue by providing a means to randomly access any block of the generated keystream. Each invocation of the Salsa20 keystream generator takes a key, a nonce (serving as an IV), and a block position in the stream. It then outputs the 512-bit block corresponding to that position. This makes it easy to, for example, seek to the last block of a multi-gigabyte file.
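
To make that interface concrete, here's a compact pure-Python sketch of the Salsa20 block function, written from my reading of Bernstein's spec (a vetted library is the right choice in practice; this is purely for illustration). The thing to notice is the block_index argument: keystream block i is computed directly, without generating blocks 0 through i-1 first.

    import struct

    def _rotl(v, n):
        return ((v << n) & 0xffffffff) | (v >> (32 - n))

    def _quarter(x, a, b, c, d):
        x[b] ^= _rotl((x[a] + x[d]) & 0xffffffff, 7)
        x[c] ^= _rotl((x[b] + x[a]) & 0xffffffff, 9)
        x[d] ^= _rotl((x[c] + x[b]) & 0xffffffff, 13)
        x[a] ^= _rotl((x[d] + x[c]) & 0xffffffff, 18)

    def salsa20_block(key, nonce, block_index, rounds=20):
        """64-byte keystream block at block_index (rounds=12 for Salsa20/12)."""
        c = struct.unpack('<4I', b'expand 32-byte k')   # the famous constants
        k = struct.unpack('<8I', key)                   # 32-byte key
        n = struct.unpack('<2I', nonce)                 # 8-byte nonce
        t = struct.unpack('<2I', struct.pack('<Q', block_index))
        init = [c[0], k[0], k[1], k[2],
                k[3], c[1], n[0], n[1],
                t[0], t[1], c[2], k[4],
                k[5], k[6], k[7], c[3]]
        x = list(init)
        for _ in range(rounds // 2):
            _quarter(x, 0, 4, 8, 12); _quarter(x, 5, 9, 13, 1)    # column round
            _quarter(x, 10, 14, 2, 6); _quarter(x, 15, 3, 7, 11)
            _quarter(x, 0, 1, 2, 3); _quarter(x, 5, 6, 7, 4)      # row round
            _quarter(x, 10, 11, 8, 9); _quarter(x, 15, 12, 13, 14)
        return struct.pack('<16I', *((a + b) & 0xffffffff for a, b in zip(x, init)))

    def decrypt_block(key, nonce, ct_block, i):
        # Random access: XOR the i-th ciphertext block with keystream block i.
        return bytes(a ^ b for a, b in zip(ct_block, salsa20_block(key, nonce, i)))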

It's also possible to use Salsa20 in an authenticated encryption mode -- but it's not trivial. And to do this the cipher must be composed with a polynomial-based MAC like Dan Bernstein's poly1305. I won't lie and say that this usage is standardized and well-defined -- certainly not in the way that, say, EAX or GCM modes are with AES.

On the positive side, Salsa20 is fast in software. The key setup time is negligible, and it has one of the lowest cycles-per-byte counts of any reputable cipher. Current figures show Salsa20/12 to be 2-3x as fast as a heavily optimized AES-CTR, and maybe even faster for the implementation you would actually use (of course, hardware implementations of AES could make up for a lot of this advantage).

The basic limitation of a cipher like Salsa20 is the same as with any non-standard cipher: no matter how good the design, you're using an alternative. Alternatives don't get the same attention that standards do. To its credit, Salsa20 has received a decent amount of academic cryptanalysis, most of it positive, but still nothing compared to AES.

Threefish

Threefish is another recent contribution to the 'alternative ciphers' canon, and hails from Schneier, Ferguson, Lucks, Whiting, Bellare, Kohno, Callas, and Walker. (With a list of authors this long, how could it not be excellent?)

Threefish's distinction is that it's one of a relatively small number of ciphers that recently passed through (most of) a NIST competition. The dubious aspect is that the competition wasn't a competition for designing a cipher. Rather, Threefish was submitted as a building block for the SHA3 candidate Skein, which made it to the final round of the competition but was ultimately passed over in favor of Keccak.

Threefish is a wide-block cipher that can be configured to operate on 256, 512 or 1024-bit blocks. Right off the bat this seems useful for applications like disk encryption, where ciphers are typically used to encrypt large blocks of material (sometimes in 'wide block cipher modes' like CMC or EME). Beyond that, it seems like a nice choice for both security and performance reasons.

While Threefish has seen some cryptanalysis, this is still relatively limited (a few major results). None of these results are 'successful', which is noteworthy and confidence-inspiring. But even with this work, it's hard to say where the cipher stands in relation to AES.

The AES could-have-beens: Twofish, Serpent, etc.

AES seems like it's always been AES, so it's easy to forget that just a few years ago it was called Rijndael and there were four other finalists that could just as easily have taken the title. Those finalists all received a lot of cryptanalytic attention, and none of them went away when AES was selected.

The two ciphers I occasionally run into are Bruce Schneier's Twofish and Anderson/Biham/Knudsen's Serpent. Both have decent performance in software, and both have stood up relatively well to cryptanalysis. On the flipside, neither of the two ciphers has received very much in the way of analysis since the AES competition ended (Twofish's most recent significant result was in 2000).

Any of these ciphers could be a worthy alternative to AES if you were desperate, but I wouldn't go out of my way to use one.

In conclusion

I realize none of the above actually tells you which AES alternative to use, and that's mostly because I don't want to legitimize the question. Unless your adversary is the NSA or you have some serious performance constraints that AES can't satisfy, my recommendation is to stick with AES -- it's the one standard cipher that nobody gets fired for using.

But if you are in the latter category (meaning, you have performance constraints) I'm pretty impressed by Salsa20/12's performance in software, provided you have a good strategy for providing authentication. Even better, while Salsa20 is not standardized by NIST, Salsa20/12 was selected for the software portfolio of the ECRYPT eSTREAM project. That endorsement could lead to increasing adoption of this cipher.

If your concern is with the security of AES, I have less advice to give you. The beautiful thing about AES is that it's so widely studied and used that we'll almost certainly have plenty of notice should the cipher really start to fail. That is, provided the people attacking it are doing so in the public, academic literature. (If your enemy is the NSA I'm just not sure what to tell you. Just run.)

That still leaves a class of folks who worry about encrypting things for the long-haul. For these folks the best I can propose is to securely combine AES with another well-studied cipher like Salsa20, Threefish or one of the AES finalists. This will cost you -- and there's no guarantee that both ciphers will stand over the long term. But the probability of two significant breaks seems lower than the probability of one.
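
For what it's worth, here's a minimal sketch of one such composition using the PyCryptodome library: Salsa20 layered over AES-CTR, with independent keys and nonces. This shows the cascade idea only -- there's no authentication here, and you should think hard (and read up on cascade constructions) before deploying anything like it.

    # Cascade encryption: AES-CTR inside, Salsa20 outside, independent keys.
    # Confidentiality survives as long as at least one cipher holds up.
    from Crypto.Cipher import AES, Salsa20
    from Crypto.Random import get_random_bytes

    def cascade_encrypt(plaintext, k_aes, k_salsa):
        n_aes = get_random_bytes(8)
        inner = AES.new(k_aes, AES.MODE_CTR, nonce=n_aes).encrypt(plaintext)
        n_salsa = get_random_bytes(8)
        outer = Salsa20.new(key=k_salsa, nonce=n_salsa).encrypt(inner)
        return n_aes + n_salsa + outer

    def cascade_decrypt(blob, k_aes, k_salsa):
        n_aes, n_salsa, ct = blob[:8], blob[8:16], blob[16:]
        inner = Salsa20.new(key=k_salsa, nonce=n_salsa).decrypt(ct)
        return AES.new(k_aes, AES.MODE_CTR, nonce=n_aes).decrypt(inner)

    k1, k2 = get_random_bytes(32), get_random_bytes(32)
    msg = b'encrypt things for the long haul'
    assert cascade_decrypt(cascade_encrypt(msg, k1, k2), k1, k2) == msg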

Wednesday, October 3, 2012

SHA3 is over. Long live SHA3!

The Keccak 'sponge'
(Wikipedia/CC)
The news today is all about hash functions, and specifically, the new hash function we're all going to be calling SHA3. After six long years NIST has finally announced the winner of the SHA3 competition: it's Keccak.

This isn't a total shock: the scuttlebutt is that Keccak got most of the 'hums' in NIST's informal 'hum if you like this hash function' poll at the SHA3 conference this year, followed closely by BLAKE. (Yes, seriously, this is how we do things.) Obviously hums aren't the only thing NIST uses to pick a standard, but they do give us an idea of where the community stands. Moreover, Keccak sports a radically different design than SHA2 and its competitors -- something that appears to have given it a strong competitive advantage.

Now please let me stipulate (again!) that I'm no expert in hash function design. And sadly, many of the leading hash function experts had 'losing' submissions of their own, which means they're still in the first stages of the grieving process -- the one where they say nice things about the winner while secretly plotting its demise. (I'm only kidding. Sort of.)

So while I'm no expert in the deep internals of the design, I can sum up a few things that others -- both expert and non-expert -- are saying about the process and the decision.

To begin with, nobody really thinks that NIST blew it here. All of the finalists were strong designs, and each provides a substantial security margin over the previous state of the art. Each was evaluated under a base standard of 'indifferentiability from a random oracle', and each possesses a security proof to that effect (for more on what this means, see this post.) This is a great turn for NIST, which has been iffy on provable security in the past. Concretely, this means no more length-extension attacks as in SHA1/2, though admittedly some SHA2 variants already satisfy this requirement.
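
To see why the length-extension point matters in practice, consider the classic 'prefix MAC' mistake: tagging a message as H(key || message). With a Merkle-Damgard hash like SHA-256, an attacker who sees the tag can compute a valid tag for message-plus-suffix without ever knowing the key; a sponge doesn't expose its internal state that way. A quick sketch using Python's hashlib (which these days ships SHA3):

    import hashlib, hmac

    key, msg = b'secret-key', b'user=bob&role=user'

    # Risky with SHA-256: given this tag and the key length, an attacker
    # can derive a valid tag for msg + padding + b'&role=admin'.
    tag_sha2 = hashlib.sha256(key + msg).hexdigest()

    # The same pattern with a sponge-based hash is not length-extendable.
    tag_sha3 = hashlib.sha3_256(key + msg).hexdigest()

    # And of course the right fix for SHA2 has existed all along: HMAC.
    tag_hmac = hmac.new(key, msg, hashlib.sha256).hexdigest()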

Moreover, I hear nobody complaining about the security of Keccak. The major gripes appear to be centered around its performance in software, a concern that really does have some grounding. But more on that in a second.

So what do we know about Keccak/SHA3? This list is by no means comprehensive (nor does it represent my own thoughts alone), but it will hopefully provide a few insights:
  • The name 'Keccak' comes from 'Kecak', a Balinese dance. (h/t JP Aumasson)
  • Keccak differs from the other SHA3 finalists in that it uses a 'sponge' construction. Where other designs rely on a 'compression function' which is extended to larger domains, Keccak uses a non-compressing function to absorb and then 'squeeze out' a short digest. It's unclear whether this design is radically better or worse than the existing approaches. But it is different. NIST clearly felt that in this case, different is better.
  • Keccak doesn't blaze in software. In fact, Keccak/512 is estimated to run at as little as half the speed of SHA-512 in software. This fantastic table of tables sums up the results across all of the finalists; needless to say, the numbers are highly processor-dependent. I'm already seeing the first signs of pushback from security developers on mailing lists.
  • Keccak is quite speedy on hardware, something NIST cited in its decision. It also looks like it will be very GPU-friendly.
  • 'Attacks' on Keccak include a full distinguisher that applies to all 24 rounds of the hash function. But don't worry, the numbers in this attack are completely ridiculous, and this is in no way an actual attack.
  • Keccak's round function contains relatively few non-linear bit operations and has algebraic degree 2, which facilitates some of the theoretical 'zero-sum' attacks on reduced-round Keccak (due to Aumasson and Meier).
  • No, it is not pronounced 'Ketchup', despite my best attempts to convince people of this. (Though there actually is a reduced-round variant called Keccup.)
My contribution to the SHA3 competition.
The full Keccak specification (including pseudocode and 'readable' C code) can be found here. A series of implementations exists for the SUPERCOP project. Unfortunately I know of no public or commercial implementations in the major cryptographic libraries. I expect that to change quickly, and I also expect a whole bunch of further optimizations -- particularly on the GPU side.

It's not clear how the adoption of SHA3 will proceed. Right now there are no significant clouds on the horizon for the SHA2 series, and for the moment it seems like many people will continue to use those -- at least, those who aren't still using SHA1 or (god help us) MD5. I'm not sure what my recommendation is on switching, except that nobody's in a rush. If you absolutely must worry about long-term security, my recommendation is to consider using SHA2 and SHA3 in a secure composition. But I doubt anyone reading this blog needs to be that paranoid.
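
If you do want that belt-and-suspenders option, the simplest well-understood combiner for collision resistance is concatenation: the combined digest is collision resistant as long as either underlying hash remains so. A minimal sketch in Python:

    import hashlib

    def combined_hash(data: bytes) -> bytes:
        # A collision here requires colliding SHA-256 AND SHA3-256 on the
        # same pair of inputs. The price is a double-length (64-byte) digest.
        return hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()

    print(combined_hash(b'hello world').hex())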

Even if you were rooting for another function, there is one major bright side to the selection of Keccak. Now that the SHA3 competition is behind us, cryptographers will have time to get back to the real task: putting holes in SHA1 and SHA2. I very much look forward to the results.