In memoriam: Tim Hartnell

Last week the students and I went looking for our long-lost GnuRadio USRP in a dusty hardware security lab down the hall. This particular USRP hasn’t been seen in about five years (I suspect it may have been deported with the lab’s previous occupant) so the whole thing was kind of a long shot.

Sometimes the best part of a treasure hunt is what you find along the way. The students didn’t find the USRP, but they did uncover a fog machine and a laser that someone had tucked under a workbench. This kept them happy ’til we got a whiff of the “fog” it was making. I scored something even better: a mint copy of Tim Hartnell’s 1985 masterpiece, the Giant Book of Computer Games.

If you’re just a few years younger than me, you might think Games is a book about games. But of course, it literally is games: dozens of all-caps BASIC listings, printed in a font that was probably old when WarGames was new. Each game sits there on the page, pregnant with potential, waiting for a bored 9-year-old to tap it into his C64 or Apple ][ and hit “RUN”. (Sadly, this could be a long wait.)

Flipping through Games brings back memories. The Chess AI was a bastard, routinely cheating even if you implemented it properly. And you never implemented anything properly, at least not on the first pass. This was part of the fun. Between typos and the fact that Hartnell apparently coded to his own BASIC standard, the first play usually went like this:

WELCOME TO SNARK HUNT
ENTER 1 FOR SNARK, 2 FOR HUNTER
> 2
YOU CHOSE SNARK
?SYNTAX ERROR AT LINE 3980
READY

You learned debugging fast. When that didn’t work, your last, desperate move was simply to delete the offending lines — ’til the program either (a) worked, or (b) got so crazy that you deleted it and loaded Bruce Lee off a cassette. Sometimes you hit the sweet spot between the two: my “Chess” AI would grab control of my pieces Agent Smith-style and send them hurtling towards my undefended King. I never saw this as a bug, though; I just thought it had style.

When I started writing this post I intended to make a broader point about how my experience with Games mirrors the way that modern implementers feel when faced with a mysterious, unjustified cryptographic standard. I think there is a point to be made here, and I’ll make it. Another day.

But when I googled to see what Hartnell is up to, I was saddened to learn that he died all the way back in 1991, only a few years after Games was published. He was only a few years older than I am today. So on reflection, I think I’ll just let this post stand as it is, and I’ll go spend some time with my kid.

Tim, wherever you are, please accept this belated tribute. Your book meant a lot to me.

A brief note on end-of-year giving

I wanted to take a quick break from the technical to bug you about something important.

The end of the year is coming up and no doubt there are some folks thinking about last-minute charitable donations. There are many, many worthy causes you can support. All things being equal, I’d give to my local food bank first, given how much need there is, and how far these institutions can stretch a charitable dollar ($1 at a food bank buys the equivalent of $20 at a retail supermarket).

But if you have something left over I’d strongly recommend that you give to the Electronic Frontier Foundation. In case you haven’t noticed, there’s a lot of crazy stuff going on with technology and the law these days. I recently poked fun at how small the EFF’s budget is, but I meant it with love (and with reason!). They’re fighting a tough uphill battle with minimal resources.

I have a personal reason for supporting the EFF. Back when I was a grad student, some colleagues and I reverse-engineered a commercial device as part of a research project. This is something that security researchers do from time to time, and it’s something we should be able to do. Our goal was to expose flaws in industrial security systems, and hopefully to spur the adoption of better technology. (Note: better technology is now out there, and no, I’m not taking credit. But scrutiny is generally a good thing.)

Anyway, we knew that there were legal obstacles related to this work; we just didn’t realize how significant they’d be. When we first disclosed our findings, there were some… unpleasant phone calls at high levels. The University’s legal counsel politely informed us that in the event of a lawsuit — even a frivolous one — we’d be bearing the expense on our own. This is not a pleasant prospect for a newly-married grad student who’s just signed mortgage papers.

It’s possible that without the EFF we’d have called the whole thing off right then. But the EFF did support us. They took our case (for free!), and worked miracles.

While our story has a happy ending, white hat security research in the US is still a minefield. Sadly this state of affairs doesn’t seem to be improving. The EFF is just about the only group I know of that stands up for security researchers. Even if you’re not a researcher, you probably benefit indirectly from their work.

So please take a minute to donate. It’s tax deductible and some employers will match. If you donate at least $65 and become a member, they’ll even send you an awesome T-shirt (I have one from 1999 that’s still going strong — it’s ugly as sin but damn, the build quality is high.) Again, I’m not saying this should be the only donation you make this year, but it certainly would be a good one.

Liveblogging WWII: December 12, 1941

In mid-December 1941, Driscoll finally sent the British some information on her special method, with only cursory answers to the few questions Denniston had posed four months before. Driscoll again declared her faith in her approach, but GCCS concluded that it “apparently failed.” For one thing, it could not, as she had claimed, overcome the problem of turnover — that is, the tumbling of the Enigma’s wheels before enough letters had been enciphered to identify the wheel being used. And as Turing pointed out to her in a letter in October 1941, her method would take seventy-two thousand hours — more than eight years — to find a solution. Given Driscoll’s obstinacy, Bletchley Park began to have second thoughts about providing more technical information.

As luck would have it, an apparent mix-up in the mail delivery between OP20G and Bletchley Park soon brought the simmering distrust and jealousies between the two agencies flaring to the surface. Denniston’s early October dispatch — a bag of materials containing detailed answers to all but one of Driscoll’s questions — never reached OP20G, the Navy claimed. It didn’t take long for Safford, who feared the British were breaking their promises, to push Leigh Noyes into firing off a series of complaints to the British. Through November and December 1941, angry memos and accusations flew across the Atlantic. Noyes didn’t mince his words: Britain had broken its promise to OP20G; America had no use for the Bombe; and if GCCS cooperated, Driscoll could have her method working on real problems. …

Noyes, who was unaware of the complexities of the mail mix-up, continued to fire off angry memos to the British, some of them clearly threatening. The U.S. Navy, he said, had never agreed to confine itself to Enigma research. It had always intended to be “operational” — that is, intercepting and decoding messages on its own. He told Hastings that all the Navy wanted from the British was the information on the Enigma and the codebooks and Enigma machine that Safford and Driscoll had requested.

Then, belying later histories of GCCS and OP20G relations, Noyes apologized to the British, twice. On December 10 and again on the twelfth, he declared that British explanations and actions since his outbursts had satisfied him and “everyone” at OP20G. The missing package, of course, was found. On December 13, Bletchley received a cryptic yet pointed message from someone in the U.S. Navy Department: “Luke Chapter 15, v 9: And she found it. She calleth together her friends and neighbors saying: Rejoice with me for I have found the piece which we lost”.

— Jim DeBrosse, Colin Burke: The secret in Building 26

Is there an Enigma bubble?

First it was .com stocks, then it was housing. Now it’s WWII-era German Enigma machines. From a recent CNN story:

An Enigma machine which featured in a Hollywood movie about the codebreakers of World War II has smashed auction estimates and sold for a world record price. 

The encoding device sparked a three-way bidding war when it went under the hammer at Christie’s in London Thursday, selling for £133,250 ($208,137) — more than double the upper estimate of £50,000. 

Christie’s said the previous record for an Enigma machine was £67,250, at the same auction house, in November 2010.

I for one would love to own an Enigma. But unless it’ll lead me to a cache of buried Nazi gold I have to draw the line at $100,000. It’s not like the Enigma algorithm is getting better.

But lack of funding doesn’t mean you shouldn’t be creative.

When I worked at AT&T I was told an (apocryphal?) story about a noted cryptographer who couldn’t afford to purchase an Enigma for himself, so he set out instead to blackmail one out of the NSA. Allegedly it took him only four conference submissions before they gave in. The last paper described how to attack a significant cryptosystem with paper and pencil.

This sounds so improbable that I can’t believe it really happened — which means that it probably did. If anyone knows the story and has a source for it, please drop me a line.

Matt Green smackdown watch (Are AEAD modes more vulnerable to side-channel attacks?)

Apropos of my last post, Colin Percival tweets:

I still think encrypt+MAC trumps AEAD because of side channel attacks on block ciphers.

AEAD stands for Authenticated Encryption with Associated Data, and it describes several new modes of operation that perform encryption and authentication all in one go, using a block cipher and a single key, rather than a separate MAC.* In my last post I recommended using one of these things, mostly because it’s simpler.

Colin thinks you’re better off using traditional encryption plus a separate hash-based MAC, e.g., HMAC (using a separate key), rather than one of these fancy new modes. This is because, at least in theory, using a block cipher for authentication could make you more vulnerable to side channel attacks on the block cipher.**
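
To make Colin’s alternative concrete, here’s a minimal sketch of both designs in Python. The library choice (the “cryptography” package plus the standard library), key sizes, and toy messages are all my own assumptions, not anything from the tweet:

import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

msg, aad = b"attack at dawn", b"msg-id: 17"

# AEAD (AES-GCM): one key, encryption and authentication in a single call
k = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
blob = AESGCM(k).encrypt(nonce, msg, aad)  # returns ciphertext || tag

# Encrypt + MAC: AES-CTR under one key, HMAC-SHA256 under a separate key
k_enc, k_mac, iv = os.urandom(16), os.urandom(16), os.urandom(16)
enc = Cipher(algorithms.AES(k_enc), modes.CTR(iv)).encryptor()
ct = enc.update(msg) + enc.finalize()
tag = hmac.new(k_mac, iv + aad + ct, hashlib.sha256).digest()

The two independent keys in the second construction are the whole point: the MAC can fail without the encryption key ever being touched.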

This tweet is followed by some back and forth, which becomes amusing when I fail to read a simple diagram and make a fool of myself. Also, I step on a rake.

My foolishness aside, this point deserves some unpacking. Let’s grant — without comment — the proposition that block ciphers are vulnerable to side-channel analysis and HMAC-SHAx isn’t. Or at the very least, if it is vulnerable, we’re only going to expose the MAC key, and we don’t care about that.***

Let’s also grant that our implementation is free of better vulnerabilities, and that side-channel attacks on block ciphers are actually where our attacker’s going to hit us.

So now we’re talking about three separate questions:

  1. Will using Encrypt + MAC (vs an AEAD) protect you from side-channel attacks on encryption? Trivial answer: no. You’re using the block cipher to encrypt, so who cares what authentication you perform afterwards.

But ok, maybe your side-channel attack requires the device to encrypt known or chosen plaintexts, and it won’t do that for you. So you need something stronger.

  2. Will using Encrypt + MAC (vs an AEAD) save you from side-channel attacks that work against the decryption of arbitrary ciphertexts? Answer: again, probably not. If you can get your hands on some legit ciphertexts (replays, for example), you can probably exercise the block cipher. At least in theory, this should be enough to implement your attack.
  3. Will using Encrypt + MAC (vs an AEAD) save you from side-channel attacks that require decryption of known or chosen ciphertexts? Answer: this may be the case where the means of authentication really matters.

So let’s drill into case (3) a bit. If you’re using Encrypt + MAC with a hash-based MAC (and a separate key), then the block cipher only comes into play for legitimate messages. Your decryption process simply terminates when it encounters an invalid MAC. This should prevent you from submitting chosen or mauled ciphertexts — you’re limited to whatever legit ciphertexts you can get your hands on.

On the other hand, if you’re using an AEAD mode of operation — which typically uses a single key for both authentication and encryption — then technically your block cipher (and key) come into play for every ciphertext received, even the invalid ones.
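
Here’s a rough sketch of that gating difference on the Encrypt + MAC side (the function name and framing are mine). A forged ciphertext dies at the HMAC check, so the block cipher key is never exercised on attacker-controlled input:

import hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def etm_decrypt(enc_key, mac_key, iv, ct, tag):
    expect = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        raise ValueError("bad MAC")         # AES never invoked on a forgery
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    return dec.update(ct) + dec.finalize()  # AES runs only for valid ciphertexts

A single-key AEAD decryption has no such firewall, though how much of the cipher actually gets exercised before rejection depends on the mode.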

Alright, let’s dig into the modes a bit to see what kinds of encipherment/decipherment actually happen when you submit a ciphertext (along with its IV and associated data) to be decrypted:

  • GCM mode: at the very minimum, the decryptor will encipher the supplied IV. Technically, this is the only encipherment that needs to happen in order to check that the authentication tag is valid.**** If it’s not valid, GCM decryption can reject the ciphertext before going further (there’s a sketch of this after the list below).

Since the IV is typically adversarially-selected (in part), this gives the adversary a chance to exercise the encipherment mode of the cipher on a single partially-chosen block. One block doesn’t sound bad — however, he may be able to submit many such chosen ciphertexts.

  • CCM mode: CCM is a combination of CTR-mode encryption with CBC-MAC. Since the MAC is computed over the plaintext, the CCM decryptor can’t verify (and hence reject a bad ciphertext) until after he’s completely decrypted it. Decryption is actually encipherment of a set of known counter values, the first of which is (partly) chosen by the adversary.
  • OCB mode: Difficult to say. It’s the same as CCM, as far as decryption before MAC testing. However, this is complicated by the fact that the cipher is ‘tweaked’ in a DES-X-type construction. And honestly, nobody uses it anyway.
  • EAX mode: the good news is that the MAC is computed on the ciphertext, so the decryptor shouldn’t decrypt the ciphertext unless the MAC is valid. The bad news is that the MAC is a CBC-style MAC (OMAC), using the same key as for decryption, so authentication will encipher at least some chosen values.
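
To pin down the GCM case, here’s a rough sketch of the tag check, following NIST SP 800-38D, with pycryptodome supplying the raw block cipher. This is toy code of my own, not anything official, but it shows the structure: H = E_K(0) is message-independent and precomputable (this is what the **** footnote below is getting at), leaving exactly one fresh AES call per submitted ciphertext — the encipherment of J0, a block derived from the attacker-supplied IV:

import hmac
from Crypto.Cipher import AES

def gf128_mul(x, y):
    # Multiplication in GF(2^128) using GCM's bit ordering (SP 800-38D, sec. 6.3)
    R = 0xE1000000000000000000000000000000
    z, v = 0, x
    for i in range(128):
        if (y >> (127 - i)) & 1:
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z

def ghash(h, aad, ct):
    # GHASH over zero-padded AAD || ciphertext || 64-bit bit lengths
    pad = lambda b: b + b"\x00" * (-len(b) % 16)
    data = pad(aad) + pad(ct) + (8 * len(aad)).to_bytes(8, "big") + (8 * len(ct)).to_bytes(8, "big")
    hk, y = int.from_bytes(h, "big"), 0
    for i in range(0, len(data), 16):
        y = gf128_mul(y ^ int.from_bytes(data[i:i + 16], "big"), hk)
    return y.to_bytes(16, "big")

def gcm_tag_check(key, iv12, aad, ct, tag):
    ecb = AES.new(key, AES.MODE_ECB)
    h = ecb.encrypt(b"\x00" * 16)       # precomputable, message-independent
    j0 = iv12 + b"\x00\x00\x00\x01"     # 96-bit IV case only
    ek_j0 = ecb.encrypt(j0)             # the single per-message encipherment
    expect = bytes(a ^ b for a, b in zip(ek_j0, ghash(h, aad, ct)))
    return hmac.compare_digest(expect, tag)

# Sanity check against pycryptodome's own GCM implementation
key, iv = b"\x01" * 16, b"\x02" * 12
gcm = AES.new(key, AES.MODE_GCM, nonce=iv)
gcm.update(b"header")
ct, tag = gcm.encrypt_and_digest(b"sixteen byte msg")
assert gcm_tag_check(key, iv, b"header", ct, tag)

If the tag check fails, decryption can stop right there; the attacker got one partially-chosen block (J0) through the cipher, and nothing more.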

So where are we? Pretty far down the rabbit hole.

I’m going to go with the following: if you’re deeply afraid of side-channel attacks on your block cipher, you might feel marginally better with Encrypt + MAC, provided your application is definitely not vulnerable to cases (1) and (2) above and you’re positive that HMAC-SHAx isn’t vulnerable to side-channel attacks. Otherwise I’d use an AEAD just to make my life simpler.

But I’m not passing judgement on this. It’s a new perspective to me, and I’m genuinely curious to see what others have to say. Can people recommend any practical attacks, papers, opinions on this subject?

Notes:

* Includes GCM, CWC, OCB, EAX and CCM modes.

** Attacks like the recent attack on the Mifare DESFire, or timing/cache timing attacks on AES.

*** Of course, if you do expose the MAC key through one side-channel attack, then you might be able to do something more sophisticated against the encryption algorithm. But my head is already hurting too much.

**** Technically, checking GHASH also requires you to encipher the 0 message under the cipher key, but that can be done beforehand. It doesn’t need to happen each time you decrypt.

Academic vs. commercial cryptographers

Luther Martin has a post on the difference between academic and commercial cryptographers. You should read the whole thing, but I wanted to add my $.02 on this part:

I don’t have any firsthand experience with this, but I’ve heard stories of how people in the academic world who try to approach cryptography from the point of view of commercial cryptographers also encounter problems. The places where they work typically put more value on inventing new things, so practical implementations are often considered not as good as less practical, yet new, inventions.

I do have some firsthand experience with this. And Luther’s basically right.

I’m fortunate to have a foot in both the commercial and academic worlds. On the one hand, this means that I get to spend my days working with real products, which is fascinating because, well, it’s fascinating. And it’s relevant.

Unfortunately, from a technological point of view, commercial crypto doesn’t exactly set the world on fire. Once in a while you get to work with a company which is doing something interesting, like Voltage or PGP. But for the most part you’re playing with the same basic set of tools.

Therefore, when I have a chance to do research, I tend to gravitate to the purely academic. This includes protocols that enhance user privacy — stuff like this.

I will cheerfully admit that there’s about a 1% chance that any of my academic work will be deployed in this decade. And I’m ok with that! I enjoy solving problems, and I like that in crypto research, at least, we’re not slaves to the immediate practical.

But maybe as academics, we take this too far.

I advise some grad students, and one of my sad duties is to inculcate them with this understanding: you can do whatever you want in your career, but if you want to get published, the absolute worst thing is to be too practical. Don’t kill yourself implementing some cryptosystem that’s practical and deployable, unless there’s something extremely sexy and new in it. And if that’s the case, try not to waste all that time implementing it in the first place! The reviewers (mostly) don’t care.

This is problematic, since in my opinion there’s a huge gap between commercial work and academic crypto. This includes a big category of technologies that we need in order to make (secure) crypto easier to deploy. But none of the incentives are there to support this kind of research.

Despite this, I’m trying to shift some of my work in that direction. This means a whole lot of time-consuming work writing tools like this one. Building this kind of tool is a prerequisite to doing real research. Unfortunately it requires a lot of scut work, and that’s not going to get anyone a ton of sexy research publications. Still, someone needs to do it.

I’m not really sure what to do about this, and I sure hope it changes at some point.

Human error is something to be engineered around, not lamented

Random thought of the day, apropos of this comment by Jon Callas:

We know that the attack against EMC/RSA and SecureID was done with a vuln in a Flash attachment embedded in an Excel spreadsheet. According to the best news I have heard, the Patient Zero of that attack had had the infected file identified as bad! They pulled it out of the spam folder and opened it anyway. That attack happened because of a security failure on the device that sits between the keyboard and chair, not for any technology of any sort.

Quite frankly, if this is what qualifies as human error in a security system, then we’re all in deep trouble. We’re stuck with it. We’re born to it.

I’ll assume one of two things happened here:

  1. An AV scanning system identified a known signature inside of an attachment, recognized that this could be an exploit, and responded to this very serious issue by moving the file into the SPAM folder, where it joined many other legitimate messages that were improperly marked as spam.
  2. A spam filter noticed something funny about a header, and moved the file into the SPAM folder, something it probably does eight times per week for no reason at all.

Unless your users are superhuman, the problem here is not the user. It’s the system. If the file legitimately contained a vulnerability, it shouldn’t have been moved into the SPAM folder, where it could easily be mistaken for a random false positive.

If, on the other hand, the problem was just something to do with the headers, then maybe the user was just doing what was normal — pulling a probable false positive out of their spam folder, just like they did every day.

People are not superhuman. They react to the inputs you give them: GIGO applies. If security systems give people crap inputs, then they’ll make crap decisions. Fixing this problem is our job. We don’t get to complain every time a user does something perfectly understandable in response to bad data that we (security system designers) give them.

And of course, this leaves aside the basic fact that the master seed was available to this attack in the first place, something that boggles the mind… But I guess that’s all been said.

Non-governmental crypto attacks

Over on Web 1.0, Steve Bellovin is asking an interesting question:

Does anyone know of any (verifiable) examples of non-government enemies exploiting flaws in cryptography?  I’m looking for real-world attacks on short key lengths, bad ciphers, faulty protocols, etc., by parties other than governments and militaries.  I’m not interested in academic attacks — I want to be able to give real-world advice — nor am I looking for yet another long thread on the evils and frailties of PKI.

The responses vary from the useful to the not-so-useful, occasionally punctuated by an all-out flamewar — pretty much par for the course in these things.

Here are a few of the responses that sound pretty reasonable. They’re (mostly) not mine, and I’ve tried to give credit where it’s due:

  1. Cases of breached databases where the passwords were hashed and maybe salted, but with an insufficient work factor enabling dictionary attacks.*
  2. NTLMv1/MSCHAPv1 dictionary attacks.*
  3. NTLMv2/MSCHAPv2 credentials forwarding/reflection attacks.*
  4. The fail0verflow break of poorly-nonced ECDSA as used in the Sony PlayStation 3.*
  5. DeCSS.*
  6. Various AACS reverse-engineering efforts.
  7. The HDCP master key leak.*
  8. Various attacks on pay satellite TV services.****
  9. GSM decryption, which seems to have gone beyond the academic and into commercial products.
  10. Factoring of the Texas Instruments 512-bit firmware signing key for calculators, and Elcomsoft’s factoring of the Quicken backup key.**
  11. Key recovery in WEP.
  12. Exploits on game consoles: the original XBox,*** Wii software signing.

There’s also some debate about recent claims that 512-bit RSA certificate signing keys were factored and used to sign malware. As much as I’d like to believe this, the evidence isn’t too solid. Some posters claim that there were also 1024-bit keys used in these attacks. If that’s true, it points more to key theft (aka Steve’s ‘evils and frailties of PKI’).

You’ll also notice I’m leaving lots of stuff off of this list, only because I don’t know of any specific attacks based on it. That would include all the padding oracle attacks of late, the BEAST attack on TLS, bad Debian keys, and so on.

So what’s the takeaway from all of this? Well, it’s complicated. A quick glance at the list is enough to tell us that there are plenty of ‘real people’ (aka non-professional cryptographers) out there with the skills to exploit subtle crypto flaws. That definitely supports my view that proper crypto implementation is important, and that your code will be exploited if you screw it up.

Some people may take comfort from the fact that there’s no crypto ‘Pearl Harbor’ on this list, i.e., the cryptographic equivalent of a Conficker or Stuxnet. I would say: don’t get too cocky. Sure, software security is a mess, and it’s a whole lot easier to set up a dumb fuzzer than to implement sophisticated crypto exploits. (No offense to dumb fuzzers — I’m friends with several.)

But on the other hand, maybe this is misleading. We mostly learn about software 0days from mass malware, which is relatively easy to catch. If sophisticated crypto exploits are being implemented, I would guess that they’re not going into retail worms and trojans — they’re being very quietly applied against high-value targets. Banking systems, for example.

But again, this is just speculation. What do you think?

Notes:

* Marsh Ray.

** Solar Designer.

*** Tom Ritter.

**** commenter “Swiss Made”, below.

The first rule of vulnerability acknowledgement is: there is no vulnerability acknowledgement

Just for fun, today we’re going to look at two recent vulnerability acknowledgements. The first one’s pretty mild; on the Torino scale of vulnerability denial, it rates only about a three:

The research team notified Amazon about the issues last summer, and the company responded by posting a notice to its customers and partners about the problem. “We have received no reports that these vulnerabilities have been actively exploited,” the company wrote at the time. 

But this one from RSA, wow. The charts weren’t made for it. I suggest you read the entire interview, perhaps with a stiff drink to fortify you. I warn you, it only gets worse.

If our customers adopted our best practices, which included hardening their back-end servers, it would now become next to impossible to take advantage of any of the SecurID information that was stolen.

… We gave our customers best practices and remediation steps. We told our customers what to do. And we did it quickly and publicly. If the attackers had wanted to use SecurID, they would want to have done it quietly, effectively and under the covers. The fact that we announced the attack immediately, and the fact that we gave our customers these remediation steps, significantly disadvantaged the attackers from effectively using SecurID information.

… We think because we blew their cover we haven’t seen more evidence [of successful attacks].

I have a paper deadline midweek, so blogging will be light ’til then. Once that’s done, I’ll have something more substantial to say about all this.