OpenSSL and NSS are FIPS 140 certified. Is the Internet safe now?

People like standards. The more important something is, the greater the likelihood that someone, somewhere has drawn up a standard for how it should be done. Although this can get out of hand (viz: the EU standard for cucumbers), I do think that standards can serve a useful purpose. This is particularly true when you’re talking about something as complicated (and easy to screw up) as encrypting data on the Internet.

Given all this, you’d expect that I’d be excited, nay thrilled, that two of the major crypto libraries in use today — OpenSSL and Mozilla’s NSS — offer a FIPS-approved mode. Our children can sleep safely at night! And yet, having spent part of the weekend poring over nightmarish NSS code, I feel obliged to offer the following opinion: FIPS compliance, while certainly not a bad thing, doesn’t really have anything to do with the security of a cryptographic library. Certainly not a library like OpenSSL or NSS.

For those who have no clue what FIPS is, let me start from the beginning.

If there’s one organization that really likes standards, it’s the US Federal government. For IT alone the US has an entire fleet of standards known collectively as FIPS — the Federal Information Processing Standards. These deal with lots of boring stuff like the standard codes to be used when referencing US states. More interestingly, FIPS also includes the standards for designing cryptographic modules. These are laid out in a series of documents headlined by FIPS 140-2 (obligatory warning: do not drive or operate heavy machinery while reading FIPS publications).

FIPS 140-2 outlines a set of requirements that must be adhered to by any US-government certified cryptographic module. This applies to products purchased by Federal agencies, and implicitly to outside systems that exchange sensitive-but-unclassified data with Federal computers (think: banks). To give these standards some teeth, the US NIST and Canada’s CSEC jointly run something called the Cryptographic Module Validation Program (CMVP). Only serious cryptographic modules survive CMVP, at least in theory. So NSS and OpenSSL’s validation is a Big Deal.

Like I said, that’s the theory.

To understand why I’m not so excited about FIPS, you have to understand the what, who and why. Specifically, what the heck a ‘cryptographic module’ is, who does the validation, and why we’re validating these modules in the first place.

What is a cryptographic module?

[Image: A 1970s-era hardware voice encryption module.]

To understand the FIPS viewpoint on this, you need to hark back to the early 90s when the FIPS 140-1 standard was first being hashed out. You also need to pretend that you worked for the US DoD during the preceding decades. At this point you should have a very clear picture of a cryptographic module in your head. It looks something like the picture above.

Ok, so that’s a Dutch voice encryption module, and it dates from 1973. The point I’m trying to make is that a cryptographic module is a piece of hardware. It’s contained within some kind of metal enclosure. Plaintext data comes in on a physical port, encrypted data comes out of another. Keys are entered by punching them into the front, or alternatively, by inserting some kind of key storage token.

In fact, to achieve “FIPS 140 Level 2” (or higher) certification, you still need hardware. But what does FIPS 140-2 tell us about evaluating pure software cryptographic modules? How do we map these hardware rules onto something like OpenSSL?

Mostly by doing a lot of creative re-interpretation. Module boundaries have to become machine boundaries. Logical boundaries have to be defined around the software itself. The flow of data across these boundaries has to be relentlessly mapped into a hardware metaphor, even if using that metaphor doesn’t provide any tangible security benefit.

(To give a concrete example: FIPS key management at higher levels requires entering only encrypted keys. Neat. But if the module is a piece of software, what’s the point? You might find yourself encrypting keys in one process so that they can be decrypted by another process on the same machine. This is silly. But if that’s how NIST will interpret the standard, then that’s what we’ll do.)
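
To make the silliness concrete, here is a minimal sketch in C using OpenSSL's EVP interface for RFC 3394 AES key wrap. The key values are placeholders I made up for illustration; the point is that the wrapping and the unwrapping both happen on the same host, under the same key-encryption key, so the ceremony adds essentially nothing.

```c
/* Minimal sketch: RFC 3394 AES key wrap / unwrap via OpenSSL's EVP API.
 * The keys below are hypothetical placeholders. Both halves run on the
 * same machine with the same KEK, which is the silliness in question. */
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char kek[32] = {0};       /* key-encryption key (placeholder) */
    unsigned char key[16] = {1};       /* the key being "entered" */
    unsigned char wrapped[16 + 8];     /* key wrap adds an 8-byte header */
    unsigned char unwrapped[16 + 8];
    int len, tmp;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    /* "Encrypt" the key before it crosses the module boundary... */
    EVP_CIPHER_CTX_set_flags(ctx, EVP_CIPHER_CTX_FLAG_WRAP_ALLOW);
    EVP_EncryptInit_ex(ctx, EVP_aes_256_wrap(), NULL, kek, NULL);
    EVP_EncryptUpdate(ctx, wrapped, &len, key, sizeof(key));
    EVP_EncryptFinal_ex(ctx, wrapped + len, &tmp);

    /* ...then decrypt it on the other side of that "boundary", i.e.
     * another process on the very same machine, holding the same KEK. */
    EVP_CIPHER_CTX_reset(ctx);
    EVP_CIPHER_CTX_set_flags(ctx, EVP_CIPHER_CTX_FLAG_WRAP_ALLOW);
    EVP_DecryptInit_ex(ctx, EVP_aes_256_wrap(), NULL, kek, NULL);
    EVP_DecryptUpdate(ctx, unwrapped, &len, wrapped, sizeof(wrapped));
    EVP_DecryptFinal_ex(ctx, unwrapped + len, &tmp);

    printf("key round-trips: %s\n",
           memcmp(key, unwrapped, sizeof(key)) == 0 ? "yes" : "no");
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}
```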

In fairness, a new iteration of the standard — FIPS 140-3 — is supposed to make validation of software modules a whole lot clearer. Unfortunately the finalization date is currently listed as TBD. Moreover, when it comes to government standards, sometimes you’re better off with the devil you know.

Who does the validation?

Here’s another thing about CMVP to keep in mind. It’s not a free service offered by the US government to those modules it finds really important.

Yes, final approval of modules is performed by a team at NIST. However, this is only the tip of the iceberg. In order to reach NIST, you first need to retain a private testing laboratory, which will actually do most of the validation. NIST mostly evaluates the results of this process. You have a choice of labs, and yes, they compete on price. I’ve had the good fortune of working with excellent testing labs, but that doesn’t mean that my results are typical.

Why (and what) are we validating?

Finally, we get to the most important question: why are we evaluating the module? Actually, this is more of a what question — what does the evaluation look for, what kind of bugs is it going to turn up, and how much assurance do you really have at the end of the process?

And this is the depressing part. FIPS is not about detailed software evaluation. It’s not code review, at least not the kind of rigorous code review that will find the inevitable (and devastating) software vulnerabilities that your development team missed. It’s certainly not going to detect subtle cryptographic attacks that stem from bad use of padding or the presence of useful timing channels. And it’s definitely not going to detect clever backdoors.

What it is going to tell you is whether you have a solid checklist describing how the system should be used. It’ll tell you whether you’re using some sort of authentication to launch the module. It’ll ensure that you’re using only FIPS-approved algorithms, and only in a way that’s specified by NIST. Finally, validation will tell you whether you’re implementing those algorithms correctly — or more accurately, whether your outputs match a set of test vectors provided by your testing lab, which is mostly, but not entirely, the same thing.
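
To give a flavor of what “matching test vectors” means, here is a minimal known-answer test in C against OpenSSL: hash the string "abc" and compare the output byte-for-byte with the published SHA-256 test vector from FIPS 180. A real validation run involves many such vectors across every approved algorithm, but the structure is the same.

```c
/* A minimal known-answer test (KAT): compare SHA-256("abc") against the
 * published FIPS 180 test vector. Validation testing is, at its core,
 * a large pile of checks like this one. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    static const unsigned char expected[SHA256_DIGEST_LENGTH] = {
        0xba, 0x78, 0x16, 0xbf, 0x8f, 0x01, 0xcf, 0xea,
        0x41, 0x41, 0x40, 0xde, 0x5d, 0xae, 0x22, 0x23,
        0xb0, 0x03, 0x61, 0xa3, 0x96, 0x17, 0x7a, 0x9c,
        0xb4, 0x10, 0xff, 0x61, 0xf2, 0x00, 0x15, 0xad
    };
    unsigned char digest[SHA256_DIGEST_LENGTH];

    SHA256((const unsigned char *)"abc", 3, digest);

    if (memcmp(digest, expected, sizeof(expected)) != 0) {
        fprintf(stderr, "SHA-256 known-answer test FAILED\n");
        return 1;
    }
    printf("SHA-256 known-answer test passed\n");
    return 0;
}
```

Passing a test like this tells you the algorithm produces the right outputs on those inputs. It says nothing about timing channels, error handling, or whether the key management around it makes any sense.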

These are all good things, and they probably serve to lop off the low-hanging fruit — the genuinely dumb bugs that lead to obvious compromise. (As a side benefit, FIPS prohibits the use of RC4.) Moreover, the process of getting through CMVP is so time-consuming that it probably increases the likelihood that your engineers will find bugs in the code along the way. This is a good thing, but it’s hardly a first-order effect.

So what does this have to do with NSS and OpenSSL?

Good question. What are the problems in these libraries, and how is FIPS supposed to solve them?

The big problem that I see with both NSS and OpenSSL is not that they use bad algorithms, or that they have bad security policies. Rather, it’s that their code is uncommented and hard to read, which occasionally hides serious (but extremely subtle) vulnerabilities. Moreover, even when the implementation is faithful, they each support legacy standards that contain a lot of questionable nonsense (e.g., obsolete padding schemes) which needs to be hacked around. The above issues would be academic, except that to a surprising extent these two libraries are responsible for securing the Internet.

Even more problematic is that between OpenSSL and NSS we have two separate and competing libraries, in a world with enough interested brains and eyeballs to review at most one. Optimistically.

This is how NSS managed to stay vulnerable to the BEAST attack until this year, despite the fact that OpenSSL had ‘empty fragment’ mitigations in place since 2002. Similarly, it’s how OpenSSL managed all of this. It’s why OpenSSL code often looks like this, and why NSS is no better.
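
For readers who haven’t seen the trick: the ‘empty fragment’ defense sends a zero-length CBC record ahead of each real application-data record, so the predictable IV that BEAST exploits gets consumed by a record containing no attacker-controlled plaintext. Here is a rough sketch of the idea; the connection type and send_record() function are hypothetical stand-ins, not the actual internals of OpenSSL or NSS.

```c
/* Sketch of the "empty fragment" BEAST mitigation. The types and
 * send_record() below are hypothetical stand-ins for a TLS record layer,
 * not real OpenSSL or NSS code. */
#include <stdio.h>
#include <stddef.h>

typedef struct { int is_tls10_cbc; } tls_conn;   /* hypothetical */

static int send_record(tls_conn *c, const unsigned char *buf, size_t len) {
    (void)c; (void)buf;
    printf("sent a %zu-byte application-data record\n", len);
    return 0;                                     /* stand-in for real I/O */
}

/* Before each real record on a TLS 1.0 CBC connection, send an empty one.
 * The empty record consumes the predictable IV, so the attacker-influenced
 * plaintext in the next record is encrypted under an IV the attacker cannot
 * predict, which is exactly what BEAST depends on. */
static int send_app_data(tls_conn *c, const unsigned char *buf, size_t len) {
    if (c->is_tls10_cbc && send_record(c, NULL, 0) != 0)
        return -1;
    return send_record(c, buf, len);
}

int main(void) {
    tls_conn c = { 1 };
    const unsigned char msg[] = "GET / HTTP/1.1";
    return send_app_data(&c, msg, sizeof(msg) - 1);
}
```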

Unfortunately the problems that FIPS evaluation solves are mostly orthogonal to these.

Now, to be fair, nobody in either the OpenSSL project or Mozilla is claiming that FIPS compliance makes these libraries magically secure. I’m sure they not only know better, but they curse FIPS privately behind closed doors because it requires a whole lot of extra code in exchange for a dubious security benefit. Moreover, both libraries are developed by smart people and have done fairly well considering what they’re up against.

Still, there’s a huge need for a real security standard for cryptographic modules, one that takes into account all of the things we know can go wrong. At the very least we could mandate a coding standard with requirements for commenting, indentation, sensible error handling, and variable names that aren’t plain crazy. (Even if a single review doesn’t find the bugs, mandating a coding standard could increase the probability that others do.)

The whole point of this post is to remind readers that FIPS 140-2 is not that standard. Encrypt with caution.
