Monday, March 19, 2012

Why Antisec matters

A couple of weeks ago the FBI announced the arrest of five members of the hacking group LulzSec. We now know that these arrests were facilitated by 'Anonymous' leader* "Sabu", who, according to court documents, was arrested and 'turned' in June of 2011. He spent the next few months working with the FBI to collect evidence against other members of the group.

This revelation is pretty shocking, if only because Anonymous and Lulz were so productive while under FBI leadership. Their most notable accomplishment during this period was the compromise of intelligence analysis firm Stratfor -- culminating in that firm's (rather embarrassing) email getting strewn across the Internet.

This caps off a fascinating couple of years for our field, and gives us a nice opportunity to take stock. I'm neither a hacker nor a policeman, so I'm not going to spend much time on the why or the how. Instead, the question that interests me is: what impact have Lulz and Anonymous had on security as an industry?

Computer security as a bad joke

To understand where I'm coming from, it helps to give a little personal background. When I first told my mentor that I was planning to go back to grad school for security, he was aghast. This was a terrible idea, he told me. The reality, in his opinion, was that security was nothing like Cryptonomicon. It wasn't a developed field. We were years away from serious, meaningful attacks, let alone real technologies that could deal with them.

This seemed totally wrong to me. After all, wasn't the security industry doing a bazillion dollars of sales every year? Of course people took it seriously. So I politely disregarded his advice and marched off to grad school -- full of piss and vinegar and idealism. All of which lasted until approximately one hour after I arrived on the floor of the RSA trade show. Here I learned that (a) my mentor was a lot smarter than I realized, and (b) idealism doesn't get you far in this industry.

Do you remember the first time you met a famous person, and found out they were nothing like the character you admired? That was RSA for me. Here I learned that all of the things I was studying in grad school, our industry was studying too. And from that knowledge they were producing a concoction that was almost, but not quite, entirely unlike security.

Don't get me wrong, it was a rollicking good time. Vast sums of money changed hands. Boxes were purchased, installed, even occasionally used. Mostly these devices were full of hot air and failed promises, but nobody really cared, because after all: security was kind of a joke anyway. Unless you were a top financial services company or (maybe) the DoD, you only really spent money on it because someone was forcing you to (usually for compliance reasons). And when management is making you spend money, buying glossy products is a very effective way to convince them that you're doing a good job.

Ok, ok, you think I'm exaggerating. Fair enough. So let me prove it to you. Allow me to illustrate my point with a single, successful product, one which I encountered early on in my career. The product that comes to mind is the Whale Communications "e-Gap", which addressed a pressing issue in systems security, namely: the need to put an "air gap" between your sensitive computers and the dangerous Internet.

Now, this used to be done (inexpensively) by simply removing the network cable. Whale's contribution was to point out a major flaw in the old approach: once you 'gap' a computer, it no longer has access to the Internet!

Hence the e-Gap, which consisted of a memory unit and several electronic switches. These switches were configured such that the memory could be connected only to the Internet or to your LAN, but never to both at the same time (seriously, it gives me shivers). When data arrived at one network port, the device would load up with application data, then flip 'safely' to the other network to disgorge its payload. Isolation achieved! Air. Gap.

(A few pedants -- damn them -- will try to tell you that the e-Gap is a very expensive version of an Ethernet cable. Whale had a ready answer to this, full of convincing hokum about TCP headers and bad network stacks. But really, this was all beside the point: it created a freaking air gap around your network! This apparently convinced Microsoft, who later acquired Whale for five times the GDP of Ecuador.)

Now I don't mean to sound too harsh. Not all security was a joke. There were plenty of solid companies doing good work, and many, many dedicated security pros who kept it from all falling apart.

But there are only so many people who actually know about security, and as human beings these people are hard to market. To soak up all that cybersecurity dough you needed a product, and to sell that product you needed marketing and sales. And with nobody actually testing vendors' claims, we eventually wound up with the same situation you get in any computing market: people buying garbage because the booth babes were pretty.**

Lulz, Anonymous and Antisec

I don't remember when I first heard the term 'Antisec', but I do remember what went through my mind at the time: either this is a practical joke, or we'd better harden our servers.

Originally Antisec referred to the 'Antisec manifesto', a document that basically declared war on the computer security industry. The term was too good to be so limited, so LulzSec/Anonymous quickly snarfed it up to refer to their hacking operation (or maybe just part of it, who knows). Wherever the term came from, it basically had one meaning: let's go f*** stuff up on the Internet.

Since (per my explanation above) network security was pretty much a joke at this point, this didn't look like too much of a stretch.

And so a few isolated griefing incidents gradually evolved into serious hacking. It's hard to say where it really got rolling, but to my eyes the first serious casualty of the era was HBGary Federal, who -- to be completely honest -- were kind of asking for it. (Ok, I don't mean that. Nobody deserves to be hacked, but certainly if you're shopping around a plan to 'target' journalists and civilians you'd better have some damned good security.)

In case you're not familiar with the rest of the story, you can get a taste of it here and here. In most cases Lulz/Anonymous simply DDoSed or defaced websites, but in other cases they went after email, user accounts, passwords, credit cards, the whole enchilada. Most of these 'operations' left such a mess that it's hard to say for sure which actually belonged to Anonymous, which were criminal hacks, and which (the most common case) were a little of each.

The bad
So with the background out of the way, let's get down to the real question of this post. What has all of this hacking meant for the security industry?

Well, obviously, one big problem is that it's making us (security folks) look like a bunch of morons. I mean, we've spent the last N years developing secure products and trying to convince people that if they just followed our advice, they'd be safe. Yet when it comes down to it, a bunch of guys on the Internet are walking right through it.

This is because for the most part, networks are built on software, and software is crap. You can't fix software problems by buying boxes, any more than, say, buying cookies will fix your health and diet issues. The real challenge for industry is getting security into the software development process itself -- or, even better, acknowledging that we never will, and finding a better way to do things. But this is expensive, painful, and boring. More to the point, it means you can't outsource your software development to the lowest bidder anymore.

Security folks mostly don't even try to address this. It's just too hard. When I ask my software security friends why their field is so terrible (usually because they're giving me crap about crypto), they basically look at me like I'm from Mars. The classic answer comes from my friend Charlie Miller, who has a pretty firm view of what is, and isn't his responsibility:
I'm not a software developer, I just break software! If they did it right, I'd be out of a job.
So this is a problem. But beyond bad software, there's just a lot of rampant unseriousness in the security industry. The best (recent) example comes from RSA, who apparently forgot that their SecurID product was actually important, and decided to make the master secret database accessible from a single compromised Windows workstation. The result of this ineptitude was a series of no-joking-around breaches of US Defense Contractors.

While this has nothing to do with Anonymous, it goes some of the way to explaining why they've had such an easy time these past two years.

The good
Fortunately there's something of a silver lining to this dark cloud. And that is, for once, people finally seem to be taking security seriously. Sort of. Not enough of them, and maybe not in the ways that matter (i.e., building better consumer products). But at least institutionally there seems to be a push away from the absolute stupid.

There's also been (to my eyes) a renewed interest in data-at-rest encryption, a business that's never really taken off despite its obvious advantages. This doesn't mean that people are buying good encryption products (encrypted hard drives come to mind), but at least there's movement.

To some extent this is because there's finally something to be scared of. Executives can massage data theft incidents, and payment processors can treat breaches as a cost of doing business, but there's one thing that no manager will ever stop worrying about. And that is: having their confidential email uploaded to a convenient, searchable web platform for the whole world to see.

The ugly 

The last point is that Antisec has finally drawn some real attention to the elephant in the room, namely, the fact that corporations are very bad at preventing targeted breaches. And that's important because targeted breaches are happening all the time. Corporations mostly don't know it, or worse, prefer not to admit it.

The 'service' that Antisec has provided to the world is simply their willingness to brag. This gives us a few high-profile incidents that aren't in stealth mode. Take them seriously, since my guess is that for every one of these, there are ten other incidents that we never hear about.***

In Summary

Let me be utterly clear about one thing: none of what I've written above should be taken as an endorsement of Lulz, Anonymous, or the illegal defacement of websites. Among many other activities, Anonymous is accused of griefing the public forums of the Epilepsy Foundation of America in an attempt to cause seizures in its readers. Stay classy, guys.

What I am trying to point out is that something changed a couple of years ago when these groups started operating. It's made a difference. And it will continue to make a difference, provided that firms don't become complacent again.

So in retrospect, was my mentor right about the field of information security? I'd say the jury's still out. Things are moving fast, and they're certainly interesting enough. I guess we'll just have to wait and see where it all goes. In the meantime I can content myself with the fact that I didn't take his alternative advice -- to go study Machine Learning. After all, what in the world was I ever going to do with that?


* Yes, there are no leaders. Blah blah blah.

** I apologize here for being totally rude and politically incorrect. I wish it wasn't true.

*** Of course this is entirely speculation. Caveat Emptor.

Thursday, March 15, 2012

How do Interception Proxies fail?

I have some substantive posts in the works, but mostly this week hasn't been good for blogging. In the meantime, I wanted to point readers to this fascinating talk by researcher Jeff Jarmoc, which I learned about through the Corelan team blog:
SSL/TLS Interception Proxies and Transitive Trust 
SSL/TLS is entrusted with securing many of the communications services we take for granted in our connected world. Threat actors are also aware of the advantages offered by encrypted communication channels, and increasingly utilize encryption for exploit delivery, malware command-and-control and data exfiltration. 
To counter these tactics, organizations are increasingly deploying security controls that intercept end-to-end SSL/TLS channels. Web proxies, DLP systems, specialized threat detection solutions, and network IPSs now offer functionality to intercept, inspect and filter encrypted traffic. Similar functionality is also present in lawful intercept systems and solutions enabling the broad surveillance of encrypted communications by governments. Broadly classified as "SSL/TLS Interception Proxies," these solutions act as man-in-the-middle, violating the end-to-end security guarantees promised by SSL/TLS.
In this presentation we'll explore a phenomenon known as "transitive trust," and explain how deployment of SSL/TLS interception solutions can introduce new vulnerabilities. We detail a collection of new vulnerabilities in widely used interception proxies first discovered by the Dell SecureWorks CTU and responsibly disclosed to the impacted vendors. These vulnerabilities enable attackers to more easily intercept and modify secure communications. In addition, we will introduce a public web site that organizations can use to quickly and easily test for these flaws.
I can't find Jeff's slides or whitepaper at the moment (Update: The slides are now public. There's a lot more to his talk than I cover in this post.) What I can tell from the summary is that Jeff is doing us all an invaluable favor -- essentially, putting his hands deep in the scuzz to find out what's down there.

To make a long story short, the answer is nothing good. The details are in the Corelan post (which, ironically, gives me a TLS error), but to sum it up: Jeff mostly focuses on what interception proxies do when the proxy receives an invalid certificate from a remote website -- for example, one that is expired or revoked.

Normally your browser would be the one dealing with this, but in a MITM scenario you're totally dependent on the proxy. Even if the proxy checks the certificate properly in the first place, they're still in a tough place. They essentially have the following options:
  1. Reject the connection altogether (probably safest)
  2. Give users the option to proceed or abort (no worse than standard TLS)
  3. Ignore the errors and make the connection anyway (run for the hills!) 
Jeff correctly points out that option (3) is the most dangerous, since it opens users up to all kinds of bad TLS connections that would normally ring alarm bells in your browser. Worse, this seems to be the default policy of a number of commercial interception proxies, mostly for unintentional/stupid reasons.
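To make option (3) concrete, here's roughly what the equivalent misconfiguration looks like in plain Python, with the standard `ssl` module standing in for a proxy's upstream connection. This is a sketch for illustration, not any vendor's actual code:

```python
import ssl

# Options (1)/(2): verify the upstream certificate and fail (or ask the
# user) when validation fails. This is the default for a client context.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname is True

# Option (3): the dangerous policy some proxies effectively ship with.
# Expired, revoked, or outright forged upstream certificates are all
# silently accepted, and the end user's browser never hears about it.
permissive = ssl.create_default_context()
permissive.check_hostname = False          # must be disabled first
permissive.verify_mode = ssl.CERT_NONE     # accept anything
```

The scary part is how small the difference is: two attribute assignments separate "no worse than TLS" from "run for the hills."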

Beyond these default settings, it seems to me that there's another question here, namely: how are these devices being configured in the field? My guess is that this depends greatly on whether the "victims" of interception know that their TLS traffic is being monitored. If deployers choose to do interception quietly, it could make a big difference in how a proxy will handle cert issues.

I stress that we're now speculating, but let's pretend that ACME corporation wants to intercept its employees' TLS connections, but doesn't actively want to advertise this fact.* This may restrict their options. For one thing, option (2) is probably out, since this would produce obvious messages on the end-users' web browser. Even option (1) might be iffy, since some sites will simply not work, without any decent explanation. Hence -- in speculation land -- one could imagine some organizations deliberately choosing option (3), on the theory that being quiet is better than being secure.**

This is different from the vulnerabilities that Jeff addresses (which mainly deal with devices' default settings), but it's something I've been wondering about since I first heard of the practice. After all, you've gone to all this trouble to get a publicly-rooted MITM CA, now you're going to advertise that you're using it? Maybe, maybe not.

The world of TLS MITM interception is a fascinating one, and I can't possibly learn enough about it. Hopefully we'll soon learn even more, at least about the nasty CA-facilitated variant of it, as CAs start to respond to Mozilla's recent ultimatum.


* They may notify their employees somehow, but that's different from reminding them on a daily basis. This isn't totally nuts: it's one speculative reason for deploying CA-generated MITM certificates, rather than generating an org certificate and installing it throughout your enterprise.

** I suppose there are workarounds for this case, such as re-writing the MITM cert to include the same class of errors (expiration dates, name errors) but I'd be utterly shocked if anyone uses them.

Friday, March 9, 2012

Surviving a bad RNG

A couple of weeks ago I wrote a long post about random number generation, which I find to be one of the most fascinating subjects in cryptography -- mostly because of how terrible things get when people screw it up.

And oh boy, do people screw it up. Back in 2008 it was Debian, with their 'custom' OpenSSL implementation that could only produce 32,768 possible TLS keys (do you really need more?) In 2012 it's 25,000 factorable TLS public keys, all of which appear to have been generated by embedded devices with crappy RNGs.

When this happens, people get nervous. They start to wonder: am I at risk? And if so, what can I do to protect myself?

Answering this question is easy. Answering it in detail is hard. The easy answer is that if you really believe there's a problem with your RNG, stop reading this blog and go fix it!

The more complicated answer is that many bad things can happen if your RNG breaks down, and some are harder to deal with than others.

In the rest of this post I'm going to talk about this, and give a few potential mitigations. I want to stress that this post is mostly a thought-exercise. Please do not re-engineer OpenSSL around any of the 'advice' I give herein (I'm looking at you, Dan Kaminsky), and if you do follow any of my advice, understand the following:
When it all goes terribly wrong, I'll quietly take down this post and pretend I never wrote it.
In other words, proceed at your own risk. First, some background.

What's a 'bad RNG'?

Before we get started, it's important to understand what it means for an RNG to be broken. In general, failure comes in three or four different flavors, which may or may not share the same root cause:
  1. Predictable output. This usually happens when a generator is seeded with insufficient entropy. The result is that the attacker can actually predict, or at least guess, the exact bits that the RNG will output. This has all kinds of implications, none of them good.
  2. Resetting output. This can occur when a generator repeatedly outputs the same stream of bits, e.g., every time the system restarts. When an attacker deliberately brings about this condition, we refer to it as a Reset Attack, and it's a real concern for devices like smartcards.
  3. Shared output. Sometimes exactly the same bits will appear on two or more different devices. Often the owners won't have any idea this is happening until someone else turns up with their public key! This is almost always caused by some hokey entropy source or hardcoded seed value.
These aren't necessarily distinct problems, and they can easily bleed into one another. For example, a resetting RNG can become a predictable RNG once the adversary observes the first round of outputs. Shared output can become predictable if the attacker gets his hands on another device of the same model. The Debian bug, for example, could be classified into all three categories.

In addition to the problems above, there's also a fourth (potential) issue:
  4. Non-uniform or biased output. It's at least possible that your generator will produce biased output, or strings of repeated characters (the kind of thing that tests like DIEHARD look for). In the worst case, it might just start outputting zero bytes.
The good news is that this is relatively unlikely as long as you're using a standard crypto library. That's because modern systems usually process their collected entropy through a pseudo-random generator (PRG) built from a hash function or a block cipher.

The blessing of a PRG is that it will usually give you nice, statistically-uniform output even when you feed it highly non-uniform seed entropy. This helps to prevent attacks (like this one) which rely on the presence of obvious biases in your nonces/keys/IVs. While this isn't a rule, most common RNG failures seem to be related to bad entropy, not to some surprise failure of the PRG.

Unfortunately, the curse is that a good PRG can hide problems. Since most people only see the output of their PRG (rather than the seed material), it's easy to believe that your RNG is doing a great job, even when it's actually quite sick.* This has real-world implications: many standards (like FIPS-140) perform continuous tests on the final RNG/PRG output (e.g., for repeated symbols). The presence of a decent PRG renders these checks largely useless, since they'll really only detect the most catastrophic (algorithmic) failures.
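To see how a PRG can launder terrible entropy into healthy-looking output, here's a toy sketch of the Debian failure mode. SHA-256 stands in for the PRG, and all the specifics (the victim's seed, the 2-byte encoding) are illustrative:

```python
import hashlib

SEED_BITS = 15  # Debian's OpenSSL effectively seeded from a 15-bit process ID

def prg(seed: int) -> bytes:
    # Stand-in PRG: hash the seed. Each output looks perfectly random
    # in isolation -- statistical tests on the output won't save you.
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()

# A 'device' generates its key material from a weak seed.
victim_key = prg(12345)

# The attacker simply enumerates all 2**15 possible seeds offline.
candidates = {prg(s) for s in range(2 ** SEED_BITS)}
print(victim_key in candidates)   # True: only 32,768 keys exist in the world
print(len(candidates))            # 32768
```

Every individual output would sail through a FIPS-style continuous test; the catastrophe is only visible from outside the PRG, at the seed.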

Key generation

When it comes to generating keys with a bad (P)RNG, the only winning move is not to play. Algorithm aside, if an attacker can predict your 'randomness', they can generate the same key themselves. Game over. Incidentally, this goes for ephemeral keys as well, meaning that protocols like Diffie-Hellman are not secure in the presence of a predictable RNG (on either side).

If you think there's any chance this will happen to you, then either (a) generate your keys on a reliable device, or (b) get yourself a better RNG. If neither option is available, then for god's sake, collect some entropy from the user before you generate keys. Ask them to tap a ditty on a button, or (if a keyboard is available), get a strong, unpredictable passphrase and hash it through PBKDF2 to get a string of pseudo-random bits. This might not save you, but it's probably better than the alternative.
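For the passphrase option, Python's standard library already has what you need. A minimal sketch (the iteration count and salt handling here are illustrative, not a recommendation):

```python
import hashlib, os

def seed_from_passphrase(passphrase: str, salt: bytes, n_bytes: int = 32) -> bytes:
    # Stretch the user's passphrase into pseudo-random seed bytes with
    # PBKDF2-HMAC-SHA256. The iteration count is illustrative; tune it
    # to what your hardware can tolerate.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               100_000, dklen=n_bytes)

# The salt needn't be secret -- even a fixed per-device value beats nothing
# when your RNG is the thing you don't trust.
salt = os.urandom(16)
seed = seed_from_passphrase("correct horse battery staple", salt)
print(len(seed))  # 32
```

The output is only as unpredictable as the passphrase, of course. That's the point of demanding a strong one.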

What's fascinating is that some cryptosystems are more vulnerable to bad or shared randomness than others. The recent batch of factorable RSA keys, for example, appears to be the product of poor entropy on embedded devices. But the keys weren't broken because someone guessed the entropy that was used. Rather, the mere fact that two different devices shared entropy was enough to make both of their keys factorable.

According to Nadia Heninger, this is an artifact of the way that RSA keys are generated. Every RSA public modulus is the product of two primes. Some devices generate one prime, then reseed their RNG (with the time, say) before generating the second. The result is two different moduli, each sharing one prime. Unfortunately, this is the worst thing you can do with an RSA key, since anyone can now compute the GCD and efficiently factor both keys.
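The attack itself is almost embarrassingly simple. Here's a sketch with toy primes standing in for 512-bit ones:

```python
from math import gcd

# Toy primes standing in for real 512-bit ones.
p, q1, q2 = 101, 103, 107      # p is the prime both devices share
n1, n2 = p * q1, p * q2        # the two public moduli

# Factoring either modulus alone is (at real sizes) infeasible.
# Factoring both together takes one GCD:
shared = gcd(n1, n2)
print(shared == p)                              # True
print(n1 // shared == q1, n2 // shared == q2)   # True True
```

At scale you wouldn't even do this pairwise; a batch-GCD pass over millions of harvested public keys is how the 25,000 factorable keys were found.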

Although you're never going to be safe when two devices share entropy, it's arguable that you're better off if they at least generate the same RSA key, rather than two moduli with a single shared prime. One solution is to calculate the second prime as a mathematical function of the first. An even easier fix is just to make sure that you don't reseed between the two primes.

Of course it's not really fair to call these 'solutions'. Either way you're whistling past the graveyard, but at least this might let you whistle a bit longer.

Digital signatures and MACs

There's a widely held misconception that digital signatures must be randomized. This isn't true, but it's understandable that people might think this, since it's a common property of the signatures we actually use. Before we talk about this, let me stipulate that what we're talking about here is the signing operation itself -- I'm premising this discussion on the very important assumption that we have properly-generated keys.

MACs. The good news is that virtually every practical MAC in use today is deterministic. While there are probabilistic MACs, they're rarely used. As long as you're using a standard primitive like HMAC, that bad RNG shouldn't affect your ability to authenticate your messages.

Signatures. The situation with signatures is a bit more complicated. I can't cover all signatures, but let's at least go over the popular ones. For reasons that have never been adequately explored, these are (in no particular order): ECDSA, DSA, RSA-PKCS#1v1.5 and RSA-PSS. Of these four signatures, three are randomized.

The major exception is RSA-PKCS#1v1.5 signature padding, which has no random fields at all. While this means you can give your RNG a rest, it doesn't mean that v1.5 padding is good. It's more accurate to say that the 'heuristically-secure' v1.5 padding scheme remains equally bad whether you have a working RNG or not.

If you're signing with RSA, a much better choice is to use RSA-PSS, since that scheme actually has a reduction to the hardness of the RSA problem. So far so good, but wait a second: doesn't the P in PSS stand for Probabilistic? And indeed, a close look at the PSS description (below) reveals the presence of random salt in every signature.

The good news is that this salt is only an optimization. It allows the designers to obtain a tighter reduction to the RSA problem, but the security proof holds up even if you repeat the salt, or just hardcode it to zero.

The PSS signing algorithm. MGF is constructed from a hash function.
Having dispensed with RSA, we can get down to the serious offenders: DSA and ECDSA.

The problem in a nutshell is that every (EC)DSA signature includes a random nonce value, which must never be repeated. If you ever forget this warning -- i.e., create two signatures (on different messages) using the same nonce -- then anyone can recover your secret key. This is easy both to detect and to compute. You could write a script to troll the Internet for repeated nonces (e.g., in X509 certificates), and then outsource the final calculation to a bright eighth-grader.
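If you don't believe the eighth-grader claim, here's the entire attack sketched over a toy DSA group. Every parameter is tiny and illustrative; the algebra is the real thing:

```python
# Toy DSA group: p = 2q + 1 with q prime, and g = 4 (a square, so it
# has order q mod p). Real DSA uses a ~2048-bit p and ~256-bit q.
p, q, g = 107, 53, 4

x = 17             # the victim's long-term secret key
h1, h2 = 40, 9     # hashes of two *different* messages, reduced mod q

# Sign both messages with the SAME nonce k -- the fatal mistake.
k = 21
r = pow(g, k, p) % q
s1 = pow(k, -1, q) * (h1 + x * r) % q
s2 = pow(k, -1, q) * (h2 + x * r) % q

# The attacker sees only (r, s1) and (r, s2). First recover the nonce:
k_rec = (h1 - h2) * pow((s1 - s2) % q, -1, q) % q
# ...then the secret key falls out of either signature equation:
x_rec = (s1 * k_rec - h1) * pow(r, -1, q) % q
print(k_rec == k, x_rec == x)  # True True
```

Two modular inversions and two multiplications. This is exactly what happened to Sony's PS3 signing key.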

Usually when DSA/ECDSA go wrong, it's because someone simply forgot to generate a random nonce in the first place. This appears to be what happened with the Playstation 3. Obviously, this is stupid and you shouldn't do it. But no matter how careful your implementation, you're always going to be vulnerable if your RNG starts spitting out repeated values. If this happens to you even once, you need to throw away your key and generate a new one.

There are basically two ways to protect yourself:
  • Best: don't use (EC)DSA in the first place. It's a stupid algorithm with no reasonable security proof, and as a special bonus it goes completely pear-shaped in the presence of a bad RNG. Unfortunately, it's also a standard, used in TLS and elsewhere, so you're stuck with it.
  • Second best: Derive your nonces deterministically from the message and some secret data. If done correctly (big if!), this prevents two messages from being signed with the same nonce. In the extreme case, this approach completely eliminates the need for randomness in (EC)DSA signatures.

    There are two published proposals that take this approach. The best is Dan Bernstein's (somewhat complex) EdDSA proposal, which looks like a great replacement for ECDSA. Unfortunately it's a replacement, not a patch, since EdDSA uses different elliptic curves and is therefore not cross-compatible with existing ECDSA implementations.

    Alternatively, Thomas Pornin has a proposal up that simply modifies (EC)DSA by using HMAC to derive the nonces. The best part about Thomas's proposal is that it doesn't break compatibility with existing (EC)DSA implementations. I will caution you, however: while Thomas's work looks reasonable, his proposal is just a draft (and an expired one to boot). Proceed at your own risk.
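To give a flavor of the idea -- and I stress this is NOT the exact construction from either proposal, just the core trick -- a sketch:

```python
import hmac, hashlib

# secp256k1 group order, standing in for whatever curve you're using.
q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def deterministic_nonce(priv_key: int, message: bytes) -> int:
    # Derive the nonce k from the secret key and the message, so that
    # k can only repeat if the (key, message) pair repeats -- in which
    # case the two signatures are identical anyway, revealing nothing.
    ctr = 0
    while True:
        mac = hmac.new(priv_key.to_bytes(32, "big"),
                       message + bytes([ctr]), hashlib.sha256).digest()
        k = int.from_bytes(mac, "big") % q
        if k != 0:          # k = 0 is invalid; re-derive (vanishingly rare)
            return k
        ctr += 1

k1 = deterministic_nonce(17, b"message one")
k2 = deterministic_nonce(17, b"message one")
k3 = deterministic_nonce(17, b"message two")
print(k1 == k2, k1 != k3)  # True True: repeatable, but never shared
```

Note that the "big if!" above lives in details this sketch glosses over (bias from the modular reduction, side channels on the HMAC key); that's why you should use a vetted construction rather than rolling this yourself.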

Encryption

There are various consequences to using a bad RNG for encryption, most of which depend on the scheme you're using. Once again we'll assume that the keys themselves are properly-generated. What's at stake is the encryption itself.

Symmetric encryption. The good news is that symmetric encryption can be done securely with no randomness at all, provided that you have a strong encryption key and the ability to keep state between messages.

An obvious choice is to use CTR mode encryption. Since CTR mode IVs needn't be unpredictable, you can set your initial IV to zero, then simply make sure that you always hang onto the last counter value between messages. Provided that you never ever re-use a counter value with a given key (even across system restarts) you'll be fine.**
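Here's a sketch of that approach. Since Python's standard library has no AES, HMAC-SHA256 stands in for the block cipher; everything else -- the zero start, the counter that only ever moves forward -- is the point:

```python
import hmac, hashlib

def keystream(key: bytes, start: int, n: int) -> bytes:
    # CTR-mode keystream. HMAC-SHA256 stands in for AES purely so this
    # sketch runs on the stdlib; use a real block cipher in practice.
    out, ctr = b"", start
    while len(out) < n:
        out += hmac.new(key, ctr.to_bytes(16, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

class Sender:
    def __init__(self, key: bytes):
        self.key = key
        self.next_ctr = 0   # in real use: persist this across restarts!

    def encrypt(self, msg: bytes) -> tuple[int, bytes]:
        start = self.next_ctr
        self.next_ctr += -(-len(msg) // 32)   # blocks consumed, never reused
        ks = keystream(self.key, start, len(msg))
        return start, bytes(a ^ b for a, b in zip(msg, ks))

key = b"k" * 32
s = Sender(key)
hdr1, ct1 = s.encrypt(b"attack at dawn")
hdr2, ct2 = s.encrypt(b"retreat at dusk")

# The receiver decrypts using the counter value sent with each message.
pt2 = bytes(a ^ b for a, b in zip(ct2, keystream(key, hdr2, len(ct2))))
print(pt2)  # b'retreat at dusk'
```

No randomness anywhere, and (per the footnote) you'd still want a MAC over the ciphertext and counter before calling this a real scheme.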

This doesn't work with CBC mode, since that actually does require an unpredictable IV at the head of each chain. You can hack around this requirement in various ways, but I'm not going to talk about those here; nothing good will come of it.

Public-key encryption. Unfortunately, public-key encryption is much more difficult to get right without a good RNG.

Here's the fundamental problem: if an attacker knows the randomness you used to produce a ciphertext, then (in the worst case) she can simply encrypt 'guess' messages until she obtains the same ciphertext as you. At that point she knows what you encrypted.***
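The unpadded-RSA case from the footnote makes this concrete. With no randomness at all, encryption is deterministic, and the attacker just tests guesses against the public key:

```python
# Textbook RSA with the classic toy parameters (p=61, q=53).
n, e = 3233, 17   # public key; no padding, no randomness

# The victim encrypts a low-entropy message (say, a PIN digit pair).
ciphertext = pow(42, e, n)

# The attacker needs no secret key: encrypt every guess and compare.
recovered = next(m for m in range(100) if pow(m, e, n) == ciphertext)
print(recovered)  # 42
```

A randomized padding scheme exists precisely to kill this attack, which is why losing the randomness (or making it predictable) quietly puts you back in this world.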

Obviously this attack only works if the attacker can guess the message you encrypted. Hence it's possible that high-entropy messages (symmetric keys, for example) will encrypt securely even without good randomness. But there's no guarantee of this. Elgamal, for example, can fail catastrophically when you encrypt two messages with the same random nonce.****

Although I'm not going to endorse any specific public-key encryption scheme, it seems likely that some schemes will hold up better than others. For example, while predictably-randomized RSA-OAEP and RSA-OAEP+ will both be vulnerable to guessing attacks, there's some (intuitive) reason to believe that they'll remain secure for high-entropy messages like keys. I can't prove this, but it seems like a better bet than using Elgamal (clearly broken) or older padding schemes like RSA-PKCS#1v1.5.

If my intuition isn't satisfying to you, quite a lot of research is still being done in this area. See, for example, recent work on deterministic public-key encryption, or hedged public-key encryption. Note that all of this work rests on the assumption that you're encrypting high-entropy messages.


Protocols

I can't conclude this post without at least a token discussion of how a bad RNG can affect cryptographic protocols. The short version is that it depends on the protocol. The shorter version is that it's almost always bad.

Consider the standard TLS handshake. Both sides use their RNGs to generate nonces. Then the client generates a random 'Pre-master Secret' (PMS), encrypts it under the server's key, and transmits it over the wire. The 'Master Secret' (and later, transport key) is derived by hashing together all of the nonces and the PMS.

Since the PMS is the only real 'secret' in the protocol (everything else is sent in the clear), predicting it is the same as recovering the transport key. Thus TLS is not safe to use if the client RNG is predictable. What's interesting is that the protocol is secure (at least, against passive attackers) even if the server's RNG fails. I can only guess that this was a deliberate choice on the part of TLS's designers.
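Here's a grossly simplified sketch of that dependence. This is not the real TLS PRF -- just the structure of the argument: everything except the PMS travels in the clear, so a predictable PMS means a predictable key:

```python
import hashlib, hmac, os

def master_secret(pms: bytes, client_nonce: bytes, server_nonce: bytes) -> bytes:
    # Stand-in for the TLS key derivation: mix the secret PMS with both
    # public nonces. The real PRF is more elaborate, but the dependency
    # structure is the same.
    return hmac.new(pms, client_nonce + server_nonce, hashlib.sha256).digest()

client_nonce = os.urandom(32)   # sent in the clear
server_nonce = os.urandom(32)   # sent in the clear

# Broken client RNG: the "random" 48-byte PMS is predictable.
pms = b"\x00" * 48
session_key = master_secret(pms, client_nonce, server_nonce)

# A passive eavesdropper saw both nonces on the wire, predicts the PMS,
# and derives the identical transport key -- no tampering required.
attacker_key = master_secret(b"\x00" * 48, client_nonce, server_nonce)
print(attacker_key == session_key)  # True: the session is transparent
```

Run the same thought experiment with only the server's RNG broken and the attacker gains nothing, since the PMS stays secret -- which is the asymmetry noted above.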

SSL handshake (source).
Protocols are already plenty exciting when you have a working RNG. Adding a bad RNG to the mix is like pouring fireworks on a fire. It's at least possible to build protocols that are resilient to one participant losing their RNG, but it's very tricky to accomplish -- most protocols will fail in unexpected ways.

In Summary

If you take nothing else from this post, I hope it's this: using a broken RNG is just a bad idea. If you think there's any chance your generator will stop working, then for god's sake, fix it! Don't waste your time doing any of the stuff I mention above.

That said, there legitimately are cases where your RNG can go wrong, or where you don't have one in the first place. The purpose of this post was to help you understand these scenarios, and the potential consequences for your system. So model it. Think about it. Then spend your time on better things.


* The classic example is Debian's 2008 OpenSSL release, which used a 15-bit process ID as the only seed for its PRG. This wasn't obvious during testing, since the 32,768 possible RNG streams all looked pretty random. It was only after the public release that people noticed that many devices were sharing TLS keys.

** If you're going to do this, you should also be sure to use a MAC on your ciphertext, including the initial counter value for each message.

*** A great example is unpadded, textbook RSA. If m is random, then it's quite difficult to recover m given m^e mod N. If, however, you have a few good guesses for m and you know the public key (N, e), you can easily try each of your guesses and compare the results.

**** Given two Elgamal ciphertexts on the same key and randomness (g^r, y^r*M1), (g^r, y^r*M2) you can easily compute M1/M2. A similar thing happens with hash Elgamal. This may or may not be useful to you, depending on how much you know about the content of the various messages.

Thursday, March 1, 2012

A brief update

My early-week post on the MITM certificate mess seems to have struck a nerve with readers. (Or perhaps I just picked the right time to complain!) Since folks seem interested in this subject, I wanted to follow up with a few quick updates:

  • The EFF has released a new version of HTTPS Everywhere, which includes a nifty 'Decentralized SSL Observatory' feature. This scans for unusual certificates (e.g., MITM certs, certs with weak keys) and reports them back to EFF for logging. A very nice step towards a better 'net.
  • StalkR reminds me that Chrome 18 includes support for Public-key Pinning. This is an HTTP extension that allows a site operator to 'pin' their site to one (or more) pre-specified public keys for a given period of time. A pinned browser will reject any alternative keys that show up -- even if they're embedded in a valid certificate.
  • A couple of readers point out that popular sites (e.g., Google and Facebook) change their certificates quite frequently -- possibly due to the use of load balancers -- which poses a problem for "carry a list of legitimate certs with you" solutions. I recognize this. The best I can say is that we're all better off if bogus certs are easy to detect. Hopefully site operators will find a compromise that makes this easy for us.
Appearances to the contrary, this blog is not going to become a forum for complaining about CAs. I'll be back in a few days with more wonky crypto posts, including some ideas for dealing with bad randomness, some thoughts on patented modes of operation, and an update on the progress that researchers are making with Fully-Homomorphic Encryption.