Monday, March 21, 2016

Attack of the Week: Apple iMessage

Today's Washington Post has a story entitled "Johns Hopkins researchers poke a hole in Apple’s encryption", which describes the results of some research my students and I have been working on over the past few months.

As you might have guessed from the headline, the work concerns Apple, and specifically Apple's iMessage text messaging protocol. Over the past months my students Christina Garman, Ian Miers, Gabe Kaptchuk and Mike Rushanan and I have been looking closely at the encryption used by iMessage, in order to determine how the system fares against sophisticated attackers. The results of this analysis include some very neat new attacks that allow us to -- under very specific circumstances -- decrypt the contents of iMessage attachments, such as photos and videos.

The research team. From left:
Gabe Kaptchuk, Mike Rushanan, Ian Miers, Christina Garman
Now before I go further, it's worth noting that the security of a text messaging protocol may not seem like the most important problem in computer security. And under normal circumstances I might agree with you. But today the circumstances are anything but normal: encryption systems like iMessage are at the center of a critical national debate over the role of technology companies in assisting law enforcement.

A particularly unfortunate aspect of this controversy has been the repeated call for U.S. technology companies to add "backdoors" to end-to-end encryption systems such as iMessage. I've always felt that one of the most compelling arguments against this approach -- an argument I've made along with other colleagues -- is that we just don't know how to construct such backdoors securely. But lately I've come to believe that this position doesn't go far enough -- in the sense that it is woefully optimistic. The fact of the matter is this: forget backdoors, we barely know how to make encryption work at all. If anything, this work makes me much gloomier about the subject.

But enough with the generalities. The TL;DR of our work is this:
Apple iMessage, as implemented in versions of iOS prior to 9.3 and Mac OS X prior to 10.11.4, contains serious flaws in the encryption mechanism that could allow an attacker -- who obtains iMessage ciphertexts -- to decrypt the payload of certain attachment messages via a slow but remote and silent attack, provided that one sender or recipient device is online. While capturing encrypted messages is difficult in practice on recent iOS devices, thanks to certificate pinning, it could still be conducted by a nation state attacker or a hacker with access to Apple's servers. You should probably patch now.
For those who want the gory details, I'll proceed with the rest of this post using the "fun" question and answer format I save for this sort of post.
What is Apple iMessage and why should I care?
Those of you who read this blog will know that I have a particular obsession with Apple iMessage. This isn’t because I’m weirdly obsessed with Apple — although it is a little bit because of that. Mostly it's because I think iMessage is an important protocol. The text messaging service, which was introduced in 2011, has the distinction of being the first widely-used end-to-end encrypted text messaging system in the world. 

To understand the significance of this, it's worth giving some background. Before iMessage, the vast majority of text messages were sent via SMS or MMS, meaning that they were handled by your cellular provider. Although these messages are technically encrypted, this encryption exists only on the link between your phone and the nearest cellular tower. Once an SMS reaches the tower, it’s decrypted, then stored and delivered without further protection. This means that your most personal messages are vulnerable to theft by telecom employees or sophisticated hackers. Worse, many U.S. carriers still use laughably weak encryption and protocols that are vulnerable to active interception.

So from a security point of view, iMessage was a pretty big deal. In a single stroke, Apple deployed encrypted messaging to millions of users, ensuring (in principle) that even Apple itself couldn't decrypt their communications. The even greater accomplishment was that most people didn’t even notice this happened — the encryption was handled so transparently that few users are aware of it. And Apple did this at very large scale: today, iMessage handles peak throughput of more than 200,000 encrypted messages per second, with a supported base of nearly one billion devices. 
So iMessage is important. But is it any good?
Answering this question has been kind of a hobby of mine for the past couple of years. In the past I've written about Apple's failure to publish the iMessage protocol, and on iMessage's dependence on a vulnerable centralized key server. Indeed, the use of a centralized key server is still one of iMessage's biggest weaknesses, since an attacker who controls the keyserver can use it to inject keys and conduct man in the middle attacks on iMessage users.

But while key servers are a risk, attacks on a key server seem fundamentally challenging to implement -- since they require the ability to actively manipulate Apple infrastructure without getting caught. Moreover, such attacks are only useful for prospective surveillance. If you fail to substitute a user's key before they have an interesting conversation, you can't recover their communications after the fact.

A more interesting question is whether iMessage's encryption is secure enough to stand up against retrospective decryption attacks -- that is, attempts to decrypt messages after they have been sent. Conducting such attacks is much more interesting than the naive attacks on iMessage's key server, since any such attack would require the existence of a fundamental vulnerability in iMessage's encryption itself. And in 2016 encryption seems like one of those things that we've basically figured out how to get right.

Which means, of course, that we probably haven't.
How does iMessage encryption work?
What we know about the iMessage encryption protocol comes from a previous reverse-engineering effort by a group from Quarkslab, as well as from Apple's iOS Security Guide. Based on these sources, we arrive at the following (simplified) picture of the basic iMessage encryption scheme:


To encrypt an iMessage, your phone first obtains the RSA public key of the person you're sending to. It then generates a random AES key k and encrypts the message with that key using CTR mode. Then it encrypts k using the recipient's RSA key. Finally, it signs the whole mess using the sender's ECDSA signing key. This prevents tampering along the way.
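
A rough sketch of that flow in Python, using the pyca/cryptography library, may make it concrete. The key sizes and padding choices below are illustrative stand-ins (I use a 2048-bit RSA key, for instance, rather than Apple's 1280-bit one), not Apple's exact wire format:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, ec, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# recipient's RSA keypair and sender's ECDSA keypair (held/fetched by the devices)
recipient_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_ecdsa = ec.generate_private_key(ec.SECP256R1())

def encrypt_imessage(plaintext: bytes):
    k = os.urandom(16)                              # fresh 128-bit AES key
    nonce = os.urandom(16)                          # CTR-mode counter block
    enc = Cipher(algorithms.AES(k), modes.CTR(nonce)).encryptor()
    aes_ct = enc.update(plaintext) + enc.finalize()
    # wrap the AES key under the recipient's RSA public key
    wrapped_k = recipient_rsa.public_key().encrypt(
        k, padding.OAEP(mgf=padding.MGF1(hashes.SHA1()),
                        algorithm=hashes.SHA1(), label=None))
    # sign the whole mess with the *sender's* ECDSA key
    sig = sender_ecdsa.sign(nonce + aes_ct + wrapped_k, ec.ECDSA(hashes.SHA256()))
    return nonce, aes_ct, wrapped_k, sig

Note that nothing above ties the AES ciphertext to the RSA portion except the sender's signature -- which, as we'll see, anyone with an iMessage account can replace.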

So what's missing here?

Well, the most obvious missing element is that iMessage does not use a Message Authentication Code (MAC) or authenticated encryption scheme to prevent tampering with the message. To simulate this functionality, iMessage relies solely on the ECDSA signature computed by the sender. Naively, this might seem good enough. Crucially, it isn't.

The attack works as follows. Imagine that a clever attacker intercepts the message above and is able to register her own iMessage account. First, the attacker strips off the original ECDSA signature made by the legitimate sender, and replaces it with a signature of her own. Next, she sends the newly signed message to the original recipient using her own account:


The outcome is that the user receives and decrypts a copy of the message, which has now apparently originated from the attacker rather than from the original sender. Ordinarily this would be a pretty mild attack -- but there's a useful wrinkle. In replacing the sender's signature with one of her own, the attacker has gained a powerful capability. Now she can tamper with the AES ciphertext (red) at will.

Specifically, since in iMessage the AES ciphertext is not protected by a MAC, it is therefore malleable. As long as the attacker signs the resulting message with her key, she can flip any bits in the AES ciphertext she wants -- and this will produce a corresponding set of changes when the recipient ultimately decrypts the message. This means that, for example, if the attacker guesses that the message contains the word "cat" at some position, she can flip bits in the ciphertext to change that part of the message to read "dog" -- and she can make this change even though she can't actually read the encrypted message.
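
Here's a tiny self-contained demonstration of that malleability in Python (a sketch of the generic CTR-mode property, not iMessage's actual message format):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)

def aes_ctr(data: bytes) -> bytes:                  # in CTR mode, decryption == encryption
    c = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return c.update(data) + c.finalize()

ct = aes_ctr(b"my cat is cute")
# the attacker guesses that "cat" sits at offset 3 and XORs in ("cat" XOR "dog")
delta = bytes(a ^ b for a, b in zip(b"cat", b"dog"))
mauled = ct[:3] + bytes(c ^ d for c, d in zip(ct[3:6], delta)) + ct[6:]
print(aes_ctr(mauled))                              # b'my dog is cute', no key required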

Only one more big step to go.

Now further imagine that the recipient's phone will accept the message only if the plaintext that emerges from decryption is correctly formatted. If the plaintext is improperly formatted -- for a silly example, if our tampering made it say "*7!" instead of "pig" -- then on receiving the message, the recipient's phone might return an error that the attacker can see.

It's well known that such a configuration allows an attacker to learn information about the original message, provided that she can send many "mauled" variants to be decrypted. By mauling the underlying message in specific ways -- e.g., attempting to turn "dog" into "pig" and observing whether decryption succeeds -- the attacker can gradually learn the contents of the original message. The technique is known as a format oracle, and it's similar to the padding oracle attack discovered by Vaudenay.
So how exactly does this format oracle work?
The format oracle in iMessage is not a padding oracle. Instead it has to do with the compression that iMessage uses on every message it sends.

You see, prior to encrypting each message payload, iMessage applies a complex formatting that happens to conclude with gzip compression. Gzip is a modestly complex compression scheme that internally identifies repeated strings, applies Huffman coding, then tacks a CRC checksum computed over the original data at the end of the compressed message. It's this gzip-compressed payload that's encrypted within the AES portion of an iMessage ciphertext.

It turns out that given the ability to maul a gzip-compressed, encrypted ciphertext, there exists a fairly complicated attack that allows us to gradually recover the contents of the message by mauling the original message thousands of times and sending the modified versions to be decrypted by the target device. The attack turns on our ability to maul the compressed data by flipping bits, then "fix up" the CRC checksum correspondingly so that it reflects the change we hope to see in the uncompressed data. Depending on whether that test succeeds, we can gradually recover the contents of a message -- one byte at a time.
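
The checksum "fix up" works because CRC-32 is linear (strictly, affine) over GF(2): for equal-length strings, crc(a XOR b) = crc(a) XOR crc(b) XOR crc(00...0). Here's a sketch of that trick in Python, applied to raw plaintext for simplicity -- the real attack applies the same idea to gzip's compressed stream and trailer:

import zlib

original = b"please send me the dog photo"
delta = bytearray(len(original))
delta[19:22] = bytes(a ^ b for a, b in zip(b"dog", b"pig"))   # turn "dog" into "pig"
mauled = bytes(a ^ b for a, b in zip(original, delta))

zeros = bytes(len(original))
crc_diff = zlib.crc32(bytes(delta)) ^ zlib.crc32(zeros)       # depends only on the guessed delta
assert zlib.crc32(mauled) == zlib.crc32(original) ^ crc_diff

Because crc_diff depends only on the change the attacker hopes to make, she can XOR it into the (encrypted) CRC field just as she XORs delta into the message body -- all without reading either one.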

While I'm making this sound sort of simple, the truth is it's not. The message is encoded using Huffman coding, with a dynamic Huffman table we can't see -- since it's encrypted. This means we need to make laser-specific changes to the ciphertext such that we can predict the effect of those changes on the decrypted message, and we need to do this blind. Worse, iMessage has various countermeasures that make the attack more complex.

The complete details of the attack appear in the paper, and they're pretty eye-glazing, so I won't repeat them here. In a nutshell, we are able to decrypt a message under the following conditions:
  1. We can obtain a copy of the encrypted message
  2. We can send approximately 2^18 (invisible) encrypted messages to the target device
  3. We can determine whether or not those messages decrypted successfully
The first condition can be satisfied by obtaining ciphertexts from a compromise of Apple's Push Notification Service servers (which are responsible for routing encrypted iMessages) or by intercepting TLS connections using a stolen certificate -- something made more difficult due to the addition of certificate pinning in iOS 9. The third element is the one that initially seems the most challenging. After all, when I send an iMessage to your device, there's no particular reason that your device should send me any sort of response when the message decrypts. And yet this information is fundamental to conducting the attack!

It turns out that there's a big exception to this rule: attachment messages.
How do attachment messages differ from normal iMessages?
When I include a photo in an iMessage, I don't actually send you the photograph through the normal iMessage channel. Instead, I first encrypt that photo using a random 256-bit AES key, then I compute a SHA1 hash and upload the encrypted photo to iCloud. What I send you via iMessage is actually just an iCloud.com URL to the encrypted photo, the SHA1 hash, and the decryption key.

Contents of an "attachment" message.
When you successfully receive and decrypt an iMessage from some recipient, your Messages client will automatically reach out and attempt to download that photo. It's this download attempt, which happens only when the phone successfully decrypts an attachment message, that makes it possible for an attacker to know whether or not the decryption has succeeded.
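
Schematically, then, an attachment message carries something like this (the field names below are my own illustrative guesses, not Apple's exact keys):

attachment_message = {
    "url":  "https://content.icloud.com/<opaque-id>",  # where the encrypted blob lives
    "key":  "<64 hex characters>",                     # the random 256-bit AES key
    "sha1": "<40 hex characters>",                     # hash of the encrypted attachment
}

Whether or not the victim's phone reaches out to that URL is exactly the one bit of information the format oracle attack needs.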

One approach for the attacker to detect this download attempt is to gain access to and control your local network connections. But this seems impractical. A more sophisticated approach is to actually maul the URL within the ciphertext so that rather than pointing to iCloud.com, it points to a related URL such as i8loud.com. Then the attacker can simply register that domain, place a server there and allow the client to reach out to it. This requires no access to the victim's local network.

By capturing an attachment message, repeatedly mauling it, and monitoring the download attempts made by the victim device, we can gradually recover all of the digits of the encryption key stored within the attachment. Then we simply reach out to iCloud and download the attachment ourselves. And that's game over. The attack is currently quite slow -- it takes more than 70 hours to run -- but mostly because our code is slow and not optimized. We believe with more engineering it could be made to run in a fraction of a day.

Result of decrypting the AES key for an attachment. Note that the ? represents digits we could not recover for various reasons, typically due to string repetitions. We can brute-force the remaining digits.
The need for an online response is why our attack currently works against attachment messages only: those are simply the messages that make the phone do visible things. However, this does not mean the flaw in iMessage encryption is somehow limited to attachments -- it could very likely be used against other iMessages, given an appropriate side-channel.
How is Apple fixing this?
Apple's fixes are twofold. First, starting in iOS 9.0 (and before our work), Apple began deploying aggressive certificate pinning across iOS applications. This doesn't fix the attack on iMessage crypto, but it does make it much harder for attackers to recover iMessage ciphertexts to decrypt in the first place.

Unfortunately, even if this works perfectly, Apple still has access to iMessage ciphertexts. Worse, Apple's servers will retain these messages for up to 30 days if they are not delivered to one of your devices. A vulnerability in Apple Push Network authentication, or a compromise of these servers, could expose them all. This means that pinning is only a mitigation, not a true fix.

As of iOS 9.3, Apple has implemented a short-term mitigation that my student Ian Miers proposed. This relies on the fact that while the AES ciphertext is malleable, the RSA-OAEP portion of the ciphertext is not. The fix maintains a "cache" of recently received RSA ciphertexts and rejects any repeated ciphertexts. In practice, this shuts down our attack -- provided the cache is large enough. We believe it probably is.
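
In other words, something like the following sketch -- my guess at the general shape of the mechanism, not Apple's actual code:

import hashlib
from collections import OrderedDict

class RSACiphertextCache:
    """Reject any message whose RSA-wrapped key has been seen recently."""
    def __init__(self, capacity: int = 100_000):
        self.capacity = capacity
        self.seen = OrderedDict()

    def accept(self, rsa_ciphertext: bytes) -> bool:
        digest = hashlib.sha256(rsa_ciphertext).digest()
        if digest in self.seen:
            return False                     # a replayed RSA blob: drop the message
        self.seen[digest] = True
        if len(self.seen) > self.capacity:
            self.seen.popitem(last=False)    # evict the oldest entry
        return True

Since our attack re-sends thousands of mauled messages that all reuse the victim's original RSA-wrapped key, a sufficiently large cache of this kind stops it cold.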

In the long term, Apple should drop iMessage like a hot rock and move to Signal/Axolotl.
So what does it all mean?
As much as I wish I had more to say, fundamentally, security is just plain hard. Over time we get better at this, but for the foreseeable future we'll never be ahead. The only outcome I can hope for is that people realize how hard this process is -- and stop asking technologists to add unacceptable complexity to systems that already have too much of it.

Tuesday, March 1, 2016

Attack of the week: DROWN

To every thing there is a season. And in the world of cryptography, today we have the first signs of the season of TLS vulnerabilities.

This year's season is off to a roaring start with not one, but two serious bug announcements by the OpenSSL project, each of which guarantees that your TLS connections are much less private than you'd like them to be. I can't talk about both vulnerabilities and keep my sanity, so today I'm going to confine myself to the more dramatic of the two: a new cross-protocol attack on TLS named "DROWN".

Technically DROWN stands for "Decrypting RSA using Obsolete and Weakened eNcryption", but honestly feel free to forget that because the name itself is plenty descriptive. In short, due to a series of dumb mistakes on the part of a vast number of people, DROWN means that TLS connections to a depressingly huge slice of the web (and mail servers, VPNs etc.) are essentially open to attack by fairly modest adversaries.

So that's bad news. The worse news -- as I'll explain below -- is that this whole mess was mostly avoidable.

For a detailed technical explanation of DROWN, you should go read the complete technical paper by Aviram et al. Or visit the DROWN team's excellent website. If that doesn't appeal to you, read on for a high level explanation of what DROWN is all about, and what it means for the security of the web. Here's the TL;DR:
If you're running a web server configured to use SSLv2, and particularly one that's running OpenSSL (even with all SSLv2 ciphers disabled!), you may be vulnerable to a fast attack that decrypts many recorded TLS connections made to that box. Most worryingly, the attack does not require the client to ever make an SSLv2 connection itself, and it isn't a downgrade attack. Instead, it relies on the fact that SSLv2 -- and particularly the legacy "export" ciphersuites it incorporates -- are pure poison, and simply having these active on a server is enough to invalidate the security of all connections made to that device.
For the rest of this post I'll use the "fun" question and answer format I save for this kind of attack. First, some background.
What are TLS and SSLv2, and why should I care?
Transport Layer Security (TLS) is the most important security protocol on the Internet. You should care about it because nearly every transaction you conduct on the Internet relies on TLS (v1.0, 1.1 or 1.2) to some degree, and failures in TLS can flat out ruin your day.

But TLS wasn't always TLS. The protocol began its life at Netscape Communications under the name "Secure Sockets Layer", or SSL. Rumor has it that the first version of SSL was so awful that the protocol designers collected every printed copy and buried them in a secret New Mexico landfill site. As a consequence, the first public version of SSL is actually SSL version 2. It's pretty terrible as well -- but not (entirely) for the reasons you might think.

Let me explain.

Working group last call, SSL version 2.
The reason you might think SSLv2 is terrible is because it was a product of the mid-1990s, which modern cryptographers view as the "dark ages of cryptography". Many of the nastier cryptographic attacks we know about today had not yet been discovered. As a result, the SSLv2 protocol designers were forced to essentially grope their way in the dark, and so were frequently devoured by grues -- to their chagrin and our benefit, since the attacks on SSLv2 offered priceless lessons for the next generation of protocols.

And yet, these honest mistakes are not the worst thing about SSLv2. The most truly awful bits stem from the fact that the SSLv2 designers were forced to ruin their own protocol. This was the result of needing to satisfy the U.S. government's misguided attempt to control the export of cryptography. Rather than using only secure encryption, the designers were forced to build in a series of "export-grade ciphersuites" that offered abysmal 40-bit session keys and other nonsense. I've previously written about the effect of export crypto on today's security. Today we'll have another lesson.
Wait, isn't SSLv2 ancient history?
For some time in the early 2000s, SSLv2 was still supported by browsers as a fallback protocol, which meant that active attackers could downgrade an SSLv3 or TLS connection by tricking a browser into using the older protocol. Fortunately those attacks are long gone now: modern web browsers have banished SSLv2 entirely -- along with export cryptography in general. If you're using a recent version of Chrome, IE or Safari, you should never have to worry about accidentally making an SSLv2 connection.

The problem is that while clients (such as browsers) have done away with SSLv2, many servers still support the protocol. In most cases this is the result of careless server configuration. In others, the blame lies with crummy and obsolete embedded devices that haven't seen a software update in years -- and probably never will. (You can see if your server is vulnerable here.)

And then there's the special case of OpenSSL, which helpfully provides a configuration option that's intended to disable SSLv2 ciphersuites -- but which, unfortunately, does no such thing. In the course of their work, the DROWN researchers discovered that even when this option is set, clients may still request arbitrary SSLv2 ciphersuites. (This issue was quietly patched in January. Upgrade.)

The reason this matters is that SSL/TLS servers do a very silly thing. You see, since people don't like to buy multiple certificates, a server that's configured to use both TLS and SSLv2 will generally use the same RSA private key to support both protocols. This means any bugs in the way SSLv2 handles that private key could very well affect the security of TLS.

And this is where DROWN comes in.
So what is DROWN?
DROWN is a classic example of a "cross protocol attack". This type of attack makes use of bugs in one protocol implementation (SSLv2) to attack the security of connections made under a different protocol entirely -- in this case, TLS. More concretely, DROWN is based on the critical observation that while SSLv2 and TLS both support RSA encryption, TLS properly defends against certain well-known attacks on this encryption -- while SSLv2's export suites emphatically do not.

I will try to make this as painless as possible, but here we need to dive briefly into the weeds.

You see, both SSLv2 and TLS use a form of RSA encryption padding known as RSA-PKCS#1v1.5. In the late 1990s, a man named Daniel Bleichenbacher proposed an amazing attack on this encryption scheme that allows an attacker to decrypt an RSA ciphertext efficiently -- under the sole condition that they can ask an online server to decrypt many related ciphertexts, and give back only one bit of information for each one -- namely, the bit representing whether decryption was successful or not.
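
To make "successful decryption" concrete: after raw RSA decryption, the server checks whether the result looks like a PKCS#1 v1.5 encryption block. A simplified sketch of that check in Python (real implementations also verify the length of the unpadded secret):

def pkcs1_v15_conformant(em: bytes) -> bool:
    # expected layout: 0x00 0x02 || at least 8 nonzero padding bytes || 0x00 || message
    if len(em) < 11 or em[0] != 0x00 or em[1] != 0x02:
        return False
    try:
        separator = em.index(0x00, 2)        # first zero byte after the header
    except ValueError:
        return False
    return separator >= 10                   # i.e., at least 8 bytes of nonzero padding

Each "valid or not?" answer leaks a little information about the target ciphertext, and Bleichenbacher showed how to turn roughly a million such answers into a full decryption.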

Bleichenbacher's attack proved particularly devastating for SSL servers, since the standard SSL RSA-based handshake involves the client encrypting a secret (called the Pre-Master Secret, or PMS) under the server's RSA public key, and then sending this value over the wire. An attacker who eavesdrops the encrypted PMS can run the Bleichenbacher attack against the server, sending it thousands of related values (in the guise of new SSL connections), and using the server's error responses to gradually decrypt the PMS itself. With this value in hand, the attacker can compute SSL session keys and decrypt the recorded SSL session.

A nice diagram of the SSL RSA handshake, courtesy Cloudflare (who don't know I'm using it, thanks guys!)
The main SSL/TLS countermeasure against Bleichenbacher's attack is basically a hack. When the server detects that an RSA ciphertext has decrypted improperly, it lies. Instead of returning an error, which the attacker could use to implement the attack, it generates a random pre-master secret and continues with the rest of the protocol as though this bogus value was what it actually decrypted. This causes the protocol to break down later on down the line, since the server will compute essentially a random session key. But it's sufficient to prevent the attacker from learning whether the RSA decryption succeeded or not, and that kills the attack dead.
Anti-Bleichenbacher countermeasure from the TLS 1.2 spec.
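
In pseudocode-ish Python, the countermeasure looks roughly like this (a sketch only; rsa_decrypt_pkcs1 is a stand-in for whatever the TLS stack actually calls internally, not a real API):

import os

def recover_premaster_secret(server_rsa_key, encrypted_pms: bytes) -> bytes:
    fake_pms = os.urandom(48)                # decide on the lie *before* decrypting
    try:
        pms = rsa_decrypt_pkcs1(server_rsa_key, encrypted_pms)   # stand-in helper
        if len(pms) != 48:
            return fake_pms
        return pms
    except Exception:
        return fake_pms                      # never reveal that decryption failed
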
Now let's take a moment to reflect and make an observation.

If the attacker sends a valid RSA ciphertext to be decrypted, the server will decrypt it and obtain some PMS value. If the attacker sends the same valid ciphertext a second time, the server will decrypt and obtain the same PMS value again. Indeed, the server will always get the same PMS even if the attacker sends the same valid ciphertext a hundred times in a row.

On the other hand, if the attacker repeatedly sends the same invalid ciphertext, the server will choose a different PMS every time. This observation is crucial.

In theory, if the attacker holds a ciphertext that might be valid or invalid -- and the attacker would like to know which is true -- they can send the same ciphertext to be decrypted repeatedly. This will lead to two possible conditions. In condition (1), where the ciphertext is valid, decryption will produce the same PMS every time. In condition (2), where the ciphertext is invalid, decryption will produce a different PMS each time. If the attacker could somehow tell the difference between condition (1) and condition (2), they could determine whether the ciphertext was valid -- and that determination alone would be enough to resurrect the Bleichenbacher attack. Fortunately in TLS, the PMS is never used directly; it's first passed through a strong hash function and combined with a bunch of random nonces to obtain a Master Secret. This result is then used with further strong ciphers and hash functions. Thanks to the strength of the hash function and ciphers, the resulting keys are so garbled that the attacker literally cannot tell whether she's observing condition (1) or (2).

And here we finally run into the problem of SSLv2.

You see, SSLv2 implementations include a similar anti-Bleichenbacher countermeasure. Only here there are some key differences. In SSLv2 there is no PMS -- the encrypted value is used as the Master Secret and employed directly to derive the encryption session key. Moreover, in export modes, the Master Secret may be as short as 40 bits, and used with correspondingly weak export ciphers. This means an attacker can send multiple ciphertexts, then brute-force the resulting short keys. After recovering these keys for a tiny number of sessions, they will be able to determine whether they're in condition (1) or (2). This would effectively resurrect the Bleichenbacher attack.
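
The brute-force step really is that simple. A sketch, where export_decrypt stands in for the 40-bit export cipher (RC2-40 or RC4-40) -- it is a placeholder here, not a real API:

def recover_export_key(known_plaintext: bytes, ciphertext: bytes) -> int:
    for key in range(2 ** 40):               # ~10^12 candidates: feasible on rented hardware
        if export_decrypt(key, ciphertext) == known_plaintext:
            return key
    raise ValueError("no key found")
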
This still sounds like an attack on SSLv2, not on TLS. What am I missing? 
SSLv2 export ciphers.
And now we come to the full horror of SSLv2.

Since most servers configured with both SSLv2 and TLS support will use the same RSA private key for decrypting sessions from either protocol, a Bleichenbacher attack on the SSLv2 implementation -- with its vulnerable crappy export ciphersuites -- can be used to decrypt the contents of a normal TLS-based RSA ciphertext. After all, both protocols are using the same darned secret key. Due to formatting differences in the RSA ciphertext between the two protocols, this attack doesn't work all the time -- but it does work for approximately one out of a thousand TLS handshakes.

To put things succinctly: with access to a whole hell of a lot of computation, an attacker can intercept a TLS connection, then at their leisure make many thousands of queries to the SSLv2-enabled server, and decrypt that connection. The "general DROWN" attack actually requires watching about 1,000 TLS handshakes to find a vulnerable RSA ciphertext, about 40,000 queries to the server, and about 2^50 offline operations.
LOL. That doesn't sound practical at all. You cryptographers suck. 
First off, that isn't really a question, it's more of a rude statement. But since this is exactly the sort of reaction cryptographers often get when they point out perfectly practical theoretical attacks on real protocols, I'd like to take a moment to push back.

While the attack described above seems costly, it can be conducted in several hours for about $440 on Amazon EC2. Are your banking credentials worth $440? Probably not. But someone else's probably are. Given all the things we have riding on TLS, it's better for it not to be broken at all.

More urgently, the reason cryptographers spend time on "impractical attacks" is that attacks always get better. And sometimes they get better fast.

The attack described above is called "General DROWN" and yes, it's a bit impractical. But in the course of writing just this single paper, the DROWN researchers discovered a second variant of their attack that's many orders of magnitude faster than the general one described above. This attack, which they call "Special DROWN," can decrypt a TLS RSA ciphertext in about one minute on a single CPU core.

This attack relies on a bug in the way OpenSSL handles SSLv2 key processing, a bug that was (inadvertently) fixed in March 2015 but remains unpatched on a huge number of servers across the Internet. The Special DROWN bug puts DROWN squarely in the domain of script kiddies, for thousands of websites across the Internet.
So how many sites are vulnerable?
This is probably the most depressing part of the entire research project. According to wide-scale Internet scans run by the DROWN researchers, more than 2.3 million HTTPS servers with browser-trusted certificates are vulnerable to special DROWN, and 3.5 million HTTPS servers are vulnerable to General DROWN. That's a sizeable chunk of the encrypted web, including a surprising chunk of the Chinese and Colombian Internet.

And while I've focused on the main attacks in this post, it's worth pointing out that DROWN also affects other protocol suites, like TLS with ephemeral Diffie-Hellman and even Google's QUIC. So these vulnerabilities should not be taken lightly.

If you want to know whether your favorite site is vulnerable, you can use the DROWN researchers' handy test.
What happens now?
In January, OpenSSL patched the bug that allows the SSLv2 ciphersuites to remain alive. Last March, the project inadvertently fixed the bug that makes Special DROWN possible. But that's hardly the end. The patch they're announcing today is much more direct: hopefully it will make it impossible to turn on SSLv2 altogether. This will solve the problem for everyone... at least for everyone willing to patch. Which, sadly, is unlikely to be anywhere near enough.

More broadly, attacks like DROWN illustrate the cost of having old, vulnerable protocols on the Internet. And they show the terrible cost that we're still paying for export cryptography systems that introduced deliberate vulnerabilities in encryption so that intelligence agencies could pursue a small short-term advantage -- at the cost of long-term security.

Given that we're currently in the midst of a very important discussion about the balance of short- and long-term security, let's hope that we won't make the same mistake again.

Tuesday, December 22, 2015

On the Juniper backdoor

You might have heard that a few days ago, Juniper Networks announced the discovery of "unauthorized code" in the ScreenOS software that underlies the NetScreen line of devices. As a result of this discovery, the company announced a pair of separate vulnerabilities, CVE-2015-7755 and CVE-2015-7756, and urged their customers to patch immediately.

The first of these CVEs (#7755) was an authentication vulnerability, caused by a malicious hardcoded password in SSH and Telnet. Rapid7 has an excellent writeup of the issue. This is a pretty fantastic vulnerability, if you measure by the impact on security of NetScreen users. But on the technological awesomeness scale it rates about a two out of ten, maybe a step above 'hit the guy with a wrench'.

The second vulnerability is a whole different animal. The advisory notes that CVE-2015-7756 -- which is independent of the first issue -- "may allow a knowledgeable attacker who can monitor VPN traffic to decrypt that traffic." This is the kind of vulnerability that makes applied cryptographers cry tears of joy. It certainly did that for me:
And while every reasonable person knows you can't just drop "passive decryption vulnerability" and expect the world to go on with its business, this is exactly what Juniper tried to do. Since they weren't talking about it, it fell to software experts to try to work out what was happening by looking carefully at firmware released by the company.

Now I want to be clear that I was not one of those software experts. IDA scares the crap out of me. But I'm fortunate to know some of the right kind of people, like Steve Checkoway, who I was able to get on the job, mostly by encouraging him to neglect his professional obligations. I also follow some talented folks on Twitter, like H.D. Moore and Ralf Philipp Weinmann. So I was fortunate enough to watch them work, and occasionally (I think?) chip in a helpful observation.

And yes, it was worth it. Because what Ralf and Steve et al. found is beyond belief. Ralf's excellent post provides all of the technical details, and you should honestly just stop reading now and go read that. But since you're still here, the TL;DR is this:
For the past several years, it appears that Juniper NetScreen devices have incorporated a potentially backdoored random number generator, based on the NSA's Dual_EC_DRBG algorithm. At some point in 2012, the NetScreen code was further subverted by some unknown party, so that the very same backdoor could be used to eavesdrop on NetScreen connections. While this alteration was not authorized by Juniper, it's important to note that the attacker made no major code changes to the encryption mechanism -- they only changed parameters. This means that the systems were potentially vulnerable to other parties, even beforehand. Worse, the nature of this vulnerability is particularly insidious and generally messed up.
In the rest of this post I'm going to try to fill in the last part of that statement. But first, some background.

Dual EC DRBG

Pretty much every cryptographic system depends on a secure random number generator, or RNG. These algorithms produce the unpredictable random bits that are consumed by cryptographic protocols. The key word in this description is unpredictable: if an attacker can predict the output of your RNG, then virtually everything you build on it will end up broken.

This fact has not been lost on attackers! The most famous (alleged) example of deliberate random number generator subversion was discovered in 2007 by Dan Shumow and Niels Ferguson from Microsoft, when they announced the possibility of a backdoor in a NIST standard called Dual_EC_DRBG.

I've written extensively about the design of the Dual_EC generator and the process that led to its standardization. Omitting the mathematics, the short version is that Dual EC relies on a special 32-byte constant called Q, which -- if generated by a malicious attacker -- can allow said attacker to predict future outputs of the RNG after seeing a mere 30 bytes of raw output from your generator.
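
For intuition, here's the core of the generator in simplified form (this glosses over the output truncation and other details of the NIST standard). The state s_i is updated and output produced as

    s_{i+1} = x(s_i * P),      r_i = x(s_i * Q),      output_i = truncate(r_i)

where P and Q are elliptic curve points and x(.) denotes the x-coordinate. If whoever chose Q also knows a secret d such that P = d * Q, then from one (nearly) complete output r_i they can reconstruct a point R with x(R) = r_i and compute

    s_{i+1} = x(d * R) = x(s_i * d * Q) = x(s_i * P),

at which point every future output of the generator is predictable.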

The NIST specification of Dual_EC comes with a default value for Q that was generated by the NSA. Nobody has ever been able to determine how NSA generated this value, but leaks by Edward Snowden in 2013 provide strong evidence that NSA may have generated it maliciously. It certainly doesn't smell good.

But enough with the ancient history. We're here to talk about Juniper.

Juniper ScreenOS -- before the "unauthorized code"

Although it was not widely publicized before this week, Juniper's ScreenOS devices have used Dual EC for some time -- probably since before Juniper acquired NetScreen Technologies. Prior to this week, the only place you might have learned about this was on this tiny page posted by the company after the Snowden leaks.


The decision to use Dual EC is strange all by itself. But it gets weirder.

First, ScreenOS doesn't use the NSA's default Q. Instead, they use an alternative Q value that was generated by Juniper and/or NetScreen. If Juniper had really wanted to rule out the possibility of surveillance, this would have been a good opportunity to do so. They could have generated the new constant in a "provably" safe way -- e.g., by hashing some English-language string. Since we have no evidence that they did so, a conservative view holds that the Juniper/NetScreen constant may also have been generated maliciously, rendering it useful for eavesdropping.

Next, ScreenOS uses Dual EC in a strange, non-standard way. Rather than generating all of their random numbers with Dual EC (which would be slow), they only use Dual EC to generate a seed for a fast 3DES-based generator called ANSI X9.17. Since that generator is actually FIPS-140 approved and generally believed to be sufficient to the purpose, it's not clear what value Dual EC is really adding to the system in the first place -- except, of course, its usefulness as a potential backdoor.

The good news here is that the post-processing by ANSI X9.17 should kill the Dual EC backdoor, since the attack relies on the attacker seeing raw output from Dual EC. The ANSI generator appears to completely obfuscate this output, thus rendering Dual EC "safe". This is indeed the argument Juniper made in 2013 when it decided to leave the Dual EC code in ScreenOS.

The problem with this argument is that it assumes that no other code could ever "accidentally" exfiltrate a few bytes of raw Dual EC output. Yet this is exactly the kind of threat you'd worry about in a deliberately backdoored system -- the threat that, just maybe, the developers are not your friend. Thus Dual EC is safe only if you assume no tiny bug in the code could accidentally leak out 30 bytes or so of raw Dual EC output. If it did, this would make all subsequent seeding calls predictable, and thus render all numbers generated by the system predictable. In general, this would spell doom for the confidentiality of VPN connections.

And unbelievably, amazingly, who coulda thunk it, it appears that such a bug does exist in many versions of ScreenOS, dating to both before and after the "unauthorized code" noted by Juniper. This issue was discovered by Willem Pinckaers and can be illustrated by the following code snippet, which Steve Checkoway decompiled (see full context here):

void prng_generate()
{
  int index; // eax@4
  unsigned int bytes_generated; // ebx@4
  int v2; // eax@9
  int v3; // eax@12
  char v4; // ST04_1@12
  int time[2]; // [sp+10h] [bp-10h]@1

  time[0] = 0;
  time[1] = get_cycles();
  prng_output_index = 0; // this is a global
  ++blocks_generated_since_reseed;
  if ( !do_not_need_to_reseed() )
    // the routine below fills a buffer with raw Dual EC output
    prng_reseed(); // internally sets prng_output_index to 32
  for ( ; (unsigned int)prng_output_index <= 0x1F; prng_output_index += 8 )
  {
    // this loop is supposed to process the same buffer
    // through the ANSI (3DES) generator. However, since the
    // value of prng_output_index was set to 32 above, it never executes
    memcpy(prev_prng_seed, prng_seed, 8);
    memcpy(prev_prng_block, prng_block, 8);
    ANSI_X9_31_generate_block(time, prng_seed, prng_key, prng_block);
    ...


Thus what comes out of this function is 32 bytes of raw Dual EC output, which is all you need to recover the internal Dual EC generator state and predict all future outputs.

So if this was the authorized code, what the hell was the unauthorized code?

The creepiest thing about CVE-2015-7756 is that there doesn't seem to be any unauthorized code. Indeed, what's changed in the modified versions is simply the value of the Q point. According to Ralf this point changed in 2012, presumably to a value that the hacker(s) generated themselves. This would likely have allowed these individuals to passively decrypt ScreenOS VPN sessions.

In the more recent Juniper patch that fixes the vulnerability, the Q point is simply set back to the original Juniper/NetScreen value.

The attacker also replaced some test vectors. But that appears to be it.

To sum up, some hacker or group of hackers noticed an existing backdoor in the Juniper software, which may have been intentional or unintentional -- you be the judge! They then piggybacked on top of it to build a backdoor of their own, something they were able to do because all of the hard work had already been done for them. The end result was a period in which someone -- maybe a foreign government -- was able to decrypt Juniper traffic in the U.S. and around the world.

And all because Juniper had already paved the road.

So why does this matter?

For the past several months I've been running around with various groups of technologists, doing everything I can to convince important people that the sky is falling. Or rather, that the sky will fall if they act on some of the very bad, terrible ideas that are currently bouncing around Washington -- namely, that our encryption systems should come equipped with "backdoors" intended to allow law enforcement and national security agencies to access our communications.

One of the most serious concerns we raise during these meetings is the possibility that encryption backdoors could be subverted. Specifically, that a backdoor intended for law enforcement could somehow become a backdoor for people who we don't trust to read our messages. Normally when we talk about this, we're concerned about failures in storage of things like escrow keys. What this Juniper vulnerability illustrates is that the danger is much broader and more serious than that.

The problem with cryptographic backdoors isn't that they're the only way that an attacker can break into our cryptographic systems. It's merely that they're one of the best. They take care of the hard work, the laying of plumbing and electrical wiring, so attackers can simply walk in and change the drapes.

This post made possible by Ralf Philipp Weinmann, H. D. Moore, Steve Checkoway, Willem Pinckaers, Nadia Heninger, Shaanan Cohney and the letter Q.

Wednesday, November 11, 2015

Why the Tor attack matters

Earlier today, Motherboard posted a court document filed in a prosecution against a Silk Road 2.0 user, indicating that the user had been de-anonymized on the Tor network thanks to research conducted by a "university-based research institute".

Source: Motherboard.
As Motherboard pointed out, the timing of this research lines up with an active attack on the Tor network that was discovered and publicized in July 2014. Moreover, the details of that attack were eerily similar to the abstract of a (withdrawn) BlackHat presentation submitted by two researchers at the CERT division of Carnegie Mellon University (CMU).

A few hours later, the Tor Project made the allegations more explicit, posting a blog entry accusing CMU of accepting $1 million to conduct the attack. A spokesperson for CMU didn't exactly deny the allegations, but demanded better evidence and stated that he wasn't aware of any payment. No doubt we'll learn more in the coming weeks as more documents become public.

You might wonder why this is important. After all, the crimes we're talking about are pretty disturbing. One defendant is accused of possessing child pornography, and if the allegations are true, the other was a staff member on Silk Road 2.0. If CMU really did conduct Tor de-anonymization research for the benefit of the FBI, the people they identified were allegedly not doing the nicest things. It's hard to feel particularly sympathetic.

Except for one small detail: there's no reason to believe that the defendants were the only people affected.

If the details of the attack are as we understand them, a group of academic researchers deliberately took control of a significant portion of the Tor network. Without oversight from the University research board, they exploited a vulnerability in the Tor protocol to conduct a traffic confirmation attack, which allowed them to identify Tor client IP addresses and hidden services. They ran this attack for five months, and potentially de-anonymized thousands of users. Users who depend on Tor to protect them from serious harm.

It's quite possible that the CMU researchers exercised strict protocols to ensure that they didn't accidentally de-anonymize innocent bystanders. This would be standard procedure in any legitimate research involving human subjects, particularly research that has the potential to do harm. If the researchers did take such steps, it would be nice to know about them. CMU hasn't even admitted to the scope of the research project, nor have they published any results, so we just don't know.

While most of the computer science researchers I know are fundamentally ethical people, as a community we have a blind spot when it comes to the ethical issues in our field. There's a view in our community that Institutional Review Boards are for medical researchers, and we've somehow been accidentally caught up in machinery that wasn't meant for us. And I get this -- IRBs are unpleasant to work with. Sometimes the machinery is wrong.

But there's also a view that computer security research can't really hurt people, so there's no real reason for this sort of ethical oversight machinery in the first place. This is dead wrong, and if we want to be taken seriously as a mature field, we need to do something about it.

We may need different machinery, but we need something. That something begins with the understanding that active attacks that affect vulnerable users can be dangerous, and should never be conducted without rigorous oversight -- if they must be conducted at all. It begins with the idea that universities should have uniform procedures for both faculty researchers and quasi-government organizations like CERT, if they live under the same roof. It begins with CERT and CMU explaining what went on with their research, rather than treating it like an embarrassment to be swept under the rug.

Most importantly, it begins with researchers looking beyond their own research practices. So far the response to the Tor news has been a big shrug. It's wonderful that most of our community is responsible. But none of that matters if we look the other way when others in our community fail to act responsibly.

Wednesday, October 21, 2015

A riddle wrapped in a curve

If you’re looking for a nice dose of crypto conspiracy theorizing and want to read a paper by some very knowledgeable cryptographers, I have just the paper for you. Titled “A Riddle Wrapped in an Enigma” by Neal Koblitz and Alfred J. Menezes, it tackles one of the great mysteries of the year 2015. Namely: why did the NSA just freak out and throw its Suite B program down the toilet?

Koblitz and Menezes are noted cryptographers and mathematicians, together responsible for scores of papers, including a recent series of position papers such as “Another Look at Provable Security” (mentioned previously on this blog) and “The Brave New World of Bodacious Assumptions in Cryptography”. (Yes, that is a real paper title.)

Their latest paper is highly accessible and contains almost no mathematics, so it’s worth your time to go read it now. If you don’t, or can’t handle reading things typeset in LaTeX, I’m going to sum up the gist of their arguments as best I can, then talk about why this all matters.

Let’s start with some background.

What is Suite B, and what do you mean by NSA ‘freaking out’?

For most of the past century, the NSA (and its predecessors) have jealously guarded a monopoly on advanced cryptography. This approach worked reasonably well through the 1980s, then rapidly became unsustainable. There were a couple of reasons for this. First, the PC revolution made it impossible for the NSA to keep encryption out of the hands of ordinary users and industry -- as much as the NSA tried to make this so. Second, Congress got fed up with paying for expensive bespoke computer systems, and decided the DoD should buy its computing equipment off the shelf like everyone else.

The result was the Cryptographic Modernization Program, and Suite B — the first public cryptography standard to include non-classified algorithms certified for encrypting Secret and Top Secret data. Suite B was announced publicly in 2005, and was notable for its exclusive reliance on the (then new) field of elliptic curve cryptography (ECC) for public key encryption and key agreement.

At the time of Suite B’s adoption, ECC was relatively new to the non-classified world. Many industry and academic cryptographers didn’t feel it had been reasonably studied (Koblitz and Menezes’ anecdotes on this are priceless). ECC was noteworthy for using dramatically shorter keys than alternative public-key algorithms such as RSA and “classical” Diffie-Hellman, largely because new sub-exponential attacks that worked in those settings did not seem to have any analogue in the ECC world. 

The NSA pushed hard for adoption. Since they had the best mathematicians, and moreover, clearly had the most to lose if things went south, the standards bodies were persuaded to go along. The algorithms of Suite B were standardized and — aside from a few intellectual property concerns — have been gradually adopted even by the non-classified community.

Then, in August of this year, NSA freaked out.

It was a quiet freakout, as you would expect from the world’s largest spy agency. But it made big waves with cryptographers. Citing concerns about future developments in quantum computing, the Agency updated the Suite B website to announce a rapid transition away from Suite B, and to a new set of quantum-resistant algorithms in the coming years.

Why all the sudden and immediate concern about quantum computers? Well, the NSA obviously isn’t willing to talk about that. But as of 2013, leaked slides from the Snowden cache show no real evidence of any massive quantum breakthroughs on the horizon that would necessitate a rapid transition from ECC.

Which quantum-resistant algorithms are we supposed to move to? Good question. Cryptographers haven’t agreed on any yet. Which makes the timing of this announcement so particularly mysterious.

So let me spell this out: despite the fact that quantum computers seem to be a long ways off and reasonable quantum-resistant replacement algorithms are nowhere to be seen, NSA decided to make this announcement publicly and not quietly behind the scenes. Weirder still, if you haven’t yet upgraded to Suite B, you are now being urged not to. In practice, that means some firms will stay with algorithms like RSA rather than transitioning to ECC at all. And RSA is also vulnerable to quantum attacks.

When reporters finally reached the NSA for comment, the only sound on the line was breaking glass and the sound of digging. (No, I’m kidding about that — but NSA did update their site again just to make it clear they weren’t playing some kind of deeply misguided April Fool’s prank.)

So that’s the background of the Koblitz/Menezes paper. The riddle is: what does the NSA know that we don’t?

No quantum advance?

Koblitz and Menezes exhaustively detail the possible reasons for the NSA’s announcement, ranging from the straightforward — that NSA actually has made a huge advance in developing quantum computers — to the downright bizarre. They even moot the possibility that the NSA is so embarrassed about getting caught tampering with national standards that they’ve just decided to flip the table and act like crazy people.

I’m going to skip most of the details, not because they aren't interesting, but rather because I’d like to focus on what I think is the most intriguing hypothesis in the paper. Namely, that the NSA isn’t worried about quantum computers at all, but rather, that they’ve made a major advance in classical cryptanalysis of the elliptic curve discrete logarithm problem — and panic is the result.

Let me lay the groundwork. The security of most EC cryptosystems rests on the presumed intractability of a few basic mathematical problems. The most important of these is called the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem must be supremely hard to solve, or virtually every cryptosystem built on ECC unravels very quickly.

The definition of “hard” is important here. What we mean in the context of ECC is that the best known algorithm for solving the ECDLP requires a number of operations that is fully exponential in the security parameter. In practice, this means we can achieve 128-bit equivalent security using an elliptic curve set in a 256 bit field — which implies similarly-small keys and ciphertexts. To achieve equivalent security with RSA, where sub-exponential algorithms apply, your keys need to be at least 3072 bits (!) long.
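
To make that concrete, here are the textbook cost estimates (standard numbers, nothing specific to the NSA’s announcement). The best known generic attack on a curve group of prime order n, Pollard's rho, costs roughly

    sqrt(pi * n / 4) ≈ 2^{128} group operations when n ≈ 2^{256},

while factoring an RSA modulus N with the general number field sieve costs about

    exp( ((64/9)^{1/3} + o(1)) * (ln N)^{1/3} * (ln ln N)^{2/3} ),

which is sub-exponential in the size of N and only reaches the 2^{128} level once N is several thousand bits long.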

But while the ability to use (relatively) tiny elliptic curve points is wonderful for implementers, it leaves no room for error. If NSA’s mathematicians began to make even modest, but sustained, advances in the state of the art for solving the ECDLP, it would put the entire field at risk — beginning with the smallest of the standard curves, P-256, which would no longer provide the required 128-bit security.

Did I mention that as part of the recent announcement, NSA also deprecated P-256?

Of course, there’s no reason to believe that classical cryptanalysis is at fault here. It’s possible that NSA really has reason to believe that quantum computing is moving faster than it was in 2013. It’s also possible that they’re just sloppy, and didn’t realize that having a very public freakout about cryptography you just promoted for years might not be such a good idea. But that seems doubtful.

You can read the Koblitz and Menezes paper for their take on all of the alternative hypotheses. I want to take a last few paragraphs to cover a slightly tangential issue, one that’s been the subject of a great deal of panic on the part of our community.

Did the NSA really backdoor the NIST elliptic curves????1??1

One of the nicer points of the Koblitz and Menezes paper is that it tackles the argument that the NSA deliberately backdoored the NIST elliptic curves. 

This argument can be found in various places around the Internet. It originates with accomplished cryptographers who are quite rightly applying skepticism to an oddball and ad hoc process. It then heads off into crazytown. Whether people admit it or not, this concern has been hugely influential in the recent adoption of non-NIST alternative curves by the CFRG and IETF.

The accusation being leveled at NSA can be described simply. It’s known that the NIST pseudorandom curves were generated by the NSA hashing “seeds” through SHA1 to generate the curve parameters. This process was supposed to reassure the public, by eliminating the NSA’s ability to tamper with the parameters at will. However, nobody thought to restrict the NSA’s choice of input seeds — leading to a potential attack where they tried many different random seeds until they found a curve that contained a weakness the NSA could later exploit.
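
A sketch of the concern in Python, where looks_exploitable stands in for a hypothetical weakness test that only the attacker knows how to run (the real NIST procedure derives the curve coefficient from the SHA-1 output in a more involved way):

import hashlib, os

def curve_param_from_seed(seed: bytes) -> int:
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")

# Hashing the seed stops anyone from choosing the parameters *directly*,
# but nothing stops them from grinding through seeds until a "good" one appears.
while True:
    seed = os.urandom(20)
    b = curve_param_from_seed(seed)
    if looks_exploitable(b):                 # hypothetical secret predicate
        print("publish this seed:", seed.hex())
        break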

Koblitz and Menezes tackle this argument on grounds of basic common sense, history, and mathematics. The common sense argument, of course, is that you don’t sh*t where you eat. In other words, the NSA wouldn’t deliberately choose weak elliptic curves given that they planned to use them for encrypting Secret and Top Secret data for the next 20 years. 

But common sense convinces nobody. The mathematical argument is a bit more subtle, and may work where self interest didn’t. Roughly speaking, Koblitz and Menezes point out that the NSA, despite their vast computing power, can’t try every possible seed. This is especially true given that they were generating these curves using 1990s equipment, and the process of evaluating (counting points on) an elliptic curve is computationally intensive. Thus, they assume that the NSA could only have tested 2^{48} or so different curves for their hypothetical weakness.

By calculating the number of possible curve families, Koblitz and Menezes show that a vast number of curves (for P-256, around 2^{209} out of roughly 2^{257}) would have to be weak in order for the NSA to succeed in this attack. The implications of such a large class of vulnerable curves would be very bad for the field of ECC. It dwarfs every previously known weak curve class and would call into question the decision to use ECC at all.

In other words, Koblitz and Menezes are saying that if you accept the weak-curve hypothesis into your heart, the solution is not to replace the NIST elliptic curves with different ones -- it's to abandon elliptic curves altogether: leave the building as rapidly as possible, and perhaps not shut the door on the way out. No joke.

On the gripping hand, this sounds very much like the plan NSA is currently implementing. Perhaps we should be worried.

Tuesday, September 8, 2015

Let's talk about iMessage (again)

Yesterday's New York Times carried a story entitled "Apple and other tech companies tangle with U.S. over data access". It's a vague headline that manages to obscure the real thrust of the story, which is that according to reporters at the Times, Apple has not been forced to backdoor their popular encrypted iMessage system. This flies in the face of some rumors to the contrary.

While there's not much new information in here, people on Twitter seem to have some renewed interest in how iMessage works; whether Apple could backdoor it if they wanted to; and whether the courts could force them to. The answers to those questions are respectively: "very well", "absolutely", and "do I look like a national security lawyer?"

So rather than tackle the last one, which nobody seems to know the answer to, I figure it would be informative to talk about the technical issues with iMessage (again). So here we go.

How does iMessage work?

Fundamentally the mantra of iMessage is "keep it simple, stupid". It's not really designed to be an encryption system as much as it is a text message system that happens to include encryption. As such, it's designed to take away most of the painful bits you expect from modern encryption software, and in the process it makes the crypto essentially invisible to the user. Unfortunately, this simplicity comes at some cost to security.

Let's start with the good: Apple's marketing material makes it clear that iMessage encryption is "end-to-end" and that decryption keys never leave the device. This claim is bolstered by their public security documentation as well as outside efforts to reverse-engineer the system. In iMessage, messages are encrypted with a combination of 1280-bit RSA public key encryption and 128-bit AES, and signed with ECDSA under a 256-bit NIST curve. It's honestly kind of ridiculous, but whatever. Let's call it good enough.

iMessage encryption in a nutshell boils down to this: I get your public key, you get my public key, I can send you messages encrypted to you, and you can be sure that they're authentic and really came from me. Everyone's happy.
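
For intuition, here is a minimal sketch of that kind of hybrid encrypt-and-sign flow, written with the Python cryptography library. To be clear, this is not Apple's actual construction -- the real protocol has its own quirks, including those unusual 1280-bit RSA keys -- it just shows the general shape: wrap a fresh AES key under the recipient's RSA key, encrypt the message with AES, and sign the whole thing with the sender's ECDSA key.

    # Generic hybrid encrypt-and-sign sketch -- NOT Apple's actual iMessage
    # construction, just the general "RSA + AES + ECDSA" shape described above.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Keys that would normally come from the key server / local keychain.
    # (iMessage reportedly uses 1280-bit RSA; 2048 bits is used here for convenience.)
    recipient_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    sender_ecdsa = ec.generate_private_key(ec.SECP256R1())

    def send(plaintext: bytes):
        aes_key, nonce = os.urandom(16), os.urandom(16)
        enc = Cipher(algorithms.AES(aes_key), modes.CTR(nonce)).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        wrapped_key = recipient_rsa.public_key().encrypt(aes_key, OAEP)
        sig = sender_ecdsa.sign(wrapped_key + nonce + ct, ec.ECDSA(hashes.SHA256()))
        return wrapped_key, nonce, ct, sig

    def receive(wrapped_key, nonce, ct, sig):
        # Verify the signature first, then unwrap the AES key and decrypt.
        sender_ecdsa.public_key().verify(sig, wrapped_key + nonce + ct,
                                         ec.ECDSA(hashes.SHA256()))
        aes_key = recipient_rsa.decrypt(wrapped_key, OAEP)
        dec = Cipher(algorithms.AES(aes_key), modes.CTR(nonce)).decryptor()
        return dec.update(ct) + dec.finalize()

    assert receive(*send(b"hello")) == b"hello"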

But here's the wrinkle: where do those public keys come from?

Where do you get the keys?


Key request to Apple's server.
It's this detail that exposes the real weakness of iMessage. To make key distribution 'simple', Apple takes responsibility for handing out your friends' public keys. It does this using a proprietary key server that Apple owns and operates. Your iPhone requests keys from Apple over a TLS-encrypted connection that employs some fancy cryptographic tokens. But fundamentally, the system relies on the assumption that Apple is honest -- that it will really give you the right keys for the person you want to talk to.

But this honesty is just an assumption. Since the key lookup is completely invisible to the user, there's nothing that forces Apple to be honest. They could, if so inspired, give you a public key of their own choosing -- one that they hold the decryption key for. They could give you the FBI's key. They could give you Dwayne "The Rock" Johnson's key, though The Rock would presumably be nonplussed by this.

Indeed, it gets worse. Because iMessage is designed to support several devices attached to the same account, each query to the directory server can return many keys -- one for each of your devices. An attacker who can add a device (or a fake 'ghost' device) to your account on Apple's key server will find that senders encrypt messages to that key along with the legitimate ones. This enables wiretapping, provided you can get Apple to help you out.
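
To see why the multi-device design makes this attack so quiet, here is a tiny extension of the sketch above (again with illustrative names, not Apple's API): the sender simply wraps the per-message key for every key the directory returns, so one extra "ghost" entry means one extra ciphertext and nothing else changes from the sender's point of view.

    # Hypothetical fan-out sketch: wrap the per-message key under every device
    # key the directory server returns. Illustrative only -- not Apple's API.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def wrap_for_all_devices(message_key: bytes, device_keys):
        # One wrapped copy of the message key per listed device -- no questions asked.
        return [pk.encrypt(message_key, OAEP) for pk in device_keys]

    # Two legitimate devices plus one "ghost" device registered by an attacker.
    legit = [rsa.generate_private_key(public_exponent=65537, key_size=2048).public_key()
             for _ in range(2)]
    ghost = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    wrapped = wrap_for_all_devices(os.urandom(16), legit + [ghost.public_key()])
    assert len(wrapped) == 3  # the ghost silently gets its own copy of the key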

But why do you need Apple to help you out?

As described, this attack doesn't really require direct collaboration from Apple. In principle, the FBI could just guess the target's email password, or reset the password and add a new device all on their own. Even with a simple subpoena, Apple might be forced to hand over security questions and/or password hashes.

The real difficulty is caused by a final security feature in iMessage: when you add a new device, or modify the devices attached to your account, Apple's key server sends a notification to each of the devices already attached to the account. It's not obvious exactly how this feature is implemented, but one thing seems likely: at least in theory, Apple could shut it off if it needed to.* After all, it all comes down to code running on the key server.

Fixing this problem seems hard. You could lock the key server in a giant cage, then throw away the key. But as long as Apple retains the ability to update their key server software, solving this problem seems fundamentally challenging. (Though not impossible -- I'll come back to this in a moment.)

Can governments force Apple to modify their key server?

It's not clear. While it seems pretty obvious that Apple could in theory substitute keys and thus enable eavesdropping, in practice it may require substantial changes to Apple's code. And while there are a few well-known cases in which the government has forced companies to turn over keys, changing the operation of a working system is a whole different ball of wax.

And iMessage is not just any working system. According to Apple, it handles several billion messages every day, and is fundamental to the operation of millions of iPhones. When you have a deployed system at that scale, the last thing you want to do is mess with it -- particularly if it involves crypto code that may not even be well understood by its creators. There's no amount of money you could pay me to be 'the guy who broke iMessage', even for an hour.

Any way you slice it, it's a risky operation. But for a real answer, you'll have to talk to a lawyer.

Why isn't key substitution a good solution to the 'escrow' debate?

Another perspective on iMessage -- one I've heard from some attorney friends -- is that key server tampering sounds like a pretty good compromise solution to the problem of creating a 'secure golden key' (AKA giving governments access to plaintext).

This view holds that key substitution allows only proactive eavesdropping: the government has to show up with a warrant before they can eavesdrop on a customer. They can't spy on everyone, and they can't go back and read your emails from last month. At the same time, most customers still get true 'end to end' encryption.

I see two problems with this view. First, tampering with the key server fundamentally betrays user trust, and undermines most of the guarantees offered by iMessage. Apple claims that they offer true end-to-end encryption that they can't read -- and that's reasonable in the threat model they've defined for themselves. The minute they start selectively substituting keys, that theory goes out the window. If you can substitute a few keys, why not all of them? In this world, Apple should expect requests from every Tom, Dick and Harry who wants access to plaintext, ranging from divorce lawyers to foreign governments.

A snapshot of my seven (!) currently enrolled iMessage devices, courtesy Frederic Jacobs.
The second, more technical problem is that key substitution is relatively easy to detect. While Apple's protocols are an obfuscated mess, it is at least in theory possible for users to reverse-engineer them to view the raw public keys being transmitted -- and thus determine whether the key server is being honest. While most criminals are not this sophisticated, a few are. And if they aren't sophisticated, then tools can be built to make this relatively easy. (Indeed, people have already built such tools -- see my key registration profile at right.)

Thus key substitution represents at most a temporary solution to the 'government access' problem, and one that's fraught with peril for law enforcement, and probably disastrous for the corporations involved. It might seem tempting to head down this rabbit hole, but it's rabbits all the way down.

What can providers do to prevent key substitution attacks?

Signal's "key fingerprint" screen.
From a technical point of view, there are a number of things that providers can do to harden their key servers. One is to expose 'key fingerprints' to users who care, which would allow them to manually compare the keys they receive with the keys actually registered by other users. This approach is used by OpenWhisperSystems' Signal, as well as PGP. But even I acknowledge that this kind of stinks.
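
For the curious, here is a minimal sketch of the fingerprint idea -- illustrative only, since Signal and PGP each define their own formats: hash the serialized public key and show the user a short digest that both parties can compare out of band.

    # Toy key fingerprint: hash the DER-encoded public key and display a short,
    # human-comparable digest. Not Signal's or PGP's actual format.
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    def fingerprint(public_key) -> str:
        der = public_key.public_bytes(serialization.Encoding.DER,
                                      serialization.PublicFormat.SubjectPublicKeyInfo)
        digest = hashlib.sha256(der).hexdigest()[:32]
        return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

    alice = ec.generate_private_key(ec.SECP256R1()).public_key()
    print(fingerprint(alice))  # both parties read this aloud (or scan it) and compare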

A more user-friendly approach is to deploy a variant of Certificate Transparency, which requires providers to publish a publicly verifiable log of every public key they hand out. This allows each client to check that the key it received is the same one the owner actually registered -- and, by implication, that every other user is seeing the same thing.
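
Here is a toy sketch of the core mechanism behind this kind of scheme: the provider commits to its entire key directory with a Merkle tree, publishes the root, and can give each client a short inclusion proof that the key it received is the one in the published directory. This is a drastic simplification of Certificate Transparency-style logs, not any deployed system's actual format.

    # Toy Merkle-tree key directory with inclusion proofs. A drastic
    # simplification of transparency logs, for illustration only.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_tree(leaves):
        levels = [[h(leaf) for leaf in leaves]]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            if len(prev) % 2:                  # duplicate the last node if odd
                prev = prev + [prev[-1]]
            levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
        return levels                           # levels[-1][0] is the published root

    def inclusion_proof(levels, index):
        proof = []
        for level in levels[:-1]:
            if len(level) % 2:
                level = level + [level[-1]]
            sibling = index ^ 1
            proof.append((level[sibling], sibling < index))
            index //= 2
        return proof

    def verify(leaf, proof, root):
        node = h(leaf)
        for sibling, sibling_is_left in proof:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root

    directory = [b"alice:keyA", b"bob:keyB", b"carol:keyC"]
    levels = build_tree(directory)
    proof = inclusion_proof(levels, 1)          # proof for bob's entry
    assert verify(b"bob:keyB", proof, levels[-1][0])

The important property is that the provider can no longer show you one key list and the rest of the world a different one without publishing inconsistent roots -- which is exactly the kind of misbehavior that auditing and gossip protocols are designed to catch.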

The most complete published variant of this idea is called CONIKS, proposed by a group at Princeton, Stanford and the EFF (one of the more notable authors is Ed Felten, now Deputy U.S. Chief Technology Officer). CONIKS combines key transparency with a 'verification protocol' that allows clients to ensure they aren't being quietly sidelined and fed false information.

CONIKS isn't necessarily the only game in town when it comes to preventing key substitution attacks, but it represents a powerful existence proof that real defenses can be mounted. Even though Apple hasn't chosen to implement CONIKS, the fact that it's out there should be a strong disincentive for law enforcement to rely heavily on this approach.

So what next?

That's the real question. If we believe the New York Times, all is well -- for the moment. But not for the future. In the long term, law enforcement continues to ask for a way to access the plaintext of encrypted messages. And Silicon Valley continues to find new ways to protect the confidentiality of its users' data, against a range of threats that begin in Washington and extend well beyond it.

How this will pan out is anyone's guess. All we can say is that it will be messy.

Notes:

* How they would do this is really a question for Apple. The feature may involve the key server sending an explicit push message to each of the devices, in which case it would be easy to turn this off. Alternatively, the devices may periodically retrieve their own keys to see what Apple's server is sending out to the world, and alert the user when they see a new one. In the latter case, Apple could selectively transmit a doctored version of the key list to the device owner.

Sunday, August 16, 2015

The network is hostile

Yesterday the New York Times and ProPublica posted a lengthy investigation based on leaked NSA documents, outlining the extensive surveillance collaboration between AT&T and the U.S. government. This surveillance includes gems such as AT&T's assistance in tapping the main fiber connection supporting the United Nations, and that's only the start.

The usual Internet suspects are arguing about whether this is actually news. The answer is both yes and no, though I assume the world at large will mostly shrug. After all, we've learned so much about the NSA's operations by now that we're all suffering from revelation fatigue. It would take a lot to shock us now.

But this isn't what I want to talk about. Instead, the effect of this story was to inspire me to look back on the NSA leaks overall, to think about what they've taught us. And more importantly -- what they mean for the design of the Internet and our priorities as security engineers. That's what I'm going to ruminate about below.

The network is hostile

Anyone who has taken a network security class knows that the first rule of Internet security is that there is no Internet security. Indeed, this assumption is baked into the design of the Internet and most packet-switched networks -- systems where unknown third parties are responsible for handling and routing your data. There is no way to ensure that your packets will be routed as you want them, and there's absolutely no way to ensure that they won't be looked at.

Indeed, the implications of this were obvious as far back as the ARPANET. If you connected from point A to point B, it was well understood that your packets would traverse untrusted machines C, D and E in between. In the 1970s, the only thing preserving the privacy of your data was a gentleman's agreement not to peek. If that wasn't good enough, the network engineers argued, you had to provide your own security between the endpoints themselves.

My take from the NSA revelations is that even though this point was 'obvious' and well-known, we've always felt it more intellectually than in our hearts. Even knowing the worst was possible, we still chose to believe that direct peering connections and leased lines from reputable providers like AT&T would make us safe. If nothing else, the NSA leaks have convincingly refuted this assumption.

We don't encrypt nearly enough

The most surprising lesson of the NSA stories is that 20 years after the development of SSL encryption, we're still sending vast amounts of valuable data in the clear.

Even as late as 2014, services like Yahoo Mail were routinely sending client-to-server traffic in cleartext -- meaning that it was vulnerable not just to the NSA, but to everyone on your local wireless network. And web-based connections were the good news. Even if you carefully checked your browser for HTTPS, proprietary extensions and mobile services would happily transmit data such as your contact list in the clear. And if you noticed and shut down all of those weaknesses, it still wasn't enough: tech companies were naively transmitting the same data over vulnerable, unencrypted inter-datacenter links, where the NSA could scoop it up yet again.

There is a view in our community that we're doing much better now, and to some extent we may be. But I'm less optimistic. From an attacker's point of view, the question is not how much we're encrypting, but rather, which valuable scraps we're not protecting. As long as we tolerate the existence of unencrypted protocols and services, the answer is still: way too much.

It's the metadata, stupid

Even if we, by some miracle, manage to achieve 100% encryption of communications content, we still haven't solved the whole problem. Unfortunately, today's protocols still leak a vast amount of useful information via session metadata. And we have no good strategy on the table to defend against it.

Examples of metadata leaked by today's protocols include protocol type, port number, and routing information such as source and destination addresses. It also includes traffic characteristics, session duration, and total communications bandwidth. Traffic analysis remains a particular problem: even knowing the size of the files requested by a TLS-protected browser connection can leak a vast amount of information about the user's browsing habits.
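
As a toy illustration of the traffic-analysis point: with a table of candidate page sizes (the numbers below are invented), an observer often needs nothing more than the ciphertext length to guess which page was fetched.

    # Toy traffic-analysis sketch: the payload is encrypted, but its size isn't
    # hidden, so matching observed lengths against known page sizes is often
    # enough to identify the page. All numbers here are made up.
    OBSERVED_BYTES = 48_312            # what the eavesdropper sees on the wire

    CANDIDATE_PAGES = {                # hypothetical fingerprint database
        "/news/frontpage": 102_447,
        "/health/clinic-locations": 48_310,
        "/contact": 7_902,
    }

    def best_guess(observed, candidates, overhead=64):
        # Pick the candidate whose size (plus rough TLS overhead) is closest.
        return min(candidates, key=lambda p: abs(candidates[p] + overhead - observed))

    print(best_guess(OBSERVED_BYTES, CANDIDATE_PAGES))  # -> /health/clinic-locations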

Absolutely none of this is news to security engineers. The problem is that there's so little we can do about it. Anonymity networks like Tor protect the identity of endpoints in a connection, but they do so at a huge cost in additional bandwidth and latency -- and they offer only limited protection in the face of a motivated global adversary. IPSec tunnels only kick the can to a different set of trusted components that themselves can be subverted.

'Full take' culture

Probably the most eye-opening fact of the intelligence leaks is the sheer volume of data that intelligence agencies are willing to collect. This is most famously exemplified by the U.S. bulk data collection and international call recording programs -- but for network engineers the more worrying incarnation is "full take" Internet collection devices like TEMPORA.

If we restrict our attention purely to the collection of such data -- rather than how it's accessed -- it appears that the limiting factors are almost exclusively technical in nature. In other words, the amount of data collected is simply a function of processing power, bandwidth and storage. And this is bad news for our future.

That's because while meaningful human communication bandwidth (emails, texts, Facebook posts, Snapchats) continues to increase substantially, storage and processing power increase faster. With some filtration, and no ubiquitous encryption, 'full take' is increasingly going to be the rule rather than the exception.

We've seen the future, and it's not American

Even if you're not inclined to view the NSA as an adversary -- and contrary to public perception, that view is not uniform even inside Silicon Valley -- the NSA is hardly the only intelligence agency capable of subverting the global communications network. Nations like China are increasingly gaining market share in telecommunications equipment and services, especially in developing parts of the world such as Africa and the Middle East.

While it's cheap to hold China out as some sort of boogeyman, it's significant that someday a large portion of the world's traffic will flow through networks controlled by governments that are, at least to some extent, hostile to the core values of Western democracies.

If you believe that this is the future, then the answer certainly won't come from legislation or politics. The NSA won't protect us through cyber-retaliation or whatever plan is on the table today. If you're concerned about that future, the answer is to finally, truly believe our own preaching about network trust: we need to learn to build systems today that can survive such an environment. Failing that, we need to adjust to a very different world.