Dear Apple: Please set iMessage free

Normally I avoid complaining about Apple because (a) there are plenty of other people carrying that flag, and (b) I honestly like Apple and own numerous lovely iProducts. I’m even using one to write this post.

Moreover, from a security point of view, there isn’t that much to complain about. Sure, Apple has a few irritating habits — shipping old, broken versions of libraries in its software, for example. But on the continuum of security crimes this stuff is at best a misdemeanor, maybe a half-step above ‘improper baby naming’. Everyone’s software sucks, news at 11.

There is, however, one thing that drives me absolutely nuts about Apple’s security posture. You see, starting about a year ago Apple began operating one of the most widely deployed encrypted text message services in the history of mankind. So far so good. The problem is that they still won’t properly explain how it works.

And nobody seems to care.

I am, of course, referring to iMessage, which was deployed last year in iOS Version 5. It allows — nay, encourages — users to avoid normal carrier SMS text messages and to route their texts through Apple instead.

Now, this is not a particularly new idea. But iMessage is special for two reasons. First, it’s built into the normal iPhone texting application and turned on by default. When my Mom texts another Apple user, iMessage will automatically route her message over the Internet. She doesn’t have to approve this, and honestly, probably won’t even know the difference.

Second, iMessage claims to bring ‘secure end-to-end encryption’ (and authentication) to text messaging. In principle this is huge! True end-to-end encryption should protect you from eavesdropping even by Apple, who carries your message. Authentication should protect you from spoofing attacks. This stands in contrast to normal SMS, which is often not encrypted at all.

So why am I looking a gift horse in the mouth? iMessage will clearly save you a ton in texting charges and it will secure your messages for free. Some encryption is better than none, right?

Well maybe.

To me, the disconcerting thing about iMessage is how rapidly it’s gone from no deployment to securing billions of text messages for millions of users. And this despite the fact that the full protocol has never been published by Apple or (to my knowledge) vetted by security experts. (Note: if I’m wrong about this, let me know and I’ll eat my words.)

What’s worse is that Apple has been hyping iMessage as a secure protocol; they even propose it as a solution to some serious SMS spoofing bugs. For example:

Apple takes security very seriously. When using iMessage instead of SMS, addresses are verified which protects against these kinds of spoofing attacks. One of the limitations of SMS is that it allows messages to be sent with spoofed addresses to any phone, so we urge customers to be extremely careful if they’re directed to an unknown website or address over SMS.

And this makes me nervous. While iMessage may very well be as secure as Apple makes it out to be, there are plenty of reasons to give the protocol a second look.

For one thing, it’s surprisingly complicated.

iMessage is not just two phones talking to each other with TLS. If this partial reverse-engineering of the protocol (based on the MacOS Mountain Lion Messages client) is for real, then there are lots of moving parts. TLS. Client certificates. Certificate signing requests. New certificates delivered via XML. Oh my.
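To make ‘lots of moving parts’ concrete, here is a rough sketch of just one of them: a client generating a key pair and a certificate signing request to send off for signing. This is my own illustration using Python’s cryptography library, not Apple’s code, and the device identifier is made up.

```python
# Illustrative sketch only: one plausible "moving part" in a client-certificate
# provisioning flow. The device identifier below is invented for the example.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Client side: generate a fresh key pair for this device.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR that binds the device's public key to some identifier.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device-12345")]))
    .sign(device_key, hashes.SHA256())
)

# This blob goes off to the provisioning server, which signs it and returns a
# client certificate. Every such round trip is another place trust gets decided.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```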

As a general rule, lots of moving parts means lots of places for things to go wrong. Things that could seriously reduce the security of the protocol. And as far as I know, nobody’s given this much of a look. It’s surprising.

Moreover, there are some very real questions about what powers Apple has when it comes to iMessage. In principle ‘end-to-end’ encryption should mean that only the endpoint devices can read the messages. In practice this is almost certainly not the case with iMessage. A quick glance at the protocol linked above is enough to tell me that Apple operates as a Certificate Authority for iMessage devices. And as a Certificate Authority, it may be able to substantially undercut the security of the protocol. When would Apple do this? How would it do this? Are we allowed to know?
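To see why CA power matters, here’s a toy sketch. This is my assumption about the trust model, not a description of Apple’s actual implementation: if a client’s only check is ‘was this key certified by the CA?’, then the CA can certify a substitute key and nothing visible to the sender changes.

```python
# Toy model (an assumed trust model, not Apple's code): a CA signs (address, key)
# pairs, and clients accept any key the CA has certified for an address.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw(pub):
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

ca_key = Ed25519PrivateKey.generate()        # stands in for Apple's CA
alice_key = Ed25519PrivateKey.generate()     # Alice's real device key
shadow_key = Ed25519PrivateKey.generate()    # a second key the CA could mint

def ca_certify(address: bytes, pubkey: bytes) -> bytes:
    # The "certificate" here is just the CA's signature over (address, key).
    return ca_key.sign(address + pubkey)

def client_accepts(address: bytes, pubkey: bytes, cert: bytes) -> bool:
    # The only check the sender makes: does this chain back to the CA?
    try:
        ca_key.public_key().verify(cert, address + pubkey)
        return True
    except InvalidSignature:
        return False

alice_pub = raw(alice_key.public_key())
shadow_pub = raw(shadow_key.public_key())

print(client_accepts(b"alice@example.com", alice_pub,
                     ca_certify(b"alice@example.com", alice_pub)))   # True
print(client_accepts(b"alice@example.com", shadow_pub,
                     ca_certify(b"alice@example.com", shadow_pub)))  # also True
```

Whether the real clients do anything smarter than this, like pinning keys or flagging key changes to the user, is exactly the sort of question a published spec would settle.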

Finally, there have been several reports of iMessages going astray and even being delivered to the wrong (or stolen) devices. This stuff may all have a reasonable explanation, but it’s yet another reason why it would be nice to understand iMessage better than we do now if we’re going to go around relying on it.

So what’s my point with all of this?

This is obviously not a technical post. I’m not here to present answers, which is disappointing. If I knew the protocol maybe I’d have some. Maybe I’d even be saying good things about it.

Rather, consider this post as a plea for help. iMessage is important. People use it. We ought to know how secure it is and what risks those people are taking by using it. The best solution would be for Apple to simply release a detailed specification for the protocol — even if they need to hold back a few key details. But if that’s not possible, maybe we in the community should be doing more to find out.

Remember, it’s not just our security at stake. People we know are using these products. It would be awfully nice to know what that means.

iCloud: Who holds the key?

Ars Technica brings us today’s shocking privacy news: ‘Apple holds the master decryption key when it comes to iCloud security, privacy’. Oh my.

The story is definitely worth a read, though it may leave you shaking your head a bit. Ars’s quoted security experts make some good points, but they do it in a strange way — and they propose some awfully questionable fixes.

But maybe I’m too picky. To be honest, I didn’t realize that there was even a question about who controlled the encryption key to iCloud storage. Of course Apple does — for obvious technical reasons that I’ll explain below. You don’t need to parse Apple’s Terms of Service to figure this out, which is the odd path that Ars’s experts have chosen:

In particular, Zdziarski cited particular clauses of iCloud Terms and Conditions that state that Apple can “pre-screen, move, refuse, modify and/or remove Content at any time” if the content is deemed “objectionable” or otherwise in violation of the terms of service. Furthermore, Apple can “access, use, preserve and/or disclose your Account information and Content to law enforcement authorities” whenever required or permitted by law.

Well, fine, but so what — Apple’s lawyers would put stuff like this into their ToS even if they couldn’t access your encrypted content. This is what lawyers do. These phrases don’t prove that Apple can access your encrypted files (although, I remind you, they absolutely can), any more than Apple’s patent application for a 3D LIDAR camera ‘proves’ that you’re going to get one in your iPhone 5.

Without quite realizing what I was doing, I managed to get myself into a long Twitter-argument about all this with the Founder & Editor-in-Chief of Ars, a gentleman named Ken Fisher. I really didn’t mean to criticize the article that much, since it basically arrives at the right conclusions — albeit with a lot of nonsense along the way.

Since there seems to be some interest in this, I suppose it’s worth a few words. This may very well be the least ‘technical’ post I’ve ever written on this blog, so apologies if I’m saying stuff that seems a little obvious. Let’s do it anyway.

The mud puddle test

You don’t have to dig through Apple’s ToS to determine how they store their encryption keys. There’s a much simpler approach that I call the ‘mud puddle test’:

  1. First, drop your device(s) in a mud puddle.
  2. Next, slip in said puddle and crack yourself on the head. When you regain consciousness you’ll be perfectly fine, but won’t for the life of you be able to recall your device passwords or keys.
  3. Now try to get your cloud data back.

Did you succeed? If so, you’re screwed. Or to be a bit less dramatic, I should say: your cloud provider has access to your ‘encrypted’ data, as does the government if they want it, as does any rogue employee who knows their way around your provider’s internal policy checks.

And it goes without saying: so does every random attacker who can guess your recovery information or compromise your provider’s servers.

Now I realize that the mud puddle test doesn’t sound simple, and of course I don’t recommend that anyone literally do this — head injuries are no fun at all. It’s just a thought experiment, or in the extreme case, something you can ‘simulate’ if you’re willing to tell your provider a few white lies.

But you don’t need to simulate it in Apple’s case, because it turns out that iCloud is explicitly designed to survive the mud puddle test. We know this thanks to two iCloud features. These are (1) the ability to ‘restore’ your iCloud backups to a brand new device, using only your iCloud password, and (2) the ‘iForgot’ service, which lets you recover your iCloud password by answering a few personal questions.

Since you can lose your device, the key isn’t hiding there. And since you can forget your password, it isn’t based on that. Ergo, your iCloud data is not encrypted end-to-end, not even using your password as a key (or if it is, then Apple has your password on file, and can recover it from your security questions.) (Update: see Jonathan Zdziarski’s comments at the end of this post.)

You wanna make something of it?

No! It’s perfectly reasonable for a consumer cloud storage provider to design a system that emphasizes recoverability over security. Apple’s customers are far more likely to lose their password/iPhone than they are to be the subject of a National Security Letter or data breach (hopefully, anyway).

Moreover, I doubt your median iPhone user even realizes what they have in the cloud. The iOS ‘Backup’ service doesn’t advertise what it ships to Apple (though there’s every reason to believe that backed up data includes stuff like email, passwords, personal notes, and those naked photos you took.) But if people don’t think about what they have to lose, they don’t ask to secure it. And if they don’t ask, they’re not going to receive.

My only issue is that we have to have this discussion in the first place. That is, I wish that companies like Apple could just come right out and warn their users: ‘We have access to all your data, we do bulk-encrypt it, but it’s still available to us and to law enforcement whenever necessary’. Instead we have to reverse-engineer it by inference, or by parsing through Apple’s ToS. That shouldn’t be necessary.

But can’t we fix this with Public-Key Encryption/Quantum Cryptography/ESP/Magical Unicorns?

No, you really can’t. And this is where the Ars Technica experts go a little off the rails. Their proposed solution is to use public-key encryption to make things better. Now this is actually a great solution, and I have no objections to it. It just won’t make things better.

To be fair, let’s hear it in their own words:

First, cloud services should use asymmetric public key encryption. “With asymmetric encryption, the privacy and identity of each individual user” is better protected, Gulri said, because it uses one “public” key to encrypt data before being sent to the server, and uses another, “private” key to decrypt data pulled from the server. Assuming no one but the end user has access to that private key, then no one but the user—not Apple, not Google, not the government, and not hackers—could decrypt and see the data.

I’ve added the boldface because it’s kind of an important assumption.

To make a long story short, there are two types of encryption scheme. Symmetric encryption algorithms have a single secret key that is used for both encryption and decryption. The key can be generated randomly, or it can be derived from a password. What matters is that if you’re sending data to someone else, then both you and the receiver need to share the same key.
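As a minimal illustration of the symmetric case (a generic AES-GCM sketch using Python’s cryptography library, nothing iCloud-specific):

```python
# Symmetric encryption sketch: one secret key does both jobs, so sender and
# receiver (or device and cloud) must share that same key somehow.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # random; could also be derived from a password
nonce = os.urandom(12)                      # must never repeat for a given key

ciphertext = AESGCM(key).encrypt(nonce, b"meet me at noon", None)
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)   # needs the exact same key
```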

Asymmetric, or public-key, encryption has two keys: a ‘public’ key for encryption and a secret key for decryption. This makes it much easier to send encrypted data to another person, since you only need their public key, and that isn’t sensitive at all.

But here’s the thing: the difference between these approaches is only related to how you encrypt the data. If you plan to decrypt the data — that is, if you ever plan to use it — you still need a secret key. And that secret key is secret, even if you’re using a public-key encryption scheme.
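Here’s the same point in miniature (a generic RSA-OAEP sketch, not a claim about what Apple or anyone else deploys): encryption needs only the public key, but the moment you want your data back, a private key has to live somewhere.

```python
# Asymmetric encryption sketch: the public key encrypts, but decryption still
# requires a secret (private) key that somebody has to hold and protect.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()        # safe to hand out to anyone

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"meet me at noon", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)   # no private key, no data
```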

Which brings us to the real problem with all encrypted storage schemes: someone needs to hold the secret decryption key. Apple has made the decision that consumers are not in the best position to do this. If they were willing to allow consumers to hold their decryption keys, it wouldn’t really matter whether they were using symmetric or public-key encryption.

So what is the alternative?

Well, for a consumer-focused system, maybe there really isn’t one. Ultimately people back up their data because they’re afraid of losing their devices, which cuts against the idea of storing encryption keys inside of devices.

You could take the PGP approach and back up your decryption keys to some other location (your PC, for example, or a USB stick). But this hasn’t proven extremely popular with the general public, because it’s awkward — and sometimes insecure.

Alternatively, you could use a password to derive the encryption/decryption keys. This approach works fine if your users pick decent passwords (although they mostly won’t), and if they promise not to forget them. But of course, the convenience of Apple’s “iForgot” service indicates that Apple isn’t banking on users remembering their passwords. So that’s probably out too.
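For what it’s worth, the password-derived approach is easy to sketch (a generic PBKDF2 example, not Apple’s scheme): forget the password and the key is gone for good; pick a weak one and the key is guessable.

```python
# Password-derived key sketch: the key exists only as a function of the
# password (plus a salt), so there is nothing for the provider to recover.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

salt = os.urandom(16)                        # stored alongside the ciphertext
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                 iterations=600_000)         # deliberately slow to hinder guessing
key = kdf.derive(b"correct horse battery staple")   # 32-byte key for, say, AES-GCM
```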

In the long run, the answer for non-technical users is probably just to hope that Apple takes good care of your data, and to hope you’re never implicated in a crime. Otherwise you’re mostly out of luck. If you’re a tech-savvy user, don’t use iCloud; try to find a service that’s willing to take its chances on you as the manager of your own keys.

In summary

I haven’t said anything in this post that you couldn’t find in Chapter 1 of an ‘Intro to Computer Security’ textbook, or a high-level article on Wikipedia. But these are important issues, and there seems to be confusion about them.

The problem is that the general tech-using public seems to think that cryptography is a magical elixir that can fix all problems. Companies — sometimes quite innocently — market ‘encryption’ to convince people that they’re secure, when in fact they’re really not. Sooner or later people will figure this out and things will change, or they won’t and things won’t. Either way it’ll be an interesting ride.

Update 4/4: Jonathan Zdziarski tweets to say my ‘mud puddle’ theory is busted: since the iForgot service requires you to provide your birthdate and answer a ‘security question’, he points out that this data could be used as an alternative password, which could encrypt your iCloud password/keys — protecting them even from Apple itself.

The problem with his theory is that security answers don’t really make very good keys, since (for most users) they’re not that unpredictable. Apple could brute-force its way through every likely “hometown” or “favorite sport” in a few seconds. Zdziarski suggests that Apple might employ a deliberately slow key derivation function to make these attacks less feasible, and I suppose I agree with him in theory. But only in theory. Neither Zdziarski nor I actually believe that Apple does any of this.
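A back-of-the-envelope sketch of the problem (purely illustrative numbers and answers, nothing measured against Apple’s systems): when the space of likely answers is tiny, even a salted, deliberately slow key derivation only slows the guesser down by a constant factor.

```python
# Brute-forcing a key derived from a low-entropy "security answer".
import hashlib

likely_hometowns = ["springfield", "columbus", "portland", "austin"]  # tiny, made-up list

def derive_key(answer: str, salt: bytes, iterations: int) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", answer.encode(), salt, iterations)

salt = b"per-user-salt"                              # assume the provider stores this
stored_key = derive_key("portland", salt, 100_000)   # the user's real answer

# Whoever holds the salt and the derived key just walks the list of guesses:
recovered = next(a for a in likely_hometowns
                 if derive_key(a, salt, 100_000) == stored_key)
print(recovered)   # "portland"
```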