Your vehicle's ignition system is the result of a Darwinian arms race between the people who build cars — and those who like to drive them.
|Vehicle ignition switches through history. From left: vintage floor-mounted ignition button; 60s-era cylinder lock; 90s/2000-era high security key; 2011 dash-mounted ignition button.|
The very first electric vehicle ignition was nothing more than a switch that completed an electrical circuit. This worked fine in small towns and out on the farm, but things weren’t so simple in the big city. So manufacturers adapted, first adding a mechanical lock cylinder, then hardening the wiring. This worked for a while, until inevitably the thieves got smarter. Worse, at this point the answer wasn’t so obvious, and ignition lock technology stagnated. By the late 1980s and early 1990s, vehicle theft was a multi-billion-dollar industry.
A few luxury manufacturers tried to improve the physical security of their locks using high-security keys and non-standard templates. For most manufacturers, however, there was already a more promising approach at hand. Cars themselves were becoming reliant on microcontrollers for engine control. Why not use a digital lock?
The result is the vehicle immobilizer. A typical first-gen immobilizer used a small chip embedded into the head of the car key. This chip had a single purpose: when the driver inserted the key into the lock cylinder, it would squirt out a code (or serial number), which could be received by an antenna in the lock cylinder. If this code matched what the vehicle expected to see, the engine would start. Expressed as a protocol, the transaction looked like this:
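In code, a first-generation transaction amounts to little more than an equality check. This is a hypothetical sketch: the fixed code value and all function names here are illustrative, not taken from any real immobilizer.

```python
# Toy model of a first-generation (fixed-code) immobilizer transaction.
# The stored code and the names below are illustrative only.

STORED_CODE = 0x3A7F19C2  # fixed code programmed into both the key chip and the car


def key_transmit() -> int:
    """The chip in the key head simply emits its fixed code."""
    return STORED_CODE


def car_verify(received_code: int) -> bool:
    """The car compares the received code against its stored copy."""
    return received_code == STORED_CODE


# Every transaction is identical -- the same code crosses the air each time:
assert car_verify(key_transmit())
```

Note that nothing about the exchange changes from one engine start to the next, which is precisely the property the next paragraph's attackers exploit.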
Immobilizers effectively shut down traditional hotwiring and lock-picking. But they had a fatal flaw that criminals soon discovered. Since the code never changed, someone with the right equipment could eavesdrop on the communication (or borrow your keys), and later replay it to the car. This sounds complicated, but quickly became practical thanks to inexpensive devices called “code-grabbers”.
Once again manufacturers adapted. The next generation of immobilizers dropped the fixed code in favor of a simple challenge/response authentication protocol. In this approach, the immobilizer chip and car share a cryptographic key of some sort. When you insert your car key, the car generates a random “challenge” number and sends it to the key. The chip in your car key uses the cryptographic key to compute a response based on the challenge. This defeats code-grabbers, since the key itself never goes over the air, and the challenge changes every time.
|Challenge response protocol between vehicle and immobilizer key. The key and car share a deterministic cryptographic algorithm F() and a secret key. The car computes F(key, challenge) and compares it to the response value.|
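The exchange in the figure can be sketched in a few lines. This is a minimal illustration that uses HMAC-SHA256 as a stand-in for the shared function F(); real immobilizers use proprietary lightweight ciphers, and all names and values here are mine.

```python
# Sketch of a generic challenge/response immobilizer protocol.
# HMAC-SHA256 stands in for the proprietary cipher F(); values are illustrative.
import hashlib
import hmac
import secrets

SHARED_KEY = b"demo-shared-secret"  # programmed into both the key chip and the car


def F(key: bytes, challenge: bytes) -> bytes:
    """The shared deterministic function from the figure."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


def car_start_auth() -> bytes:
    """Car picks a fresh random challenge for every transaction."""
    return secrets.token_bytes(8)


def key_respond(challenge: bytes) -> bytes:
    """Computed inside the key chip; the key itself never goes over the air."""
    return F(SHARED_KEY, challenge)


def car_verify(challenge: bytes, response: bytes) -> bool:
    """Car computes F(key, challenge) itself and compares."""
    return hmac.compare_digest(F(SHARED_KEY, challenge), response)


challenge = car_start_auth()
response = key_respond(challenge)
assert car_verify(challenge, response)
# A grabbed response is useless against the next (different) challenge:
assert not car_verify(car_start_auth(), response)
```

The design point is that eavesdropping yields only (challenge, response) pairs for challenges that will never be asked again.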
40 bits of personal history
|Two DST form factors. The big one is a Mobil Speedpass, which also relies on the DST technology.|
The DST (Texas Instruments’ Digital Signature Transponder) uses exactly the challenge-response protocol I describe above. The reader (car) sends it a 40-bit challenge; the DST encrypts that value with its cipher, truncates the result down to a 24-bit response, and ships it back. The car also holds a copy of the secret key, which it uses to verify the response.
The problem with the DST is not the protocol. Rather, it’s the number I mentioned above: 40. As in 40-bit key length. If an adversary — say, a malicious parking attendant — borrows your car key, she can issue a challenge to the chip. After collecting the response, she can, at her leisure, test every single one of the approximately 1.1 trillion possible Immobilizer keys until she finds one where F(key, challenge) is equal to the response she got from your DST chip.** This sounds hard, but it takes only a few hours on an FPGA.
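To make the brute force concrete, here is a toy model of the cloning attack, scaled down to a 16-bit keyspace so it finishes in well under a second; the real attack has exactly the same structure, just run over 2^40 keys on an FPGA. F() below is a stand-in for the proprietary DST cipher, and every value is illustrative. One wrinkle worth noting: because the 24-bit response can’t uniquely identify a 40-bit key, the attacker collects two challenge/response pairs to rule out chance matches.

```python
# Toy model of the DST cloning attack, scaled down to a 16-bit keyspace.
# F() is an illustrative stand-in for the (proprietary) DST cipher.
import hashlib


def F(key: int, challenge: int) -> int:
    """Stand-in cipher: truncated 24-bit hash of (key, challenge)."""
    data = key.to_bytes(5, "big") + challenge.to_bytes(5, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:3], "big")


SECRET_KEY = 0xBEEF                  # the victim's (toy) immobilizer key
challenges = [0x11111, 0x22222]      # challenges issued by the attacker's reader
pairs = [(c, F(SECRET_KEY, c)) for c in challenges]  # responses from the chip


def brute_force(pairs, keyspace_bits=16):
    """Try every key; two pairs eliminate accidental 24-bit collisions."""
    for candidate in range(1 << keyspace_bits):
        if all(F(candidate, c) == r for c, r in pairs):
            return candidate
    raise ValueError("no key found")


assert brute_force(pairs) == SECRET_KEY  # the chip is now cloned
```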
This process is called “cloning”. It’s not the scariest attack since, in general, it requires the adversary to get your car key, or at least get close enough to scan it.
|DST+ Mutual Authentication protocol. From a presentation in the Fourth Conference on the Advanced Encryption Standard (AES) (2004).|
All’s well that ends well?
A first observation is that the DST+ protocol only protects against challenges sent by an unauthorized reader. If our valet can eavesdrop on the communication between the DST+ and the legitimate reader in the car, he can still obtain a (challenge, response) pair. Since these values are identical to those in the original DST protocol, the same attacks apply. He can use an FPGA to brute force the 40-bit Immobilizer key.
Here’s something else. Once he’s got the car’s Immobilizer key, he can go back and find the Mutual Authentication Key (MAK). Given the challenge sent by the car, along with the 24-bit “additional authentication” string, he can:
- compute I = F(Immobilizer key, challenge),
- use the FPGA to test every single possible MAK value, and
- stop when he finds a MAK value such that F(MAK, I) matches the “additional authentication”.
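The three steps above can be sketched by extending the earlier toy model (same illustrative stand-in for F(), same scaled-down 16-bit keyspace; the protocol structure follows the description above, with the “additional authentication” value computed as F(MAK, F(Immobilizer key, challenge))):

```python
# Toy model of MAK recovery, assuming the Immobilizer key is already known.
# F() is an illustrative stand-in; keys are scaled down to 16 bits.
import hashlib


def F(key: int, data: int) -> int:
    d = key.to_bytes(5, "big") + data.to_bytes(5, "big")
    return int.from_bytes(hashlib.sha256(d).digest()[:3], "big")


IMM_KEY = 0xBEEF   # already recovered via the cloning attack
TRUE_MAK = 0xCAFE  # unknown to the attacker

# Eavesdropped (challenge, additional-authentication) pairs from real sessions:
sessions = [(c, F(TRUE_MAK, F(IMM_KEY, c))) for c in (0x111, 0x222)]


def recover_mak(imm_key, sessions, keyspace_bits=16):
    for candidate in range(1 << keyspace_bits):
        # Step 1: I = F(Immobilizer key, challenge); steps 2-3: test the MAK.
        if all(F(candidate, F(imm_key, c)) == auth for c, auth in sessions):
            return candidate
    raise ValueError("no MAK found")


assert recover_mak(IMM_KEY, sessions) == TRUE_MAK
```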
Into hypothetical territory
Yet imagine… What if a car manufacturer made a tiny mistake? What if, speaking hypothetically, the manufacturer decided to use a single MAK across many different cars — say, every 2009 Toyota Camry? A tiny, harmless optimization.
We know that knowledge of the Immobilizer key makes it easy to find the car’s MAK. But this works the other way, too: if many cars share a MAK, then anyone who knows that value can use it to derive the Immobilizer key of any car in the group.
Even better (or worse, depending on your point of view), our attacker can do this without ever seeing the car key at all. All he needs is a challenge value and an “additional authentication” value, both of which the car will happily give him. The owner can be fast asleep with his keys safe on the nightstand next to him. Meanwhile, our attacker is outside stealing his car.
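In the toy model, the reversed attack looks like this. As before, F(), the 16-bit keyspace, and every value are illustrative stand-ins; the point is only that the same search now runs over Immobilizer keys, with the shared MAK held fixed.

```python
# Toy model of the keyless attack enabled by a reused MAK.
# F() is an illustrative stand-in; keys are scaled down to 16 bits.
import hashlib


def F(key: int, data: int) -> int:
    d = key.to_bytes(5, "big") + data.to_bytes(5, "big")
    return int.from_bytes(hashlib.sha256(d).digest()[:3], "big")


SHARED_MAK = 0xCAFE   # known to the attacker: reused across many cars
CAR_IMM_KEY = 0x1337  # this car's Immobilizer key; the attacker's target

# The car hands these to anyone who initiates a session -- no key required:
sessions = [(c, F(SHARED_MAK, F(CAR_IMM_KEY, c))) for c in (0xAAA, 0xBBB)]


def recover_imm_key(mak, sessions, keyspace_bits=16):
    for candidate in range(1 << keyspace_bits):
        if all(F(mak, F(candidate, c)) == auth for c, auth in sessions):
            return candidate
    raise ValueError("no key found")


assert recover_imm_key(SHARED_MAK, sessions) == CAR_IMM_KEY
```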
So in other words: if you use the DST+ mutual authentication protocol, and make the small mistake of re-using a MAK across multiple vehicles, you’ve transformed a mild key-cloning attack into something much worse. People can now steal your car without ever scanning your key.
Please keep in mind that all of this is hypothetical and speculative. But the re-use of a MAK could happen, and there’s evidence that it may have, at least in the past. What it goes to show is that if you’re not very careful about your goals and security properties, protocols can do unexpected things. They can make you less secure.
Rolling it up
These posts were not intended to be an in-depth tutorial on the mysteries of protocol design and analysis. I do hope to talk about that more in the future. So far we’ve barely scratched the surface of what can go wrong in a cryptographic protocol. And these are certainly not the best examples of “bad” protocols.
Instead, the purpose of this discussion was to provide a couple of case studies involving real protocols whose failure has implications for millions of people. It was also to show you how tiny changes to a protocol can have a significant impact.
In the next few installments of this overview series we’ll look a bit at hardware, physical security, and the kinds of things that can go wrong even when you build the best machines with the best intentions.