*Photo by Flickr user Images_of_Money, used under a CC license.*

You see, data is data. If you can encode it as a file then we can tell you how to encrypt it. There's rarely any reason to develop special domain-specific encryption algorithms, like image encryptors. If you're developing such an algorithm then chances are you're doing it to implement some sort of goofy DRM scheme, rather than to provide serious cryptographic protection.

However, once in a blue moon, you do come across a case where you legitimately *need* an encryption algorithm that works with specific data types.

For example, let's say I want to encrypt my credit card number. In principle this is no problem: card numbers are just numbers, after all.

Assuming that we want a general technique*, I could simply represent my 16-digit VISA card number as an integer between 0 and 9,999,999,999,999,999. In binary this value will require up to 54 bits to store (since lg 10,000,000,000,000,000 = 53.1508495).
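To make the encoding concrete, here's that arithmetic in a couple of lines of Python:

```python
import math

# The largest 16-digit card number, as a plain integer.
max_card = 10**16 - 1

print(math.log2(10**16))      # ~53.15, so we round up...
print(max_card.bit_length())  # ...to 54 bits
```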

In the absolute 'best case' -- assuming that we don't need to pad the message or add an IV (for example, if we're using a stateful stream cipher, like AES-CTR), *and* we don't care about authentication -- we can encrypt this value without expansion. Thus, the ciphertext will be a randomish 54-bit string. Not bad at all.

But here's a question: what if our software can't handle a 54-bit string?

No, this isn't crazy! For example, let's say we need to store that encrypted card number in some legacy retail system that only accepts standard 16-digit decimal card numbers. Or alternatively, we want to transmit it through a normal payment processing network.** For both of these applications, we may need to first translate our ciphertext back into something that *looks like* a credit card number.

And this can be a problem. Certainly, the naive solution doesn't work. If you've properly encrypted the card number, it should now be a uniformly distributed 54-bit string, which encodes to an integer between 0 and 18,014,398,509,481,983. Too many digits!

Moreover, there are applications where you might require additional properties from your encryption scheme. For example, it might be necessary to *search* a database for a particular card number. One way to do this is to use a deterministic encryption scheme, one that's a permutation on the message space. This ensures that a given card number X will *always* encrypt to a unique number Y, which makes searching and comparison really easy.

If we didn't care about the size of the ciphertext, we might be able to get away with encrypting the credit card number as a single block in ECB mode. Block ciphers are permutations, so this would satisfy the requirements just above.***

Unfortunately, this exacerbates our original problem *even further*. The block size of AES is 128 bits; even poor old DES uses 64-bit blocks. If we encrypt in ECB mode, that's how long the resulting ciphertext will be, and we'll never be able to encode something that long back into a 16-digit decimal number.

So what to do?

**Format-Preserving Encryption**

Fortunately there's an answer to these problems, and it goes by the name of "Format-Preserving Encryption", or FPE. As the name implies, the goal of a Format-Preserving Encryption scheme is to securely encrypt while *preserving* the original formatting of the plaintext data. That means, in a nutshell, that our 16-digit card number can encrypt to a 16-digit number, no problem at all.

The seminal work in this area was written by Black and Rogaway back in 2002. Right off the bat, these guys made three very important observations:

- There are all sorts of applications where you need to preserve the format of input data.
- You could design your own cipher to do it, but it would probably suck.****
- It would sure be nice if we could do it securely using *standard* ciphers, like AES.

They then went on to show that you *can* do it with standard ciphers. And better yet, you can do it fairly efficiently.

**Two Problems**

What Black and Rogaway noticed is that there are really two problems here. The first is to build a cipher that works on arbitrary *bit* lengths. The second problem is to use such a cipher to encrypt arbitrary sets, like the set of all people named "Gino" who live in Atlanta. Or, more usefully, the set of integers from 0 to 9,999,999,999,999,999.

Even better, they pointed out, the first problem has already been solved. Various people, starting with Luby and Rackoff, had shown how to use a pseudorandom function or permutation (e.g., AES) to build ciphers of arbitrary block size. This is usually accomplished with a Feistel network construction like the one used in the DES cipher.

The basic idea is illustrated in the diagram below. To build a cipher that operates on 54-bit blocks, for example, we would cut the input block into two 27-bit halves, L0 and R0, and push them through a Feistel network for several rounds. This is an amazingly simple construction; it requires only XOR and some non-linear function F(), which we'll implement using an existing cipher like AES (with truncation of the output).

Note that each call to AES must use a different key, although we can derive these all from one single key using standard techniques. Since AES is much better than the DES S-boxes, we won't need to run the network for 16 rounds. Three or four rounds is probably sufficient.*****
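Here's a minimal sketch of such a cipher, assuming a balanced four-round Feistel network on 54-bit blocks. Truncated SHA-256 stands in for the AES-based round function so the example has no external dependencies, and mixing the round number into the PRF input stands in for the per-round key derivation:

```python
import hashlib

HALF_BITS = 27
MASK = (1 << HALF_BITS) - 1  # 27-bit mask

def F(key: bytes, round_no: int, half: int) -> int:
    """Round function: a PRF truncated to 27 bits. The text suggests
    truncated AES; SHA-256 stands in here to avoid dependencies."""
    data = key + bytes([round_no]) + half.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big") & MASK

def feistel_encrypt(key: bytes, x: int, rounds: int = 4) -> int:
    L, R = (x >> HALF_BITS) & MASK, x & MASK
    for i in range(rounds):
        L, R = R, L ^ F(key, i, R)   # standard Feistel round
    return (L << HALF_BITS) | R

def feistel_decrypt(key: bytes, y: int, rounds: int = 4) -> int:
    L, R = (y >> HALF_BITS) & MASK, y & MASK
    for i in reversed(range(rounds)):
        L, R = R ^ F(key, i, L), L   # undo one round
    return (L << HALF_BITS) | R
```

Decryption simply runs the rounds in reverse order, undoing one swap-and-XOR step at a time; note that F itself never needs to be inverted.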

So now we have a block cipher that operates on 54-bit inputs. Unfortunately, its outputs can still be *too* big.

If we encrypt a card number with such a cipher (in ECB mode), we'll end up with a randomly-distributed 54-bit ciphertext that might not easily translate back into a 16-digit integer. We still need something more.

**Cycling**

Black and Rogaway had a few ideas on how to deal with this. One of their approaches is something called 'cycling', which is so astonishingly straightforward that you'd assume it doesn't work. It does!

Here's how it works. Let's say I want to encrypt my Visa card number ("4532294977918448"). I'll follow these steps:

- First, I'll encode the number as a 54-bit integer, then encrypt it using the fancy 54-bit block cipher I built above.

- At the end of this process I should have 54-bit ciphertext C. Now, there's a very good chance that, represented as an integer, C will be 'small' enough to represent as a 16-digit integer. If so, then I'm done. That's my output!

- But what if C is too large? No problem. If at first you don't succeed, just try again: take that first ciphertext C, and encrypt it *again* using the block cipher to get a *new* C. Go back to step 2.

I'll let you work out the decryption algorithm.
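Here's what the cycle-walking loop looks like in code, including that decryption algorithm. A toy multiply-by-an-odd-constant permutation stands in for the 54-bit Feistel cipher; it's invertible, but of course not remotely secure, so treat it purely as a placeholder:

```python
DOMAIN = 10**16   # valid 16-digit card numbers: 0 .. DOMAIN-1
BITS = 54         # smallest block size covering the domain
MOD = 1 << BITS

# Toy stand-in permutation: multiplying by an odd constant mod 2**54 is
# invertible but NOT a cipher -- substitute the Feistel cipher in practice.
A = (0x9E3779B97F4A7C15 % MOD) | 1
A_INV = pow(A, -1, MOD)

def perm(x: int) -> int:
    return (x * A) % MOD

def perm_inv(y: int) -> int:
    return (y * A_INV) % MOD

def encrypt(m: int) -> int:
    """Cycle-walk: keep applying the permutation until we land in-domain."""
    c = perm(m)
    while c >= DOMAIN:
        c = perm(c)
    return c

def decrypt(c: int) -> int:
    """Walk the cycle backwards until we land in-domain again."""
    m = perm_inv(c)
    while m >= DOMAIN:
        m = perm_inv(m)
    return m
```

Decryption works because every intermediate value in the walk lies outside the valid range; stepping backwards with the inverse permutation therefore stops at exactly the original plaintext.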

Now, the pedantic among you will observe that, technically speaking, this encryption process could put you in a very long loop.****** However, if we assume that the cipher behaves 'randomly', then we can easily get an idea of how likely we are to be satisfied at stage (2). Roughly speaking, there's a 55% probability that we'll get it right each time through the loop (10,000,000,000,000,000 / 2^54). This means that, *on average*, we shouldn't require more than a couple of encryptions before we land on a valid ciphertext.

If you're not convinced, go grab a (slightly biased) coin and start flipping. You're done when you get tails.
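The arithmetic behind that claim, in two lines:

```python
p = 10**16 / 2**54       # chance a random 54-bit value is in range
print(round(p, 3))       # 0.555
print(round(1 / p, 2))   # 1.8 expected encryptions (geometric distribution)
```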

**A note on security**

Cycling is not the only technique that Black and Rogaway propose. They have a couple of others that may be better for your specific application. You can check out the paper if this kind of thing interests you.

Another neat thing about the cycling technique is that it doesn't actually *cost* you anything. This is a very cryptographer kind of thing to say, since it actually does cost you quite a lot --- in terms of time and extra computation. But what it doesn't cost you much of is *security*. You might think that all this repetition would make the cipher somehow weaker. But Black and Rogaway show that it doesn't, at least not in the ideal case.

**In Summary: The real world**

So we know how to deterministically encipher arbitrary numbers. But a more important question is: is any of this a good idea?

Well, sometimes in security, application requirements trump perfection. Obviously it would be better to protect your data using standard, randomized encryption. This deterministic stuff only works well if there's a certain amount of unpredictability in the underlying data, and that's never guaranteed.

For example, your typical credit card number isn't really 16 digits of random data at all. Typically it's composed of a 4-6 digit Bank Identification Number (BIN), followed by a Personal Account Number (PAN), followed by a special checksum digit that's computed deterministically based on the previous digits. The Personal Account number is the really unpredictable bit, and if you use an American Express card (15 digits total), it could be as short as 8 digits.
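That checksum digit is computed with the standard Luhn algorithm. Here's a quick sketch, checked against the made-up card number used earlier in the post:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: from the right, double every second digit,
    subtracting 9 from any result over 9; valid if the total is 0 mod 10."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4532294977918448"))  # True
```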

That means in an extreme case, there might be only about 100 million possible PANs. Still a pretty large number, but if you knew the BIN and could gain access to an encryption oracle that lets you encrypt arbitrary card numbers at a rate of 1,000 per second, you could generate every possible ciphertext in a little over a day. This would let you decrypt *any* encrypted card number, simply by looking it up in a table.
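The back-of-the-envelope arithmetic for that attack:

```python
pans = 10**8           # extreme case: ~100 million possible PANs
rate = 1_000           # assumed oracle queries per second
hours = pans / rate / 3600
print(round(hours, 1))  # 27.8 -- "a little over a day"
```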

Probably this scenario is unrealistic, but it's something to keep in mind if you plan to deploy a system like this.

*Notes:*

* Credit card numbers do have some structure, so in practice you could take advantage of that. For the moment, let's not.

** In practice, this application requires more thinking. CCNs contain routing information called the Bank Identification Number (the first few digits) as well as a meaningful checksum digit (the last one).

*** Though be careful with this: the permutation property only holds if you can encode your data into a single cipher block. If you have to break it up across multiple blocks, it doesn't work anymore.

**** As I've explained elsewhere, I am not being rude. In cryptography 'suck' is a purely technical term meaning "slow, complex, and probably insecure".

***** There are various specific attacks that dictate how many rounds you need to use. Don't take my word here as gospel.

****** Updated per Terence's comment below. However I'm not sure that this is comforting to anyone but a cryptographer :) Also see JoachimV's comment for some updates on the field.

*Comments:*

Huh, I think this might help me with a problem I've been trying to solve (compression rather than cryptography, but strangely similar). This is my new favourite crypto blog, by the way.

Just as a quick note, one of the interesting properties of cycling is that it can't put you in an infinite loop, since the cipher is a permutation. That permutation is composed of cycles, so in the very worst case, you end up right back where you started.

Nice article. There have been quite a few developments since Black & Rogaway's paper in 2002.

1. Bellare, Ristenpart, Rogaway and Stegers wrote a paper on Format-Preserving Encryption: http://eprint.iacr.org/2009/251.pdf

This paper formally defines FPE and security goals for it.

This paper also adds the concept of a tweakable block cipher to the FPE construction to significantly enhance security and resist dictionary attacks when encrypting small domains. Adding other available data as input to the encryption process, aka the "tweak" (such as the BIN, expiration date, etc.), increases the size of the dictionary required to attack the encrypted value by several orders of magnitude.

2. This paper resulted in proposal of an FPE standard at NIST with some very specific constructions including FFX-radix and VAES. NIST has announced the intent to approve these FPE schemes as standards: http://csrc.nist.gov/news_events/index.html#june9b

3. There is also work underway to mirror the NIST efforts and create a Financial Industry Standard for FPE in the ANSI X9 committee.