*This is part 3 of a series on the Random Oracle Model. See here for the previous posts:*

*Part 1: An introduction*

*Part 2: The ROM formalized, a scheme and a proof sketch*

Photo by Flickr user greeblie, used under a CC license.

Please bear with me. We're going to get back to some solid, *practical* crypto with the next post in the series.

**A quick recap**

In the last two posts we've covered a bit of ground. We talked about cryptographic hash functions and their properties, and discussed why the cryptographer's 'ideal' hash function would be a random function. We noted that random functions would never work in real life (too big to store, too slow to compute), so we came to a strange resolution: we would *pretend* that our hashes were random functions, just for the purposes of our security analysis. Then of course we would use real hash functions like SHA.

Of course, even this required us to tamper with the fabric of the universe, since we want to restrict our 'adversaries' to running in a reasonable amount of time, and they can't do that if they're busy evaluating random functions. So we created a new *model of computation* (aka 'thought experiment') called the Random Oracle Model. In this model all hashing is computed by a magical third party called an *oracle*.

If you know a little about the random oracle model, say from reading the previous posts, you could be forgiven for thinking that it isn't all *that* bad. Basically we're assuming that hash functions are super-duper "random". Maybe a little more so than a real function can be, but big deal.

But unfortunately it's much more than that.

**The world's shortest lesson on provable security**

To explain what I mean, we first need to understand a little about provable security, and particularly, a type of proof called a *reduction proof*.

When we say that an encryption scheme is "provably secure", we usually mean that it's secure *under some assumption*. For most practical schemes the assumption is mathematical in nature. For example, we might found an encryption scheme on the assumption that it's hard to factor large composite numbers, or to find the shortest vector in a lattice. In this case 'hard' doesn't mean 'I can't figure out how to do it'. It means: *there exists no (efficient) algorithm that can do it*.^{1}

Now, I mentioned that these are *assumptions*. We don't absolutely know that these problems are hard. However, we like to think that if some bright people have looked at a problem for a while, and nobody has made any progress towards solving it, then it's probably not easy. In some cases the problem may even fall into a useful class of problems, *i.e.*, it may be NP-Hard.

Even if you're not willing to blindly trust in these assumptions, security proofs are *still* useful. Analyzing a new cryptosystem can soak up huge amounts of time and energy. If we can at least 'reduce' each new scheme to one of a small number of mathematical problems, we can pour our brainpower into attacking just those problems, knowing that our work will apply to *every* scheme that uses them.

All of this brings me to my main point, namely: *how do these proofs work?* Getting specific, the general flow of such a proof is as follows:

1. Assume that problem X is hard.^{1}
2. Show that *if there exists* an (efficient) algorithm that "breaks" our cryptographic scheme, *then there exists* an (efficient) algorithm that solves problem X.
3. Snottily point out that (2) means if the scheme *can* be broken, then problem X *can't be* hard.

Clearly point (3) cannot be true if point (1) is. Following the logic, this allows us to conclude that *if Problem X is hard*, then there *can't* be an algorithm that breaks our encryption scheme.
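To make this flow concrete, here's a toy sketch in Python. The "problem X" is the RSA problem discussed later in this post, and the "breaker" is a stand-in that secretly knows the factors of N; all names and parameters are illustrative, and the only point is that the solver calls the breaker without ever looking inside it.

```python
# A toy sketch of the reduction pattern. "Problem X" here is the RSA
# problem (given N, e, M, find S with S^e mod N = M), and the "scheme"
# being broken is textbook RSA. Parameters are toy-sized.

def make_breaker(p, q, e):
    # A stand-in breaker that cheats by knowing the factors of N.
    # The solver below never peeks inside this closure: it's a black box.
    d = pow(e, -1, (p - 1) * (q - 1))
    return lambda challenge: pow(challenge, d, p * q)

def solve_rsa_problem(N, e, M, breaker):
    # Step 1: embed the problem instance as a scheme input. Here the
    # RSA instance already *is* a valid challenge, so this is trivial.
    challenge = M
    # Step 2: call the breaker purely through its API.
    S = breaker(challenge)
    # Step 3: translate its answer back into a solution (trivial here).
    return S

p, q, e = 61, 53, 17
N = p * q
M = 1234
S = solve_rsa_problem(N, e, M, make_breaker(p, q, e))
assert pow(S, e, N) == M   # problem X solved, via the black box
```

In a real reduction, steps 1 and 3 are where all the cleverness lives; this sketch only shows the black-box shape of the argument.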
Of course I haven't actually explained how to accomplish step (2), which is pretty important. It's also the beautiful part. You see, in most reductions, step (2) consists of *actually writing* an efficient algorithm that solves problem X.

You probably think I'm crazy. I just claimed that problem X can't be efficiently solved, and now I'm telling you that we write an algorithm for solving it. But the genius of it is that our solver algorithm won't quite work. It's missing an important subroutine.

That subroutine is *identical* -- at the API level -- to an algorithm that 'breaks' our encryption scheme.

So here is how the argument breaks down: *if there was*, hypothetically, an algorithm that efficiently 'broke' our encryption scheme, then we could plug it into the hole we left in our 'solver' algorithm. This would yield an algorithm that efficiently solves problem X. Hence, it completely satisfies what we're asking for in point (2).

**Black Boxes**

With this foundation laid, I only need to make one more point. There's a basic restriction that standard reduction proofs must obey: our 'Problem X solver' algorithm must use the 'encryption scheme breaker' subroutine as a *black box*. It cannot make assumptions about how the breaker algorithm works internally.

Let me be more clear about this. We cannot, for example, have a step in our 'solver' algorithm where we decompile the breaker algorithm, or peek inside its internal memory registers, and then assume that they'll contain a partial result we need.

This is because our reduction only works if it works with *every possible algorithm that breaks the scheme*. Since we can't guess a priori how such an algorithm might work internally, we can't depend on its internal workings. The breaker algorithm could hypothetically be horrifically obfuscated, to the point where we couldn't understand it *even if we held the code in our hands*.

Thus in a traditional black box reduction, we can count on the breaker algorithm having a standard "API" (which we'll define at some point). This typically takes in public keys and/or ciphertexts (or signatures, etc.) It outputs some useful information that constitutes our definition of 'breaking'. And though it doesn't always need to succeed, we expect that it does so with reasonable probability. But everything that happens inside the algorithm must be hidden from us.

**Dirty laundry**

You wouldn't think this has anything to do with provable security.

Imagine you're a muckraking celebrity reporter who's been assigned to get a scoop on Brad Pitt and Angelina Jolie's latest adopted baby. The Pitt/Jolies are extremely private, and won't even acknowledge that they're adopting. You're a clever reporter, and more importantly, you have no morals.

You know that if you could just get a peek at the family laundry, you'd have your whole story. Sex, size, weight, favorite color. Even diet (Angelina insists on eco-safe cloth diapers). In a perfect world you could drive over to the Pitt/Jolies' laundry company, slip someone $500, and lo -- your whole story would be there in the clothes. If you were *really* devious, you could start your own celebrity laundry service and trick them into sending it right to you.

But this is all wishful thinking. Pitt and Jolie don't send out their laundry -- they have it done at home. In truth, you're not sure that they even *do* laundry. You heard a rumor that Angelina has her clothes ritually burned after each use.

In case you don't see the relevance, this is an analogy to reduction proofs. To you, the 'breaker' algorithm is a black box. You can see inputs go in, and see what comes out, but you can't see what it's doing in between. If your reduction depends on peeking at its dirty laundry, you'd better come up with a better reduction. It does its own laundry.

**Back to the Random Oracle Model**

The Random Oracle Model is different. In my previous post I explained that it's just like the real world, with one exception: nobody can compute hashes by themselves. Every party -- including the adversary (synonymous with 'breaker' algorithm) -- *must* hash by sending messages to a special outside party. This party computes the hashes using a random function, and sends them back to the caller.

To make this abundantly clear: in the Random Oracle Model, *everyone sends out their laundry*.

The Random Oracle Model. The oracle computes hashes for all parties, including the adversary.

This includes the 'breaker' algorithm, which *must* call outside of itself if it ever wants to hash something.

If we're building a reduction, that means we can take advantage of the 'extra' information that leaks from these calls, just as our gossip reporter could take advantage of celebrities' laundry. We know that the algorithm has to make these hash calls, so there must exist a 'port' in the API for it to make them. Once we make this stipulation, our reduction can tap into this port and *intercept* the adversary's hash calls to see which messages it's asking for, and possibly even tamper with the results it gets back.

In an extreme case, our reduction can even run its own fake 'oracle' (aka laundry service), as long as we always hand back convincing responses.

And while this may not seem like a big deal, it really is. Just as Brangelina's dirty laundry could reveal major details about the stars' private life, the choice of which messages a 'breaker' algorithm asks to hash can reveal major details about what it's thinking internally. By tampering with those responses we can also fundamentally change its operation. These details can be the difference between a reduction proof that *works*, and one that doesn't.

**Taking Stock**

At this point, I think it's worth taking a breath and asking what we've gotten ourselves into here. When we started with this nonsense, all we wanted was a better hash function. One that looked slightly more 'random' than any real function we could construct.

Along the way we realized that random functions were totally impractical. Even to *pretend* we could use them, we had to tamper with our computational model and "pull the random function" outside of the parties. After all, they run in polynomial time and couldn't possibly evaluate such a function.

Each of these steps seemed perfectly reasonable, and yet somewhere we jumped the shark. We took the most sacrosanct principle of reductionist security and violated it. Instead of "our argument works with any adversary", we now say "our argument works only with adversaries who kindly tell us whatever and whenever they need to hash".

Real adversaries don't work like this. If someone comes up with an algorithm that breaks an implementation of RSA-OAEP that uses a *real* hash function (say, SHA256), that algorithm won't need to tell us what it's hashing. It will just do it internally. And if its code is obfuscated, we won't even be able to connect it to our reduction, if our reduction depends on knowing (or tampering with) the hash calls.

**An Example: RSA Signing**

At this point I've run out of ways to talk about the Random Oracle Model without actually invoking a real scheme. So let's bring this back to something concrete.

The RSA scheme is probably the most famous cryptosystem of the modern age. Many people mistakenly believe that RSA is based on the hardness of factoring large integers. In fact, that's not quite accurate. RSA is actually based on a slightly different, but related assumption not-creatively named the "RSA assumption".

This assumption holds that the following problem is fundamentally hard, at least when the numbers get sufficiently large.^{1} Given three values (N, e, M), where:

- N is the product of two prime numbers (p, q) that you don't know,
- 2 < e < N and gcd(e, (p-1)(q-1)) = 1, and
- 0 <= M < N,

find a value S such that S^e *mod* N = M.^{2}

It's conjectured that this problem is hard if you only have the values given above. However, if you *know* (p, q), or an additional 'secret key' derived from those values, it's quite easy to solve.

The RSA signature scheme depends on this. In a textbook RSA signature, you "sign" a message by first encoding it as an integer m, such that 0 <= m < N. Then, using your secret key, you compute a 'signature' "s", such that s^e *mod* N = m. (See here for the exact algorithm.)
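A minimal sketch of textbook RSA signing in Python, with toy parameters far too small for real use:

```python
# Textbook RSA signatures with toy parameters (never use sizes like this).
p, q, e = 61, 53, 17
N = p * q                              # public modulus
d = pow(e, -1, (p - 1) * (q - 1))      # secret signing exponent

def sign(m):
    # Requires the secret key: s = m^d mod N, so that s^e mod N = m.
    return pow(m, d, N)

def verify(m, s):
    # Needs only the public key (N, e).
    return pow(s, e, N) == m

s = sign(42)
assert verify(42, s)
assert not verify(43, s)
```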

The intuitive argument for RSA signatures rests on three legs:

- Since only you know the secret key, it's easy for you to "sign" messages of your choice.
- Anybody else can *verify* your signatures given (m, s) and the "public key" (N, e).
- Intuitively, nobody else can *sign* messages on your behalf (this is known as 'forging'), because doing so without the secret key would be equivalent to solving the RSA problem.

If you look carefully, what the RSA assumption *actually* implies is that it's hard to come up with a "signature" (S) on a *particular* message (M). If the adversary has the freedom to choose the message himself, then it's easy for him to come up with a whole bunch of 'signed messages'. This makes textbook RSA signatures pretty useless.^{3}
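The attack from footnote 3 takes only a couple of lines: pick an arbitrary "signature", then use only the public key to work backwards to a message it validly signs.

```python
# Existential forgery against *textbook* RSA, using only the public key.
N, e = 3233, 17           # toy public key; no secret key anywhere below
s = 99                    # pick any "signature" we like
m = pow(s, e, N)          # work backwards to the message it "signs"
assert pow(s, e, N) == m  # (m, s) passes the verification equation
```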

More importantly, to prove that RSA signatures are secure under the RSA *assumption*, we can't just have the adversary spitting out any old message/signature pair. To make a reduction work, we must use the adversary to solve a *specific* RSA problem instance (N, e, M). That means *whatever the adversary outputs*, we must be able to use that value in order to compute an "S" that satisfies S^e *mod* N = M.

And now here, finally, is where the hash functions come through for us.

**Adding a Hash**

I've mentioned previously that most signature schemes don't literally sign a raw message. Rather, they sign *the hash of* a message. This is convenient, since messages can be large and hashes are usually small. But in the Random Oracle Model it has a special significance.

Let's say that we have a hash function H() with the odd property that it outputs an integer between 0 and N-1. This is called a "full domain hash". This is a little unusual for a hash function, but I promise that such things can be built out of more "traditional" hash functions like SHA.

Instead of signing the message directly, we could have our signer first *hash* the message (which can now be an arbitrary string), and sign the resulting hash. More formally, a signature "s" must now satisfy:

s^e *mod* N = H(message)

Can we prove that this scheme is secure under the RSA assumption?
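Here's a naive sketch of the hash-then-sign scheme in Python. The "full domain hash" below just reduces SHA-256 output mod N, which is noticeably biased for a toy N and isn't how careful FDH constructions work; it's only meant to show the shape of the scheme.

```python
import hashlib

p, q, e = 61, 53, 17
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))      # secret signing exponent

def H(message: bytes) -> int:
    # Naive "full domain hash": map an arbitrary string into [0, N).
    # (Biased for small N; real constructions are more careful.)
    digest = hashlib.sha256(message).digest()
    return int.from_bytes(digest, 'big') % N

def sign(message: bytes) -> int:
    return pow(H(message), d, N)        # s^e mod N = H(message)

def verify(message: bytes, s: int) -> bool:
    return pow(s, e, N) == H(message)

s = sign(b"messages can now be arbitrary strings")
assert verify(b"messages can now be arbitrary strings", s)
```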

**A proof sketch**

Imagine that A is some algorithm that breaks the signature scheme, meaning that it takes in a public key (N, e), and eventually outputs an "existential forgery", i.e., *any* pair (message, s') that satisfies the verification equation above.

Our goal is to design a solving algorithm that can take in an arbitrary RSA problem instance (N, e, M) and somehow force A to help us solve it, so that we eventually spit out a value S where S^e *mod* N = M.

First, our solver will send A a "public key" consisting of (N, e). Since we're in the Random Oracle Model, A will eventually have to hash something. Whenever A asks to hash a message, we'll intercept its attempt to call the oracle, pick a *random* value 0 <= r < N, and answer its query as follows:

H(message) = M * r^e *mod* N

I will claim that since "r" is random, then our "hash" result is also randomly-distributed, and thus "looks right" to A. So he won't twig to what we're doing. Most importantly, we've managed to tie every single hash value back to the one value "M" that we took from the RSA problem instance. Of course, we'll also store each value "r" so we can retrieve it later.

We assumed that eventually A will output a valid "forged" message/signature pair of the form (message, s'), where this pair satisfies s'^e *mod* N = H(message). If this is the case, then all we have to do is look up the "r" value used to compute H(message) and output:

S = s' / r *mod* N

*(or more accurately,* s' * inverse(r) mod N*)*

This is the answer to our RSA problem instance, since S^e mod N = M.^{4, 5}

^{4},

^{5}

Why does this work? Well, let's simplify the equation "s'^e mod N = H(message)" a bit:

s'^e *mod* N = H(message)

s'^e *mod* N = M * r^e *mod* N *(this is the hash we computed)*

s' = M^{1/e} * r *mod* N *(raising both sides to the inverse of e)*

s' = S * r *mod* N *(because S = M^{1/e})*
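The whole reduction can be run end-to-end in a toy Python simulation. The stand-in forger below cheats by knowing the factorization -- a real reduction assumes nothing about how A forges -- but the only property the reduction relies on is that A gets all its hashes from *our* rigged oracle, exactly as the ROM demands.

```python
import math
import secrets

# A toy end-to-end run of the reduction for full-domain-hash RSA.
p, q = 61, 53                 # known only to the toy forger below
N, e = p * q, 17
M = 1234                      # our RSA instance: find S with S^e mod N = M

r_table = {}                  # message -> the r we used for its hash

def rand_unit():
    # Pick r invertible mod N. (A real reduction would treat a
    # gcd > 1 as a lucky factoring of N.)
    while True:
        r = secrets.randbelow(N - 1) + 1
        if math.gcd(r, N) == 1:
            return r

def oracle(message):
    # Our rigged 'random oracle': every answer has the form M * r^e,
    # which still looks uniformly random to the caller.
    if message not in r_table:
        r_table[message] = rand_unit()
    return (M * pow(r_table[message], e, N)) % N

def forger():
    # Stand-in adversary A. It cheats by knowing (p, q), but crucially
    # it hashes through oracle(), as the ROM requires.
    d = pow(e, -1, (p - 1) * (q - 1))
    msg = b"forged message"
    return msg, pow(oracle(msg), d, N)   # (message, s') with s'^e = H(msg)

msg, s_prime = forger()
r = r_table[msg]                         # look up the stored r
S = (s_prime * pow(r, -1, N)) % N        # S = s' * inverse(r) mod N
assert pow(S, e, N) == M                 # the RSA instance is solved
```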

A full version of this proof can be found here. None of it works in the real world, of course. The minute you sub in a *real* hash function, this whole proof is bunk.

**A few notes and a conclusion**

This has been a *long* post; even my footnotes are getting out of control. I do hope I was able to make my main point, which is that the Random Oracle Model is both deeply unnatural, and deeply useful to cryptographers writing security proofs.

This is a dangerous combination, since it allows us to take shortcuts and make assumptions that simply can't be supported in the real world we live in. And yet, it's so powerful that we can find it hard to stop.

More to the point, even when *cryptographers* stop relying on this crutch (and it's started to happen), we have a hard time convincing *end users* to get rid of schemes that have been proven in the ROM. Heck, we have a hard time convincing our friends in the systems security community. They think we're nuts.

And it's hard to blame them. Even when we have proper replacements, they're usually funny looking, almost always more "costly", and sometimes rest on bizarre new mathematical assumptions that people don't trust. Plus, it's not like the random oracle schemes are *broken*.

In my next post I'm going to talk about all this, pull back from the crypto a bit, and focus on this dispute. I'll also talk a bit about the state of the art in ROM/non-ROM schemes.

And oh yes, I'll explain why cryptographers have started to get religion on this point. Hint: someone proved something. Something very bad for the ROM.

*Click here for the next post in this series.*

*Notes:*

1. *More formally, saying that a problem is "hard" means that all probabilistic polynomial time Turing machines solve it with at most negligible probability.*

2. For a quick explanation of modular arithmetic, see here.

3. For example, he can pick an arbitrary number as "S", then -- using only the public key -- work backwards to compute "M" as M = S^e mod N. This pair (M, S) is a valid message/signature pair. Similarly, he can mangle legitimate message/signature pairs in various ways, to get new ones.

4. In practice, there's no division *mod* N. Instead, we compute the inverse of "r" mod N, then multiply by that number.

5. In a real proof, we would also consider the possibility that the adversary can ask for and obtain signatures on *chosen messages*. Our reduction would have to answer these requests. This can be done, but it adds a bit of complexity. See the full proof for some details on this process.

**Comments**

"The minute you sub in a real hash function, this whole proof is bunk." Yup, a good example of that is when researchers exploited weaknesses in MD5 to create rogue certificates: http://www.win.tue.nl/hashclash/rogue-ca/

I just wanted to say that I'm really enjoying your series on the random oracle model. I don't mind the length at all, and I'm looking forward to reading more about it. So thanks!

I don't think you understand what Dr. Green is saying. The weakness in the ROM has nothing to do with the strength or weakness of the hash function used to simulate it; thus, your MD5 example is irrelevant. You could have a perfect hash function that is completely one-way and (unlike MD5) collision-proof; it's still not a random oracle.

The important point is in the step

>> H(message) = M * r^e mod N

Using a random oracle, you can assume that such a message exists. Using a hash function -- even one that can't be broken -- there is no guarantee that such a message exists. Indeed, for a given triple (N, e, M), it may be theoretically possible to construct a hash function H() such that for any r, M * r^e mod N is *never* a valid hash.

Thanks to this post, I'm finally beginning to grasp the very concept of the Random Oracle Model.