Monday, August 27, 2012

Reposted: A cryptanalysis of HDCP v2.1

Update 8/27: This post was originally published three weeks ago under a different title. I subsequently took it down to give affected vendors time to patch the bugs. As a result of the notification, Digital Content Protection LLC (DCP) has updated the spec to v2.2. 

Contrary to my understanding when I wrote the original post, HDCP v2 actually is used by a number of devices.

I would like to give credit to Alon Ziv at Discretix, who had previously discovered the Locality Check issue, and to Martin Kaiser, who experimentally verified the master secret issue on a European Samsung TV and a Galaxy S II.

Finally, I would like to thank Hanni Fakhoury and Marcia Hofmann at the Electronic Frontier Foundation for all of their helpful advice. The EFF is one of the only organizations that represents security researchers. Please consider donating so they can keep doing it!

Over the past couple of weeks I've mostly been blogging about inconsequential things. Blame summer for this -- it's hard to be serious when it's 104 degrees out. But also, the world just hasn't been supplying much in the way of interesting stuff to write about.

Don't get me wrong, this is a good thing! But in a (very limited) way it's also too bad. One of the best ways to learn about security systems is to take them apart and see how they fail. While individual systems can be patched, the knowledge we collect from the process is invaluable.

Fortunately for us, we're not completely helpless. If we want to learn something about system analysis, there are plenty of opportunities right out there in the wild. The best place to start is by finding a public protocol that's been published, but not implemented yet. Download the spec and start poking!

This will be our task today. The system we'll be looking at is completely public, and (to the best of my knowledge) has not yet been deployed anywhere (Update: see note above). It's good practice for protocol cryptanalysis because it includes all kinds of complicated crypto that hasn't been seriously reviewed by anyone yet.

(Or at least, my Google searches aren't turning anything up. I'm very willing to be corrected.)

Best of all, I've never looked at this system before. So whatever we find (or don't find), we'll be doing it together.

A note: this obviously isn't going to be a short post. And the TL;DR is that there is no TL;DR. This post isn't about finding bugs (although we certainly will), it's about learning how the process works. And that's something you do for its own sake.

HDCPv2

The protocol we'll be looking at today is the High Bandwidth Digital Content Protection (HDCP) protocol version 2. Before you get excited, let me sort out a bit of confusion. We are not going to talk about HDCP version 1, which is the famous protocol you probably have running in your TV right now.

HDCPv1 was analyzed way back in 2001 and found to be wanting. Things got much worse in 2010 when someone leaked the HDCPv1 master key -- effectively killing the whole system.

What we'll be looking at today is the replacement: HDCP v2. This protocol is everything that its predecessor was not. For one thing, it uses standard encryption: RSA, AES and HMAC-SHA256. It employs a certificate model with a revocation list. It also adds exciting features like 'localization', which allows an HDCP transmitter to determine how far away a receiver is, and stop people from piping HDCP content over the Internet. (In case they actually wanted to do that.)

HDCPv2 has barely hit shelves yet (Update: though it was recently selected as the transport security for MiraCast). The Digital Content Protection licensing authority has been keeping a pretty up-to-date set of draft protocol specifications on their site. The latest version at the time of this writing is 2.1, and it gives us a nice opportunity to see how industry 'does' protocols.

An overview of the protocol

As cryptographic protocols go, HDCPv2 has a pretty simple set of requirements. It's designed to protect high-value content running over a wire (or wireless channel) between a transmitter (e.g., a DVD player) and a receiver (a TV). The protocol accomplishes the following operations:
  1. Exchanging and verifying public key certificates.
  2. Establishing shared symmetric keys between the transmitter and receiver.
  3. Caching shared keys for use in later sessions.
  4. Verifying that a receiver is local, i.e., you're not trying to proxy the data to some remote party via the Internet.
These functions are accomplished via three (mostly) separate protocols: a public-key Authenticated Key Agreement (AKE) protocol; a pairing protocol, in which the derived key is cached for later use; and a locality check protocol that ensures the devices are physically close.

I'm going to take these protocols one at a time, since each one involves its own messages and assumptions.

Phase (1): Authenticated Key Agreement (AKE)

The core of HDCPv2 is a custom key exchange protocol, which looks quite a bit like TLS. (In fact, the resemblance is so strong that you wonder why the designers didn't just use TLS and save a lot of effort.) It looks like this:

HDCPv2 key agreement protocol (source).
Now, there's lots going on here. But if we only look at the crypto, the summary is this:

The transmitter starts by sending 'AKE_Init' along with a random 64-bit nonce R_tx. In response, the receiver sends back its certificate, which contains its RSA public key and device serial number, all signed by the HDCP licensing authority.

If the certificate checks out (and is not revoked), the transmitter generates a random 128-bit 'master secret' K_m and encrypts it under the receiver's public key. The result goes back to the receiver, which decrypts it. Now both sides share K_m and R_tx, and can combine them using a wacky custom key derivation function. The result is a shared session key K_d.

The last step is to verify that both sides got the same K_d. The receiver computes a value H', using HMAC-SHA256 on inputs K_d, R_tx and some other stuff. If the receiver's H' matches a similar value computed at the transmitter, the protocol succeeds.
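To make this concrete, here's a toy sketch of the derive-and-confirm step in Python. Be warned: the spec's actual KDF is a custom AES-based construction, and the HMAC that produces H' takes inputs beyond the ones shown here. I'm standing in HMAC-SHA256 for the KDF, so every detail below is illustrative rather than authoritative:

```python
import hashlib
import hmac
import os

def kdf(k_m: bytes, r_tx: bytes) -> bytes:
    # Stand-in for HDCPv2's custom key derivation function.
    # (The real spec defines an AES-based construction.)
    return hmac.new(k_m, b"session-key" + r_tx, hashlib.sha256).digest()

r_tx = os.urandom(8)    # transmitter's 64-bit nonce
k_m = os.urandom(16)    # 128-bit master secret, sent RSA-encrypted

# Both sides derive K_d from K_m and R_tx...
k_d = kdf(k_m, r_tx)

# ...and the receiver proves it got the same key by sending H'
h_prime = hmac.new(k_d, r_tx, hashlib.sha256).digest()  # receiver's H'
h_check = hmac.new(k_d, r_tx, hashlib.sha256).digest()  # transmitter's copy
assert h_prime == h_check  # handshake succeeds only if these match
```

Notice that every input to K_d in this sketch comes from the transmitter; the receiver's nonce R_rx plays no role in it.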

Simple, right?

Note that I've ignored one last message in the protocol, which turns out to be very important. Before we go there, let's pause and take stock.

If you're paying close attention, you've noticed a couple of worrying things:
  1. The transmitter doesn't authenticate itself at all. This means anyone can pretend to be a transmitter.
  2. None of the handshake messages (e.g., AKE_Transmitter_Info) appear to be authenticated. An attacker can modify them as they transit the wire.
  3. The session key K_d is based solely on the inputs supplied by the transmitter. The receiver does generate a nonce R_rx, but it isn't used in the above protocol.
None of these things by themselves are a problem, but they make me suspicious.

Phase (2): Pairing

Public-key operations are expensive. And you only really need to do them once. The designers recognized this, and added a feature called 'pairing' to cache the derived K_m for use in later sessions. This is quite a bit like what TLS does for session resumption.

However, there's one catch, and it's where things get complicated: some receivers don't have a secure non-volatile storage area for caching keys. This didn't faze the designers, who came up with a 'clever' workaround for the problem: the receiver can simply ask the transmitter to store K_m for it.

To do this, the receiver encrypts K_m under a fixed internal AES key K_h (which is derived by hashing the receiver's RSA private key). In the last message of the AKE protocol the receiver now sends this ciphertext back to the transmitter for storage. This appears in the protocol diagram as the ciphertext E(K_h, K_m).

The obvious intuition here is that K_m is securely encrypted. What could possibly go wrong? The answer is to ask how K_m is encrypted. And that's where things get worrying.

According to the spec, K_m is encrypted using AES in what amounts to CTR mode, where the 'counter' value is defined as some value m. On closer inspection, m turns out to be just the transmitter nonce R_tx padded with 0 bits. So that's simple. Here's what it looks like:
Encryption of the master key K_m with the receiver key K_h. The value m is equal to (R_tx || 0x0000000000000000).
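Here's roughly what the receiver's pairing encryption looks like as code. Real devices use AES-128 to produce the keystream block; since only the XOR structure of CTR mode matters for what follows, this sketch swaps in an HMAC-based stand-in to stay self-contained, and the padding layout is my simplification:

```python
import hashlib
import hmac
import os

def block(k_h: bytes, m: bytes) -> bytes:
    # Stand-in for AES-128(K_h, m): any deterministic function of
    # (key, counter) exhibits the same CTR-mode structure.
    return hmac.new(k_h, m, hashlib.sha256).digest()[:16]

def pair_encrypt(k_h: bytes, r_tx: bytes, k_m: bytes) -> bytes:
    m = r_tx + b"\x00" * 8  # counter m = R_tx padded with zero bits
    return bytes(a ^ b for a, b in zip(block(k_h, m), k_m))

k_h = os.urandom(16)   # receiver's internal key
r_tx = os.urandom(8)   # transmitter-chosen nonce
k_m = os.urandom(16)   # master secret

ct = pair_encrypt(k_h, r_tx, k_m)
# In CTR mode, encryption and decryption are the same operation:
assert pair_encrypt(k_h, r_tx, ct) == k_m
```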
Now, CTR is a perfectly lovely encryption mode provided that you obey one unbreakable rule: the counter value must never be re-used. Is that satisfied here? Recall that the counter m is actually chosen by another party -- the transmitter. This is worrying. If the transmitter wants, it could certainly ask the receiver to encrypt anything it wants under the same counter.

Of course, an honest transmitter won't do this. But what about a dishonest transmitter? Remember that the transmitter is not authenticated by HDCP. The upshot is that an attacker can pretend to be a transmitter, and submit her own K_m values to be encrypted under K_h and m.

Even this might be survivable, if it weren't for one last fact: in CTR mode, encryption and decryption are the same operation.

All of this leads to the following attack: 
  1. Observe a legitimate communication between a transmitter and receiver. Capture the values R_tx and E(K_h, K_m) as they go over the wire.
  2. Now: pretend to be a transmitter and initiate your own session with the receiver.
  3. Replay the captured R_tx as your initial transmitter nonce. When you reach the point where you pick the master secret, don't use a random value for K_m. Instead, set K_m equal to the ciphertext E(K_h, K_m) that you captured earlier. Recall that this ciphertext has the form:

    AES(K_h, R_Tx || 000...) ⊕ K_m  
     
    Now encrypt this value under the receiver's public key and send it along.
     
  4. Sooner or later the receiver will encrypt the 'master secret' you chose above under its internal key K_h. The resulting ciphertext can be expanded to:  
    AES(K_h, R_Tx || 000...) ⊕ AES(K_h, R_Tx || 000...) ⊕ K_m
Thanks to the beauty of XOR, the first two terms of this ciphertext simply cancel out. The result is the original K_m from the first session! Yikes!
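The whole attack fits in a few lines. Using a hash-based stand-in for the AES block (the attack depends only on the keystream being a deterministic function of K_h and the counter), we replay R_tx, submit the captured ciphertext as our 'master secret', and watch the keystreams cancel:

```python
import hashlib
import hmac
import os

def pair_encrypt(k_h: bytes, r_tx: bytes, k_m: bytes) -> bytes:
    # The receiver's pairing step: CTR-encrypt K_m under K_h with
    # counter m = R_tx || 0...0. (HMAC stands in for the AES block.)
    ks = hmac.new(k_h, r_tx + b"\x00" * 8, hashlib.sha256).digest()[:16]
    return bytes(a ^ b for a, b in zip(ks, k_m))

k_h = os.urandom(16)  # receiver's internal key; the attacker never sees it

# Session 1 (legitimate): record R_tx and E(K_h, K_m) off the wire
r_tx = os.urandom(8)
k_m = os.urandom(16)
captured = pair_encrypt(k_h, r_tx, k_m)

# Session 2 (attacker posing as transmitter): replay R_tx, submit the
# captured ciphertext as the new "master secret", collect the result
leaked = pair_encrypt(k_h, r_tx, captured)
assert leaked == k_m  # the two identical keystream terms cancel
```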

This is a huge problem for two reasons. First, K_m is used to derive the session keys used to encrypt HDCP content, which means that you may now be able to decrypt any past HDCP content traces. And even worse, thanks to the 'pairing' process, you may be able to use this captured K_m to initiate or respond to further sessions involving this transmitter.

Did I mention that protocols are hard?

Phase (3): The Locality Check

For all practical purposes, the attack above should be our stopping point. Once you have the stored K_m you can derive the session keys and basically do whatever you want. But just for fun, let's go on and see what else we can find.

At its heart, the locality check is a pretty simple thing. Let's assume the transmitter and receiver are both trusted, and have successfully established a session key K_d by running the AKE protocol above. The locality check is designed to ensure that the receiver is nearby -- specifically, that it can provide a cryptographic response to a challenge, and can do it in < 7 milliseconds. This is a short enough time that it should prevent people from piping HDCP over a WAN connection.

(Why anyone would want to do this is a mystery to me.)

In principle the locality check should be simple. In practice, it turns out to be pretty complicated. Here's the 'standard' protocol:
Simple version of the locality check. K_d is a shared key and R_rx is a receiver nonce.
Now this isn't too bad: in fact, it's about the simplest challenge-response protocol you can imagine. The transmitter generates a random nonce R_n and sends it to the receiver. The receiver has 7 milliseconds to kick back a response L', which is computed as HMAC-SHA256 of {the session key K_d, challenge nonce R_n, and a 'receiver nonce' R_rx}. You may recall that the receiver nonce was chosen during the AKE.
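In sketch form, the simple protocol is just the following. The exact HMAC message layout is defined in the spec; the concatenation order here is my guess:

```python
import hashlib
import hmac
import os
import time

def locality_response(k_d: bytes, r_n: bytes, r_rx: bytes) -> bytes:
    # L' over the session key, challenge nonce, and receiver nonce
    return hmac.new(k_d, r_n + r_rx, hashlib.sha256).digest()

k_d = os.urandom(32)   # session key from the AKE
r_rx = os.urandom(8)   # receiver nonce, chosen during the AKE
r_n = os.urandom(8)    # transmitter's fresh challenge

start = time.monotonic()
l_prime = locality_response(k_d, r_n, r_rx)     # computed at the receiver
elapsed = time.monotonic() - start

expected = locality_response(k_d, r_n, r_rx)    # computed at the transmitter
assert l_prime == expected
assert elapsed < 0.007  # the response must arrive within 7 milliseconds
```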

So far this looks pretty hard to beat.

But here's a wrinkle: some devices are slow. Consider that the 7 milliseconds must cover the round-trip communication time as well as the time required to compute the HMAC. There is a very real possibility that some slower, embedded devices might not be able to respond in time.

Will HDCP provide a second, optional protocol to deal with those devices? You bet it will.

The second protocol allows the receiver to pre-compute the HMAC response before the timer starts ticking. Here's what it looks like:

'Precomputed' version of the protocol.
This is nearly the same protocol, with a few small differences. Notably, the transmitter gives the receiver all the time it wants to compute the HMAC. The timer doesn't start until the receiver says it's ready.

Of course, there has to be something keeping the RTT under 7ms. In this case the idea is to keep the receiver from speaking until it's received some authenticator from the transmitter. This consists of the least significant 128 bits of the expected HMAC result (L'), which is computed in the same way as above. The receiver won't speak until it sees those bits. Then it'll kick back its own response, which consists of the most significant 128 bits of the same value.
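In sketch form (which half of the digest counts as 'most significant' is my byte-order assumption, as is the HMAC input layout):

```python
import hashlib
import hmac
import os

k_d = os.urandom(32)   # session key from the AKE
r_rx = os.urandom(8)   # receiver nonce
r_n = os.urandom(8)    # transmitter's challenge, sent well in advance

# The receiver precomputes the full 256-bit L' with no clock running
l_prime = hmac.new(k_d, r_n + r_rx, hashlib.sha256).digest()
msb128, lsb128 = l_prime[:16], l_prime[16:]

def receiver_answer(heard: bytes):
    # The receiver stays silent until the transmitter reveals the
    # least-significant half, then immediately returns the other half.
    return msb128 if heard == lsb128 else None

assert receiver_answer(lsb128) == msb128
assert receiver_answer(os.urandom(16)) is None
```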

Ok, so here we have a protocol that's much more complicated. But considered on its own, this one looks pretty ok to me.

But here's a funny question: what if we try running both protocols at once?

No, I'm not being ridiculous. You see, it turns out that the receiver and transmitter get to negotiate which protocol they support. By default they run the 'simple' protocol above. If both support the pre-computed version, they must indicate this in the AKE_Transmitter_Info and AKE_Receiver_Info messages sent during the handshake.

This leads to the following conjecture: what if, as a man-in-the-middle attacker, we could convince the transmitter to run the 'pre-computed' protocol, and at the same time convince the receiver to run the 'simple' one? Recall that none of the protocol flags (transmitted during the AKE) are authenticated. We might be able to trick each side into seeing a different view of the other's capabilities.

Here's the setup: we have a receiver running in China, and a transmitter located in New York. We're a man-in-the-middle sitting next to the transmitter. We want to convince the transmitter that the receiver is close -- close enough to be on a LAN, for example. Consider the following attack:
  1. Modify the message flags so that the transmitter thinks we're running the pre-computed protocol. Since it thinks we're running the pre-computed protocol, it will hand us R_n and then give us all the time in the world to do our pre-computation.
     
  2. Now convince the receiver to run the 'simple' protocol. Send R_n to it, and wait for the receiver to send back the HMAC result (L').
     
  3. Take a long bath, mow the lawn. Watch Season 1 of Game of Thrones.
     
  4. At your leisure, send the RTT_READY message to the transmitter, which has been politely waiting for the receiver to finish the pre-computation.
     
  5. The transmitter will now send us some bits. Immediately send it back the most significant bits of the value L', which we got in step (2).
     
  6. Send video to China.
Now this attack may not always work -- it hinges on whether we can convince the two parties to run different protocols. Still, this is a great teaching example in that it illustrates a key problem in cryptographic protocol design: parties may not share the same view of what's going on.
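The six steps can be simulated end-to-end. As before, the HMAC layout is a guess; the point is only that the receiver's 'simple' response hands the attacker everything the 'pre-computed' protocol will later ask for:

```python
import hashlib
import hmac
import os

k_d = os.urandom(32)   # shared by the honest transmitter and receiver
r_rx = os.urandom(8)   # receiver nonce from the AKE

def receiver_simple(r_n: bytes) -> bytes:
    # The far-away receiver, tricked into the simple protocol,
    # returns the entire 256-bit L' in one message.
    return hmac.new(k_d, r_n + r_rx, hashlib.sha256).digest()

# The transmitter, tricked into the pre-computed protocol, sends R_n early
r_n = os.urandom(8)

# The attacker relays R_n to the receiver and -- with no timer yet
# running -- collects the complete L' at leisure
l_prime = receiver_simple(r_n)
msb128, lsb128 = l_prime[:16], l_prime[16:]

# Attacker sends RTT_READY; the transmitter reveals the low half and
# starts its 7 ms clock; the attacker answers from its notes instantly
transmitter_reveals = lsb128
attacker_reply = msb128 if transmitter_reveals == lsb128 else None
assert attacker_reply == msb128  # the locality check "passes"
```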

A protocol designer's most important job is to ensure that such disagreements can never happen. The best way to do this is to ensure that there's only one view to be had -- in other words, dispense with all the options and write a single clear protocol. But if you must have options, make sure that the protocol only succeeds if both sides agree on what those options are. This is usually accomplished by authenticating the negotiation messages, but even this can be a hard, hard problem.

Compared to the importance of learning those lessons, actually breaking localization is pretty trivial. It's a stupid feature anyway.

In Conclusion

This has been a long post. To the readers I have left at this point: thanks for sticking it out. 

The only remaining thing I'd like to say is that this post is not intended to judge HDCPv2, or to make it look bad. It may or it may not be a good protocol, depending on whether I've understood the specification properly and depending on whether the above flaws make it into real devices. Which, hopefully they won't now.

What I've been trying to do is teach a basic lesson: protocols are hard. They can fail in ruinous, subtle, unexpected, exciting ways. The best cryptographers -- working with BAN logic analyzers and security proofs -- still make mistakes. If you don't have those tools, steer clear.

The best 'fix' for the problem is to recognize how dangerous protocols can be, and to avoid designing your own. If you absolutely must do so, please try to make yours as simple as possible. Too many people fail to grok this lesson, and the result is, well, HDCPv2.

===

Update 8/27: As I mentioned above, DCP has released a new version of the specification. Version 2.2 includes several updates: it changes the encryption of Km to incorporate both the Transmitter and Receiver nonces. It also modifies the locality check to patch the bug described above. Both of these changes appear to mitigate the bugs above, at least in new devices.

Saturday, August 18, 2012

Dear Apple: Please set iMessage free

Normally I avoid complaining about Apple because (a) there are plenty of other people carrying that flag, and (b) I honestly like Apple and own numerous lovely iProducts. I'm even using one to write this post.

Moreover, from a security point of view, there isn't that much to complain about. Sure, Apple has a few irritating habits -- shipping old, broken versions of libraries in its software, for example. But on the continuum of security crimes this stuff is at best a misdemeanor, maybe a half-step above 'improper baby naming'. Everyone's software sucks, news at 11.

There is, however, one thing that drives me absolutely nuts about Apple's security posture. You see, starting about a year ago Apple began operating one of the most widely deployed encrypted text message services in the history of mankind. So far so good. The problem is that they still won't properly explain how it works.

And nobody seems to care.

I am, of course, referring to iMessage, which was deployed last year in iOS Version 5. It allows -- nay, encourages -- users to avoid normal carrier SMS text messages and to route their texts through Apple instead.

Now, this is not a particularly new idea. But iMessage is special for two reasons. First it's built into the normal iPhone texting application and turned on by default. When my Mom texts another Apple user, iMessage will automatically route her message over the Internet. She doesn't have to approve this, and honestly, probably won't even know the difference.

Secondly, iMessage claims to bring 'secure end-to-end encryption' (and authentication) to text messaging. In principle this is huge! True end-to-end encryption should protect you from eavesdropping even by Apple, who carries your message. Authentication should protect you from spoofing attacks. This stands in contrast to normal SMS which is often not encrypted at all.

So why am I looking a gift horse in the mouth? iMessage will clearly save you a ton in texting charges and it will secure your messages for free. Some encryption is better than none, right?

Well maybe.

To me, the disconcerting thing about iMessage is how rapidly it's gone from no deployment to securing billions of text messages for millions of users. And this despite the fact that the full protocol has never been published by Apple or (to my knowledge) vetted by security experts. (Note: if I'm wrong about this, let me know and I'll eat my words.)

What's worse is that Apple has been hyping iMessage as a secure protocol; they even propose it as a solution to some serious SMS spoofing bugs. For example:
Apple takes security very seriously. When using iMessage instead of SMS, addresses are verified which protects against these kinds of spoofing attacks. One of the limitations of SMS is that it allows messages to be sent with spoofed addresses to any phone, so we urge customers to be extremely careful if they're directed to an unknown website or address over SMS. 
And this makes me nervous. While iMessage may very well be as secure as Apple makes it out to be, there are plenty of reasons to give the protocol a second look.

For one thing, it's surprisingly complicated.

iMessage is not just two phones talking to each other with TLS. If this partial reverse-engineering of the protocol (based on the MacOS Mountain Lion Messages client) is for real, then there are lots of moving parts. TLS. Client certificates. Certificate signing requests. New certificates delivered via XML. Oh my.

As a general rule, lots of moving parts means lots of places for things to go wrong. Things that could seriously reduce the security of the protocol. And as far as I know, nobody's given this much of a look. It's surprising.

Moreover, there are some very real questions about what powers Apple has when it comes to iMessage. In principle 'end-to-end' encryption should mean that only the end devices can read the connection. In practice this is almost certainly not the case with iMessage. A quick glance at the protocol linked above is enough to tell me that Apple operates as a Certificate Authority for iMessage devices. And as a Certificate Authority, it may be able to substantially undercut the security of the protocol. When would Apple do this? How would it do this? Are we allowed to know?

Finally, there have been several reports of iMessages going astray and even being delivered to the wrong (or stolen) devices. This stuff may all have a reasonable explanation, but it's yet another reason why it would be nice to understand iMessage better than we do now if we're going to go around relying on it.

So what's my point with all of this?

This is obviously not a technical post. I'm not here to present answers, which is disappointing. If I knew the protocol maybe I'd have some. Maybe I'd even be saying good things about it.

Rather, consider this post as a plea for help. iMessage is important. People use it. We ought to know how secure it is and what risks those people are taking by using it. The best solution would be for Apple to simply release a detailed specification for the protocol -- even if they need to hold back a few key details. But if that's not possible, maybe we in the community should be doing more to find out.

Remember, it's not just our security at stake. People we know are using these products. It would be awfully nice to know what that means.

Wednesday, August 15, 2012

On Gauss

If you pay attention to this sort of thing, you've probably heard about the new state-sponsored malware that's making the rounds in the Middle East. It's called 'Gauss', and like its big brother Flame, it was discovered by Kaspersky Labs (analysis at the link).

I don't have much to say about Gauss that hasn't been covered elsewhere. Still, for those who don't follow this stuff routinely, I thought I might describe a couple of the neat things we've learned about it.

Here's the nutshell summary: Gauss is your basic run-of-the-mill government-issued malware, highly modularized and linked to the same C&C infrastructure that Flame used. It seems mainly focused on capturing banking data, but (as I'll mention in a second) it may do other things. Unlike Flame, there's no evidence that Gauss uses colliding MD5 certificates to get itself onto a host system. Though, in fairness, we may not yet have the complete picture at this point.

So if there are no colliding certificates, what's interesting about Gauss? So far as I can tell, only two things. First, it installs a mystery font. Second -- and far more interesting -- it contains an encrypted payload.

Palida Narrow. Every Gauss-infected system gets set up with a new font called Palida Narrow, which appears to be a custom-generated variant of Lucida Bright with some unusual glyphs in it. From Kaspersky's report:
[Gauss] creates a new TrueType font file “%SystemRoot%\fonts\pldnrfn.ttf” (62 668 bytes long) from a template and using randomized data from the ShutdownInterval key.
Now this looks exciting! Unfortunately Kaspersky has not explained how the randomization works, or indeed if the data is truly random. This leaves us with nothing to do but speculate.

And plenty of folks have. Theories range from the practical (remote host detection) to the slightly wild (on-site vulnerability fuzzing). My favorite is the speculation that Palida is used to steganographically fingerprint the author of certain printed materials. While this theory is almost certainly wrong, it's not completely nuts, and even has some precedent in the research literature.

The 'Godel' payload. For all the excitement about fonts, the big news of Gauss is the presence of an encrypted module called 'Godel'.

Godel should be setting your hair on fire, if only because it attempts to replicate itself via a vulnerability in the code that Windows uses to handle USB sticks. This is the very same vector that Stuxnet used to infect the air-gapped centrifuge controllers at Natanz. It's a good indicator that Godel is targeted at a similarly air-gapped system.

Of course the question is: which system? Godel goes to great lengths to ensure that we don't know.

Presumably the designers made this decision based on some bitter experience with Stuxnet, which didn't protect its code at all. The result in Stuxnet's case was that researchers quickly decompiled the payload and identified the parameters that it looked for in a target system. Somebody -- presumably Stuxnet's handlers -- was unhappy about this: on July 15, 2010 a distributed denial of service attack crippled the industrial control mailing listservs where this was being discussed.

To avoid a repeat of this episode, Gauss's designers chose to encrypt the Godel payload under a key derived from a specific configuration on the targeted computer.

The details can be found in this Kaspersky post. To make a long story short, Godel derives an encryption key by repeatedly MD5 hashing a series of (salted) executable filenames and paths located on the target system. Only a valid entry will unlock the program sections, allowing Godel to do its job.

Example of the salted file path/executable name pair. From Kaspersky.
The key derivation process is performed using 10,000 iterations of MD5. The resulting key is fed to RC4. While the use of RC4 and MD5 may seem a little bit archaic (c'mon guys, get with the 21st century!), it likely reflects a decision to use the Microsoft CryptoAPI across a broad range of Windows versions rather than some sort of cryptographic retro-fetish.
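Based on Kaspersky's description, the unlocking logic looks something like the sketch below. The salt value, string encoding, and concatenation order are all guesses on my part; only the overall shape -- iterated, salted MD5 feeding an RC4 key -- follows the published analysis:

```python
import hashlib

def rc4(key: bytes, data: bytes) -> bytes:
    # Textbook RC4: key scheduling (KSA) then keystream generation (PRGA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

def derive_key(path: str, exe: str, salt: bytes) -> bytes:
    # Iterated, salted MD5 over a candidate file path + executable name.
    # (Hypothetical layout; the real salt and ordering are Gauss's own.)
    digest = hashlib.md5(salt + (path + exe).encode("utf-16-le")).digest()
    for _ in range(10000):
        digest = hashlib.md5(digest + salt).digest()
    return digest

# Trying one candidate path/name pair against an "encrypted" section
key = derive_key("%PROGRAMFILES%\\", "example.exe", b"\x97")
section = rc4(key, b"payload section")        # encrypt, for the demo
assert rc4(key, section) == b"payload section"  # RC4 is its own inverse
```

Only a candidate path/name pair that hashes to the right key will decrypt the payload sections to something meaningful, which is what makes offline guessing the natural attack.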

The real question is: how well does it work?

Probably very well, with caveats. Kaspersky says they're looking for a world-class cryptographer to help them crack the code. What they should really be looking for is someone with a world-class GPU.

As best I can see, the only limitation of Gauss's approach is that the designers should have used a more time-intensive function to derive their keys. 10,000 iterations of MD5 sounds like a lot, but really it isn't; not in a world with efficient GPUs that you can rent. This code won't be broken based on weaknesses in RC4 or MD5. It will be broken by exhaustively searching the file path/name space using a GPU (or even FPGA)-based system, or possibly just getting lucky.

No doubt Kaspersky is working on such a project right now. If we learn anything more about the mystery of Godel, it will almost certainly come from that work.

Tuesday, August 7, 2012

A missing post (updated)

Update (8/27): I've put the original post back up.

Update (8/9): I've re-written this post to include a vague, non-specific explanation of the bug. I've now confirmed the problem with one vendor, who has asked for a week to issue a patch. So far I haven't had a response from the DCP's technical people. And yes, I do realize someone PasteBinned the original post while it was up.

A few people have asked what happened to the post that was in this space just a few hours ago. No, you're not going crazy. It was here.

The post contained a long, detailed evaluation of the HDCP v2 protocol. My idea was to do real-time evaluation of an industry protocol that hasn't been deployed yet -- a kind of 'liveblogging' cryptanalysis. What I expected to find was some bad practices, which I would gently poke fun at. I didn't expect to find anything serious.

I was wrong in that initial judgement, with some caveats. I'm going to give a vague and non-specific summary here, and I hope to re-post the detailed technical post in a few days when I've heard (something, anything!) from DCP, the organization that maintains HDCP.

In case you've never heard of it, HDCP is a security protocol used to 'protect' video traveling over wired and wireless networks. There are two versions. Version 1 is in your TV today, and was seriously compromised in 2010. Version 2 is much better, but has only been deployed in a few products -- including those that implement MiraCast (formerly Wi-Fi Display).

Version 2 contains a key agreement protocol that's designed to establish a session encryption key between a transmitter (your phone, for example) and a receiver (a TV). Once this key is established, the transmitter can encrypt all video data going over the wire.

What I discovered in my brief analysis is a flaw in the key agreement protocol that may allow a man-in-the-middle to recover the session key (actually the 'master secret' used to derive the session key). This could potentially allow them to decrypt content. More on that in a minute, though.

I also discovered some slightly less serious flaws elsewhere in the protocol. It turns out that the DCP already knows about those, thanks to some enterprising work by a smart guy at an unnamed vendor (who deserves credit, and will get it once I put the original post back up).

Now for a few big caveats about the session key bug.

The bug I found does not get you all the way to decrypting HDCPv2 streams in practice, thanks to a tiny additional protection I missed while writing the original post. I don't think much of this protection, since it involves a secret that's stored in every single HDCPv2-compliant device. That's a pretty lousy way to keep a secret.

And of course I haven't personally verified this in any real HDCP devices, since I don't own any. Although if I did, I could use this nifty HDCP plugin for WireShark to do some of the work.

The issue has been confirmed by one vendor, who is working on a patch for their product. Their products are used in real things that you've heard of, so I'm trusting that they'd know.

The last thing I want to address is why I published this, and why I subsequently pulled it.

When I wrote the original post I thought HDCP v2 was just a 'paper spec' -- that there were no devices actually using it. Shortly after posting, I came across one commercial product that does claim to support HDCPv2. Later I discovered a few others. To be on the safe side, I decided to pull the post until I could notify the vendors. Then through sheer ineptitude I briefly re-posted it. Now I'm doing my best to put the toothpaste back in the tube.

As soon as I get some feedback I intend to put the post back up. A post which, incidentally, was not intended to break anything, but rather to serve as a lesson in just how complicated it is to design your own protocol. I suppose it's achieved that goal.

Anyway, I'm putting this up as a placeholder in case you're curious about what happened or why the heck I'm not blogging. Writing a long technical post and then having to can it is a drag. But hopefully we'll be back to our regularly-scheduled programming in no time at all.