Statement on DMCA lawsuit

My name is Matthew Green. I am a professor of computer science and a researcher at Johns Hopkins University in Baltimore. I focus on computer security and applied cryptography.

Today I filed a lawsuit against the U.S. government, to strike down Section 1201 of the Digital Millennium Copyright Act. This law violates my First Amendment right to gather information and speak about an urgent matter of public concern: computer security. I am asking a federal judge to strike down key parts of this law so they cannot be enforced against me or anyone else.

A large portion of my work involves building and analyzing the digital security systems that make our modern technological world possible. These include security systems like the ones that protect your phone calls, instant messages, and financial transactions – as well as more important security mechanisms that safeguard property and even human life.

I focus a significant portion of my time on understanding the security systems that have been deployed by industry. In 2005, my team found serious flaws in the automotive anti-theft systems used in millions of Ford, Toyota and Nissan vehicles. More recently, my co-authors and I uncovered flaws in the encryption that powers nearly one third of the world’s websites, including Facebook and the National Security Agency. Along with my students, I’ve identified flaws in Apple’s iMessage text messaging system that could have allowed an eavesdropper to intercept your communications. And these are just a sampling of the public research projects I’ve been involved with.

I don’t do this work because I want to be difficult. Like most security researchers, I undertake my research in good faith. When I find a flaw in a security system, my first step is to call the organization responsible. Then I help to get the flaw fixed. Such independent security research is an increasingly precious commodity. For every security researcher who investigates systems in order to fix them, there are several who do the opposite – and seek to profit from the insecurity of the computer systems our society depends on.

There’s a saying that no good deed goes unpunished. The person who said this should have been a security researcher. Instead of welcoming vulnerability reports, companies routinely threaten good-faith security researchers with civil action, or even criminal prosecution. Companies use the courts to silence researchers who have embarrassing things to say about their products, or who uncover too many of those products’ internal details. These attempts are all too often successful, in part because very few security researchers can afford a prolonged legal battle with a well-funded corporate legal team.

This might just be a sad story about security researchers, except for the fact that these vulnerabilities affect everyone. When security researchers are intimidated, it’s the public that pays the price. This is because real criminals don’t care about lawsuits and intimidation – and they certainly won’t bother to notify the manufacturer. If good-faith researchers aren’t allowed to find and close these holes, then someone else will find them, walk through them, and abuse them.

In the United States, one of the most significant laws that blocks security researchers is Section 1201 of the Digital Millennium Copyright Act (DMCA). This 1998 copyright law instituted a raft of restrictions aimed at preventing the “circumvention of copyright protection systems.” Section 1201 provides both criminal and civil penalties for people who bypass technological measures protecting a copyrighted work. While that description might bring to mind the copy protection systems that protect a DVD or an iTunes song, the law has also been applied to prevent users from reverse-engineering software to figure out how it works. Such reverse-engineering is a necessary part of effective security research.

Section 1201 poses a major challenge for me as a security researcher. Nearly every attempt to analyze a software-based system presents a danger of running afoul of the law. As a result, the first step in any research project that involves a commercial system is never science – it’s to call a lawyer; to ask my graduate students to sign a legal retainer; and to inform them that even with the best legal advice, they still face the possibility of being sued and losing everything they have. This fear chills critical security research.

Section 1201 also affects the way that my research is conducted. In a recent project – conducted in Fall 2015 – we were forced to avoid reverse-engineering a piece of software when it would have been the fastest and most accurate way to answer a research question. Instead, we decided to treat the system as a black box, recovering its operation only by observing inputs and outputs. This approach often leads to a less perfect understanding of the system, which can greatly diminish the quality of security research. It also substantially increases the time and effort required to finish a project, which reduces the quantity of security research.
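For readers who haven’t done this kind of work, here is a minimal sketch of what black-box probing looks like. It is purely illustrative and not the code from that project; the opaque_encode function below is a hypothetical stand-in for the system under study, which we can call but pretend we cannot look inside.

    import zlib

    def opaque_encode(message: bytes) -> bytes:
        # Hypothetical stand-in for the system we may not reverse-engineer.
        # In a real project this would be a remote service or a binary blob.
        return zlib.compress(message)

    def probe(lengths=range(0, 257, 16)):
        # Feed the black box structured inputs and record only the outputs.
        observations = []
        for n in lengths:
            output = opaque_encode(b"A" * n)
            observations.append((n, len(output)))
        return observations

    for input_len, output_len in probe():
        print(f"input {input_len:3d} bytes -> output {output_len:3d} bytes")
    # Patterns in the input/output relationship (block sizes, padding,
    # compression) can be inferred this way, but only indirectly and slowly
    # compared to reading the implementation itself.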

Finally, I have been luckier than most security researchers in that I have access to legal assistance from organizations such as the Electronic Frontier Foundation. Not every security researcher can benefit from this.

The risk imposed by Section 1201 and the heavy cost of steering clear of it discourage me – and other researchers – from pursuing any project that does not appear to have an overwhelming probability of success. This means many projects that would yield important research and protect the public simply do not happen.

In 2015, I filed a request with the Library of Congress for a special exemption that would have shielded good-faith security researchers from the limitations of Section 1201. Representatives of the major automobile manufacturers and the Business Software Alliance (a software industry trade group) vigorously opposed the request. This indicates to me that even reasonable, good-faith security testing is still a risky proposition.

This risk is particularly acute given that the exemption we eventually won was much more limited than what we asked for, and leaves out many of the technologies with the greatest impact on public health, privacy, and the security of financial transactions.

Section 1201 has prevented crucial security research for far too long. That’s why I’m seeking a court order that would strike Section 1201 from the books as a violation of the First Amendment.

A letter from US security researchers

This week a group of more than fifty prominent security and cryptography researchers signed a letter protesting the mass surveillance efforts of the NSA, and attempts by the NSA to weaken cryptography and privacy protections on the Internet. The full letter can be found here.

Most of you have already formed your own opinions on the issue over the past several months, and it’s unlikely that one letter is going to change that. Nonetheless, I’d like a chance to explain why this statement matters.

For academic professionals in the information security field, the relationship with the NSA has always been a bit complicated. However, for the most part the public side of that relationship has been generally positive. Up until 2013, if you’d asked most US security researchers for their opinions on the NSA, you would, of course, have heard a range of views. But you also might have heard notes of (perhaps grudging) respect. This is because many of the NSA’s public activities have been obviously in everyone’s interest – helping to fund research and secure our information systems.

Even where evidence indicated the possibility of unfair dealing, most researchers were content to dismiss these allegations as conspiracy theories. We believed the NSA would stay between the lines. Putting backdoors into US information standards was possible, of course. But would they do it? We thought nobody would be that foolish. We were wrong.

In my opinion this letter represents more than just an appeal to conscience. It measures the priceless trust and goodwill the NSA has lost — and continues to lose while our country fails to make serious reforms to this agency.

While I’m certain the NSA itself will survive this loss of faith in the short term, in the long term our economic and electronic security depends very much on the cooperation of academia, industry and private citizens. The NSA’s actions have destroyed this trust. And ironically, that makes us all less safe.

Hey Amazon: Banning Security Researchers Isn’t Making Us Safer

Readers of this blog may recall that I’m a big fan of the RSA-key ‘cracking’ research of Nadia Heninger, Zakir Durumeric, Eric Wustrow and Alex Halderman. To briefly sum it up: these researchers scanned the entire Internet, discovering nearly 30,000 weak RSA keys installed on real devices. Which they then factored.
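The core trick is worth spelling out. As the authors describe, many of the weak keys shared prime factors, thanks to poor randomness at key-generation time, and two RSA moduli that share a prime can both be factored with a single GCD computation. Here is a toy sketch of that idea using tiny made-up primes; the actual attack ran an efficient batch GCD over millions of moduli collected from Internet-wide scans.

    from math import gcd
    from itertools import combinations

    # Pretend these moduli were harvested from an Internet-wide scan.
    # Two of the "devices" picked the same prime p due to bad randomness.
    p, q1, q2, r1, r2 = 101, 103, 107, 109, 113
    moduli = [p * q1, p * q2, r1 * r2]

    for (i, n_i), (j, n_j) in combinations(enumerate(moduli), 2):
        g = gcd(n_i, n_j)
        if g > 1:
            # g is a shared prime factor, so both moduli are now factored
            # and the corresponding private keys can be reconstructed.
            print(f"moduli {i} and {j} share the factor {g}: "
                  f"{n_i} = {g} * {n_i // g}, {n_j} = {g} * {n_j // g}")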

In the fast-paced world of security, this is already yesterday’s news. The problems have been responsibly disclosed and repaired, and the manufacturers have promised not to make, well, this particular set of mistakes again. The research even received the Best Paper award at Usenix Security.** So you might ask why I’m writing about it now. And the answer is: I’m not.

What I’m writing about today is not the research itself, but rather: the blowback from the research. You see, Heninger et al. were able to conduct their work mostly thanks to resources rented from Amazon’s Elastic Compute Cloud (EC2). And in response, Amazon has booted them off the service.

This is a real drag, and not just for the researchers in question.

Cloud services like EC2 are a huge resource for ethical security researchers. They help us to learn things about the Internet on a scale that we could never accomplish with the limited resources in most university labs. Cloud services also give us access to software and hardware that would be nigh on impossible to justify to a grant committee – stuff like GPU cluster instances, which are invaluable to cryptographers who want to run specialized cracking tasks.

But more importantly: the rise of cloud computing has created a whole new class of security threat: things we never had to worry about before, like side-channel and covert-channel attacks between co-located VMs. Securing the cloud itself requires real-world analysis, and this means that researchers have to be trusted to do some careful, non-malicious work on actual platforms like EC2. Unfortunately, this is just the kind of research that the Heninger et al. ban could serve to discourage.

Now, I don’t pretend that I know all the details of this particular case. I haven’t spoken to the researchers about it, and although the paper makes their scan seem pretty benign, it’s always possible that it was more aggressive than it should have been.*

Moreover, I can’t challenge Amazon’s right to execute this ban. In fact, their Acceptable Use Policy explicitly prohibits security scans, under a section titled ‘No Security Violations’:

  • Unauthorized Access. Accessing or using any System without permission, including attempting to probe, scan, or test the vulnerability of a System or to breach any security or authentication measures used by a System.

The question here is not whether Amazon can do this. It’s whether their — or anyone else’s — interests are being served by actually going through with such a ban. The tangible result of this one particular research effort is that thousands of vulnerable systems became secure. The potential result of Amazon’s ban is that millions of systems may remain insecure.

Am I saying that Amazon should let researchers run amok on their network? Absolutely not. But there has to be a balance between unfettered access and an outright ban. I think we’ll all be better off if Amazon can clearly articulate where that balance is, and provide us with a way to find it.

Update (9/3): Kenn White points me to this nice analysis of the public EC2 image-set. The authors mention that they worked closely with Amazon Security. So maybe this is a starting point.

Notes:

* Admittedly, this part is a little bit ambiguous in their paper. Nmap host discovery can be anywhere between a gentle poke and an ‘active scrub’, depending on the options you’ve set.

** In case you haven’t seen it, you may also want to check out Nadia’s (NSFW?) Usenix/CRYPTO rump session talk.

The first rule of vulnerability acknowledgement is: there is no vulnerability acknowledgement

Just for fun, today we’re going to look at two recent vulnerability acknowledgements. The first one’s pretty mild; on the Torino scale of vulnerability denial, it rates only about a three:

The research team notified Amazon about the issues last summer, and the company responded by posting a notice to its customers and partners about the problem. “We have received no reports that these vulnerabilities have been actively exploited,” the company wrote at the time. 

But this one from RSA, wow. The charts weren’t made for it. I suggest you read the entire interview, perhaps with a stiff drink to fortify you. I warn you, it only gets worse.

If our customers adopted our best practices, which included hardening their back-end servers, it would now become next to impossible to take advantage of any of the SecurID information that was stolen.

… We gave our customers best practices and remediation steps. We told our customers what to do. And we did it quickly and publicly. If the attackers had wanted to use SecurID, they would want to have done it quietly, effectively and under the covers. The fact that we announced the attack immediately, and the fact that we gave our customers these remediation steps, significantly disadvantaged the attackers from effectively using SecurID information.

… We think because we blew their cover we haven’t seen more evidence [of successful attacks].

I have a paper deadline midweek, so blogging will be light ’til then. Once that’s done, I’ll have something more substantial to say about all this.