Earlier today, Motherboard posted a court document filed in a prosecution against a Silk Road 2.0 user, indicating that the user had been de-anonymized on the Tor network thanks to research conducted by a “university-based research institute”.
As Motherboard pointed out, the timing of this research lines up with an active attack on the Tor network that was discovered and publicized in July 2014. Moreover, the details of that attack were eerily similar to the abstract of a (withdrawn) BlackHat presentation submitted by two researchers at the CERT division of Carnegie Mellon University (CMU).
A few hours later, the Tor Project made the allegations more explicit, posting a blog entry accusing CMU of accepting $1 million to conduct the attack. A spokesperson for CMU didn’t exactly deny the allegations, but demanded better evidence and stated that he wasn’t aware of any payment. No doubt we’ll learn more in the coming weeks as more documents become public.
You might wonder why this is important. After all, the crimes we’re talking about are pretty disturbing. One defendant is accused of possessing child pornography, and if the allegations are true, the other was a staff member on Silk Road 2.0. If CMU really did conduct Tor de-anonymization research for the benefit of the FBI, the people they identified were allegedly not doing the nicest things. It’s hard to feel particularly sympathetic.
Except for one small detail: there’s no reason to believe that the defendants were the only people affected.
If the details of the attack are as we understand them, a group of academic researchers deliberately took control of a significant portion of the Tor network. Without oversight from the University research board, they exploited a vulnerability in the Tor protocol to conduct a traffic confirmation attack, which allowed them to identify Tor client IP addresses and hidden services. They ran this attack for five months, and potentially de-anonymized thousands of users. Users who depend on Tor to protect them from serious harm.
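To make the risk concrete, here is a toy sketch (not Tor code, and not the CMU researchers' actual method) of the basic idea behind a traffic confirmation attack: an adversary who can observe both ends of a connection correlates the timing of traffic at a malicious entry guard with traffic seen at the other end, linking a client IP to a circuit. All names and timestamps below are made up for illustration.

```python
# Conceptual sketch of timing-based traffic confirmation.
# An attacker observing both ends of a Tor circuit can link a client
# to a destination purely by correlating when cells were seen.

# Hypothetical timestamps (seconds) of cells seen at a malicious
# entry guard (keyed by client IP) and at the far end of each circuit.
entry_flows = {
    "10.0.0.5": [0.00, 0.31, 0.62, 1.05],
    "10.0.0.9": [0.10, 0.55, 2.20, 3.90],
}
exit_flows = {
    "circuit-A": [0.05, 0.36, 0.67, 1.10],  # entry timing plus network delay
    "circuit-B": [0.18, 0.63, 2.28, 3.98],
}

def timing_score(a, b, window=0.2):
    """Fraction of cells in flow `a` matched by some cell in `b` within `window` seconds."""
    return sum(any(abs(t - u) <= window for u in b) for t in a) / len(a)

def link_flows(entries, exits):
    """Pair each client IP with the circuit whose far-end timing matches best."""
    return {ip: max(exits, key=lambda c: timing_score(e, exits[c]))
            for ip, e in entries.items()}

print(link_flows(entry_flows, exit_flows))
# {'10.0.0.5': 'circuit-A', '10.0.0.9': 'circuit-B'}
```

The point of the sketch is that the correlation step needs no cooperation from the client and no break in Tor's encryption; it works against whoever happens to route through the attacker's relays, which is why running such an attack against the live network sweeps in bystanders.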
It’s quite possible that the CMU researchers exercised strict protocols to ensure that they didn’t accidentally de-anonymize innocent bystanders. This would be standard procedure in any legitimate research involving human subjects, particularly research that has the potential to do harm. If the researchers did take such steps, it would be nice to know about them. CMU hasn’t even admitted to the scope of the research project, nor have they published any results, so we just don’t know.
While most of the computer science researchers I know are fundamentally ethical people, as a community we have a blind spot when it comes to the ethical issues in our field. There’s a view in our community that Institutional Review Boards are for medical researchers, and we’ve somehow been accidentally caught up in machinery that wasn’t meant for us. And I get this — IRBs are unpleasant to work with. Sometimes the machinery is wrong.
But there’s also a view that computer security research can’t really hurt people, so there’s no real reason for any sort of ethical oversight machinery in the first place. This is dead wrong, and if we want to be taken seriously as a mature field, we need to do something about it.
We may need different machinery, but we need something. That something begins with the understanding that active attacks that affect vulnerable users can be dangerous, and should never be conducted without rigorous oversight — if they must be conducted at all. It begins with the idea that universities should have uniform procedures for both faculty researchers and quasi-government organizations like CERT, if they live under the same roof. It begins with CERT and CMU explaining what went on with their research, rather than treating it like an embarrassment to be swept under the rug.
Most importantly, it begins with researchers looking beyond their own research practices. So far the response to the Tor news has been a big shrug. It’s wonderful that most of our community is responsible. But none of that matters if we look the other way when others in our community fail to act responsibly.
7 thoughts on “Why the Tor attack matters”
Hi Matthew, in regards to paragraph 4, I believe the drug suspect was a separate individual from the child pornography suspect. Thank you for writing this. It's very unsettling that CMU did this.
Thanks — I corrected it.
This research sounds about as legitimate as “scientific” whaling.
SEI is the Software Engineering Institute associated with CMU. CERT is the cyber security research team, a government funded organization, dealing with cyber security challenges. When I was with CERT we did work on things like the TJMaxx breach and reverse-engineering malware. Although the building is on the CMU campus I was unaware of a strong connection, certainly not one strong enough to claim that this was a “CMU effort”. That might have changed but I doubt it. (http://www.cert.org)
We worked on many areas, such as tools for Law Enforcement (e.g. finding stolen credit card data) and deep research subjects (e.g. Function Extraction to automate reverse engineering of malware). (http://daly.axiom-developer.org/TimothyDaly_files/publications/sei/HICSS44ComputingtheBehaviorofMalwareV2.pdf)
I no longer work there so I have no knowledge of how, what, or why CERT might have been associated with Tor. However, CERT was a deep pool of expert knowledge so it wouldn't surprise me if they were asked to help. Indeed, Greg Shannon from CERT is now the assistant director for cybersecurity strategy with the White House.
You write “It begins with CERT and CMU explaining what went on with their research, rather than treating it like an embarrassment to be swept under the rug.” Given that researchers like me had Top Secret clearances it isn't a matter of “sweeping under the rug”. Odds are good (although I do not know) that some information is classified so this isn't just an “embarrassment” issue.
You write “we have a blind spot when it comes to the ethical issues in our field”. I was involved in several deep discussions of ethical questions while at CERT so I have first-hand knowledge that it does arise. Ethics, by my definition, is “what you would do if everyone, everywhere knew what you were doing and why”. This is the “public disclosure criteria” (see the “Lying” and “Secrets” books by Sissela Bok). Just because you don't know what is done does not imply that the behavior is ethical.
If my recollection of how the attack worked is correct, CMU performing the active/tagging part of the attack would have allowed other entities to passively deanonymize users.
You bring up a couple of important points. Let me try to tackle them both.
First, CERT was willing to submit a presentation about their work to Blackhat. I find it difficult to believe that even the most clueless researcher would submit such a presentation if the underlying work was Top Secret. If they did, then CERT should probably be subject to a different sort of investigation. Given that the research was at one point considered acceptable for publication, I think it's reasonable that we could learn about the unclassified portions of this research to see what safeguards they put in place.
More generally, I reject the idea that universities should operate in-house subdivisions that adhere to a completely different ethical standard than the institution would expect of its 'real' researchers. This is a recipe for abuse. I say CMU and not 'SEI' or 'CERT' in my post, because CMU is very clearly benefiting from the prestige and funding of hosting a group like CERT — they put their name all over it, and presumably charge overhead. However, they apparently want only the benefits of that association. When it comes to enforcing responsible research practices on SEI/CERT, CMU suddenly has no control.
If you think this is unfair, consider the following hypothetical: imagine that a reputable University hospital hosted a government-funded subdivision that was performing unapproved human experiments. How long do you think that situation would be accepted? Would the division be able to refuse handing over evidence that it was complying with the human subjects regulations on grounds of 'national security'? I submit that such a division would be rapidly forced out of a hospital, and possibly out of existence.
You might say that medical research and CS research have different ethical standards. And that would be exactly the point I was trying to make in this post.
CMU is a private institution, but do any FOIA-like laws apply to it? (It takes funding from public sources.) How transparent are university IRBs in general?