Duncan B. Hollis Responds to Professors Eric Jensen and Jonathan Zittrain
[This post is part of the Second Harvard International Law Journal/Opinio Juris Symposium.]
First, I want to thank both Eric Jensen and Jonathan Zittrain for taking the time to respond to my article. Both have thought long and hard (not to mention well!) about regulating cyberspace. Eric’s early work assessing computer network attacks under the legal rules on the use of force was one of the foundational pieces on which I based my own scholarship. More recently, I’ve been inspired by Jonathan’s efforts to grapple both theoretically and technically with the challenges of cyberspace.
So, I was very pleased to see in both of their comments that the three of us share certain assumptions. Three in particular stood out: 1) cyberthreats are a real problem; 2) we need better responses to this problem; and 3) attribution makes the traditional governmental “proscriptive” response (namely, identifying and punishing wrongdoers) very, very difficult.
That said, particularly with Jonathan, I think we do have some differences in our starting positions. Jonathan suggests that my worries may be a bit more at the “hawkish end of the spectrum” than his own, which explains his preference for community-based mutual aid arrangements (such as his “mirror as you link” concept) in lieu of my international legal duty to assist (DTA). I take Jonathan’s point on both my hawkishness and my turn toward law over community norms. My paper readily acknowledges that there is still some dispute over the existence and extent of the cyberthreat, with charges of scaremongering facing off against those, like President Obama, who characterize cyberthreats as “among the most serious economic and national security risks we face as a nation.” I expect those within the scaremongering camp are likely to perceive my e-SOS idea as a solution in need of a problem (on the other hand, as far as solutions go, I would hope critics would at least acknowledge that mine is more libertarian and less heavy-handed than proposals to rewire the Internet to remove anonymity, to allow government monitoring of private networks, etc.).
Similarly, my paper does assume that law can and should be part of the response to the cyberproblem. I am less sanguine than Jonathan, who thinks that we can regulate cyberthreats solely through community norms. To be clear, I think Jonathan’s “mirror as you link” idea is a great one: both normatively desirable and practically possible (in many ways, I think his idea is a fellow traveller with my e-SOS proposition). Still, his proposal seems designed with only one type of cyberthreat–denial of service–in mind, and I’m not sure how it would deal with other types of attacks. I am also not sure how feasible community norms are in an environment with an increasingly diverse and adversarial set of actors (whether Russian hacktivists, military forces from China and the United States, Israeli teenagers, etc.). Rather than limiting solutions to informal networks of like-minded groups, I believe law offers a vehicle for obtaining pre-commitments from state actors who might otherwise not be inclined to cooperate (indeed, international law is nothing if not a vehicle for solving these sorts of cooperation problems). Finally, I looked to law because my sense is that states themselves have begun to do so. Although initially resistant to negotiating rules of the road, the United States has reportedly now come around to the idea of international negotiations on this topic (joining Russia, which has touted the idea for more than a decade), although the substance of those negotiations remains very much up for grabs. Hence, my paper is focused not so much on whether international law should regulate cyberthreats, but on how it should do so.
And my own response to this latter question ends up being: “if not a duty to assist, then what?” Notwithstanding Eric’s point about the need to flesh out the exact parameters of any international legal duty to assist, I’m still persuaded that it remains the best available legal option for dealing with the most severe cyberthreats. My paper looks at the three other ways law might regulate this threat — (1) regulating the bad actors, (2) regulating the technology, or (3) regulating the victims — and explains how the attribution problem takes the first option off the table while political and economic barriers have so far stymied pursuit of the second and third possibilities. We are thus in a situation where one cannot identify, let alone prosecute, the bad actors, and where one cannot perfect the technology to block them. As a result, the best the law can do is try to provide assistance to mitigate the harm these threats cause and, if it does so successfully, perhaps deter future such threats.
Thus, I take Eric’s point that the analogy between threats in cyberspace and those at sea is nowhere near perfect. But I do think the conditions that led to an SOS in the latter context make it a worthy idea for cyberspace. Like the high seas, we have an environment where no single nation can regulate the problem (the high seas are the quintessential commons), where bad actors cannot be proscribed (how do you prohibit hurricanes?), and where the technology can never be fully secured (as the Titanic so dramatically revealed, no boat is unsinkable). In any case, the idea of a duty to assist is not limited to the oceans. As my paper details, there are myriad other contexts in which DTAs exist, demonstrating the concept’s utility as a broader legal device. Of course, I don’t believe Eric is per se opposed to these sorts of analogies; he just (rightly, I think) seeks to explore how well they might work by flagging problems of proximity, frequency, and technology protection.
In terms of proximity, Eric joins Orin Kerr and Dave Hoffman in noting that the physical proximity that motivates the SOS system is absent in cyberspace. I think my response to them serves just as well here:
Orin (and Dave) separately take issue with my suggestion that the obligation to assist be defined by physical proximity. At sea, anyone who hears the SOS call has a duty to assist, not just those closest to the vessel in distress. But I take Orin’s point that those who can actually help will usually be those closest to the threat physically (although Coast Guard helicopters, etc., mean that this will not always be true). I also agree with Orin and Dave that regulating who can assist in cyberspace is a harder proposition, since the physical limitations on who can assist are absent. In cyberspace, an e-SOS could theoretically reach anyone, and if the DTA is not limited to specific duty-bearers, everyone would be obligated to respond. Thus, my paper proposes several ways to limit assistance to avoid the costs of imposing the duty too widely. I do suggest that physical proximity may work, by which I mean proximity to the victim’s systems and networks that have encountered losses in confidentiality, availability, integrity, or authenticity. I rely on Jack Goldsmith and Tim Wu’s ideas here that the Internet has allowed enough regulability by nation states that a nation state where victims have suffered (or are suffering) losses could assist them even if it had nothing to do with the threat itself. Thus, a victim could send out an e-SOS that requires the nation state where the losses lie to respond, and perhaps others in that jurisdiction as well (e.g., ISPs using networks in that state, major Internet companies who also have terminals or networks resident in that state, etc.).
I don’t think it’s fair, however, to read my paper as wedded to the idea of physical proximity; indeed, I make clear that “geographic or jurisdictional links between the victim and the duty-holder are not the only–nor necessarily the best–ways to identify duty-bearers online.” Instead, I propose using what I call technical proximity to the victim as a way to identify a duty-holder. For example, if a DDoS transits Comcast’s network, Comcast could be required on receipt of an e-SOS to assist in stopping that traffic. Or, where the victim traces an attack to a nation state, that state would be obligated to assist (even if it were only the last of several stepping stones from the attack’s true source). This would mean, for example, that Russia would have had to block traffic routed through its networks attacking Estonia in 2007, whether or not Russia was responsible for that traffic. I also suggest tiering the DTA, so that there could be a series of first responders, who could call for additional help if the threat proved so drastic as to require spreading the pool of duty-bearers.
Next, Eric suggests that the SOS only works because of how infrequently it is invoked, worrying that its use could not be properly limited in the e-SOS context, where there are so many cyberattacks (a worry Jonathan shares with his concern that states will view too many cyberattacks as severe based on the target rather than the effects). I’m not sure that the frequency dilemma is as great as Eric suggests. Threats at sea are actually quite common, even if most do not require a distress call. And even distress calls are far from rare. Consider the United States as an example; according to this paper, in 2003, the U.S. Coast Guard received 31,562 distress calls, saving an estimated 5,104 lives, with 655 lives lost and 481 unaccounted for.
And, to be clear, my proposal is not to deal with every cyberattack or exploit, which I agree number in the millions or maybe even billions, but only those that states would agree are “severe.” My paper explores the severity of an attack along three dimensions — timing, scale, and indirect effects — and contemplates different ways that states might delineate which attacks are severe. Unlike Jonathan, I’d be inclined to let states themselves define severity, and would have no problem if they did so based on the effects (loss of life, disruption of critical infrastructure) or the targets (hospitals). Similarly, I think there are various ways states can keep the burdens of assistance from falling on any specific sub-group of actors (like the National Security Agency), whether through the ideas of technical proximity or tiering mentioned above. Thus, I would argue that there are various ways states can work around proximity and frequency issues, with any such work-arounds ultimately turning on the states’ collective assessment of how severe the threats are and who should bear the costs of assisting.
Finally, Eric worries that technology transfer issues will disincentivize assistance from more sophisticated helpers, who fear that in helping they’ll be revealing too many of their technical “sources and methods.” I agree that technology transfer is an issue, although I think it may actually cut both ways; currently, most victims don’t ask for help because they’re worried about having to expose their own operations to the world, and particularly to anyone who assists. The essence of the e-SOS idea, however, is that it does not require victims to ask for help; they do so only if they feel the costs and benefits warrant making the call. Just as an ambassador can allow the embassy to burn to the ground, we should expect some victims will decide there’s too much additional risk in asking for help and continue to suffer in silence. On the other hand, Google’s call for help in the Aurora incident shows how even the most sophisticated Internet actor may at some point cry “uncle” and negotiate terms for aid (which it did in a deal with the National Security Agency). Similarly, duty-bearers might be allowed to say up front what kind of assistance they can provide and under what conditions they will do so. Indeed, there are other DTAs in existence (notably in the nuclear context) where states work out in advance what assistance would be available and how it would be provided in the event of a crisis. Something similar could be replicated for the most severe cyberthreats, including conditions to limit any exploitation of the helper’s systems and networks by the victim, or vice versa.
In closing, let me reiterate my thanks to Eric and Jonathan for taking the time to think about my idea. Frankly, I hope they’re not the only ones who do so. I sincerely believe that if law is going to be devised to regulate future cyberconflicts, a duty to assist, or an e-SOS, could (and should) be a significant first principle for mitigating and hopefully deterring the most severe cyberthreats.