07 Sep An e-SOS for Cyberspace
I’ve got a new draft article on cyberthreats (you can download it at SSRN here). I’d planned to wait before blogging about it, but events have overtaken my plans since Orin Kerr and Dave Hoffman are already discussing my ideas over at Concurring Opinions. So, let me offer some responses to their questions here, and in the process explain (a) why some new regulatory process is needed for severe cyberthreats; and (b) why my idea may represent the best (if not the only) currently feasible regulatory response.
But first, here’s my abstract:
Individuals, shadowy criminal organizations, and nation states all now have the capacity to harm modern societies through computer attacks. These new and severe cyberthreats put critical information, infrastructure, and lives at risk. And the threat is growing in scale and intensity with every passing day.
The conventional response to such cyberthreats is self-reliance. When self-reliance comes up short, states have turned to law for a solution. Cybercrime laws prohibit individuals from engaging in unwanted cyberactivities. Other international laws prescribe what states can (and cannot) do in terms of cyberwarfare. Both sets of rules work by attribution, targeting bad actors – whether criminals or states – to deter cyberthreats.
This Article challenges the sufficiency of existing cyber-law and security. Law cannot regulate the authors of cyberthreats because anonymity is built into the very structure of the Internet. As a result, existing rules on cybercrime and cyberwar do little to deter. They may even create new problems, when attackers and victims assume different rules apply to the same conduct.
Instead of regulating bad actors, this Article proposes states adopt a duty to assist victims of the most severe cyberthreats. A duty to assist works by giving victims assistance to avoid or mitigate serious harms. At sea, anyone who hears a victim’s SOS must offer whatever assistance they reasonably can. An e-SOS would work in a similar way. It would require assistance for cyberthreat victims without requiring them to know who, if anyone, was threatening them. An e-SOS system could help avoid harms from existing cyberthreats and deter others. Even when cyberthreats succeed, an e-SOS could make computer systems and networks more resilient to any harm they impose. At the same time, an e-SOS would complement, rather than compete with, self-reliant measures and the existing legal proscriptions against cyberthreats.
Orin has questioned whether a duty to assist (DTA) will work “as it seems to be based on assumptions about the physical world that don’t translate to the Internet.” Orin suggests, for example, that defining what “is a ‘severe’ computer crime” will be much harder than defining what is sufficient distress to trigger a DTA at sea. I agree with him in one respect, and disagree in another. For starters, I think Orin’s question–in its reference to computer “crime”–contains its own assumptions about regulations in the physical world that do not, in fact, translate to the Internet. Indeed, my paper’s first claim is that the ability of high-level cyberattackers to remain anonymous means traditional methods of proscribing unwanted behavior by bad actors will not work in cyberspace. Victims will often not know whether they are the victims of a severe computer crime, because if the perpetrator is a nation state, cybercrime rules do not apply. Instead, any relief for victims would come under the applicable rules governing the use of force or nonintervention. Of course, given anonymity, the victim may not be able to count on those rules applying either. Simply put, victims may never know if their problem is the result of computer error, criminal behavior, or a state cyberattack. Thus, I agree with Orin that defining what constitutes a severe cybercrime is difficult, if not impossible, in most cases.
Where I disagree with Orin, however, is the idea that nation states could not agree on what constitutes a severe cyberthreat in the same way that they have done so at sea and in other DTA contexts. I have three quibbles with Orin on this point. First, I think Orin may be oversimplifying the ease with which the SOS is triggered at sea. Yes, you can send one when a ship with 100 people on it is sinking, but the SOS is not so limited. It applies to all cases of a vessel “in distress.” That term is not defined with any specificity, because negotiators did not want to try to enumerate every physical circumstance in which a life or vessel could be threatened. As a result, I think the SOS has a flexibility to it that is entirely suitable for adoption in cyberspace. Some threats may only directly affect the confidentiality, integrity, availability or authenticity of computer data. But others will have additional indirect effects that involve physical harm, including legitimate fears of loss of life or systemic effects on infrastructure. For me, these latter examples are not too far afield from the sorts of harms that can flow from storms or pirates. Thus, I do think the analogy between the two environments is an appropriate one.
Second, although I agree with Orin that defining severity in cyberspace is still not easy, I do not think the task is impossible. Nation states could, for example, define severity in terms of targets, rather than effects. Thus, any computer problems facing a hospital’s system or a SCADA system running a hydroelectric dam or a nuclear power plant could trigger a DTA. This would avoid the hacker problem that Orin identifies; victims wouldn’t have to sort out the mischievous (but harmless) hack from more malicious (and thus dangerous) attacks or exploits.
Third, even if the e-SOS is not limited to certain targets, states could define severity in terms of effects. My paper proposes three variables–the timing of the threat, its scale, and its indirect effects–to do so. Thus, an immediate loss of availability across a large number of the SCADA systems running the U.S. power grid would be a severe cyberthreat under my test (the effects are immediate and widespread, and the resulting loss of power would likely lead indirectly to loss of life and other property damage). A similar analysis suggests an immediate loss of integrity in the computer networks governing currency markets would be severe. Even threats that do not last long, or that are not extensive in scale, can still be severe based on their indirect effects. Spanair Flight 5022, for example, crashed after malware infected the airline’s maintenance system, leading to death and destruction. Of course, there will be harder cases–Google’s losses under Operation Aurora are not as easily defined as severe since that exploit did not produce death or systemic losses, but my paper explores arguments for (and against) qualifying them as such.
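To see how that three-variable test might cash out in practice, here is a minimal sketch of a combined targets-and-effects triage rule. The class, field names, and scale threshold below are purely illustrative assumptions on my part, not anything the paper specifies.

```python
from dataclasses import dataclass

@dataclass
class CyberIncident:
    """Hypothetical description of an incident; all field names are illustrative."""
    immediate: bool            # timing: is the loss of confidentiality/integrity/
                               # availability/authenticity happening right now?
    systems_affected: int      # scale: rough count of affected systems or networks
    risks_physical_harm: bool  # indirect effects: plausible loss of life,
                               # infrastructure damage, or major property loss
    critical_target: bool      # target-based trigger (hospital, SCADA system, etc.)

def triggers_e_sos(incident: CyberIncident, scale_threshold: int = 100) -> bool:
    """Rough triage: would this incident count as 'severe' enough for an e-SOS?"""
    # Target-based trigger: any problem at a designated critical target qualifies.
    if incident.critical_target:
        return True
    # Effects-based trigger: immediate, widespread losses qualify...
    if incident.immediate and incident.systems_affected >= scale_threshold:
        return True
    # ...as do narrower incidents whose indirect effects threaten physical harm.
    return incident.risks_physical_harm

# Example: an immediate loss of availability across many grid SCADA systems.
grid_attack = CyberIncident(immediate=True, systems_affected=500,
                            risks_physical_harm=True, critical_target=True)
print(triggers_e_sos(grid_attack))  # True
```

The point of the sketch is only that the trigger turns on observable harm to the victim, not on attribution of the attacker.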
Orin (and Dave) separately take issue with my suggestion that the obligation to assist be defined by physical proximity. At sea, anyone who hears the SOS call has a duty to assist, not just those closest to the vessel in distress. But I take Orin’s point that those who can actually help will usually be those closest to the threat physically (although Coast Guard helicopters, etc., mean that this will not always be true). I also agree with Orin and Dave that regulating who can assist in cyberspace is a harder proposition, since the physical limitations on who can assist are absent. In cyberspace, an e-SOS could theoretically reach anyone, and if the DTA is not limited to specific duty-bearers, everyone would be obligated to respond. Thus, my paper proposes several ways to limit assistance to avoid the costs of imposing the duty too widely. I do suggest that physical proximity may work, by which I mean proximity to the victim’s systems and networks that have encountered losses in confidentiality, availability, integrity or authenticity. I rely here on Jack Goldsmith and Tim Wu’s idea that the Internet allows enough regulability by nation states that a state where victims have suffered (or are suffering) losses could assist them even if it had nothing to do with the threat itself. Thus, a victim could send out an e-SOS that requires the nation state where the losses lie to respond, and perhaps others in that jurisdiction as well (e.g., ISPs using networks in that state, major Internet companies that also have terminals or networks resident in that state, etc.).
I don’t think it’s fair, however, to read my paper as wedded to the idea of physical proximity; indeed, I make clear that “geographic or jurisdictional links between the victim and the duty-holder are not the only–nor necessarily the best–ways to identify duty-bearers online.” Instead, I propose using what I call technical proximity to the victim as a way to identify a duty-holder. For example, if a DDoS transits Comcast’s network, Comcast could be required on receipt of an e-SOS to assist in stopping that traffic. Or, where the victim traces an attack to a nation state, that state would be obligated to assist (even if it were only the last of several stepping stones from the attack’s true source). This would mean, for example, that Russia would have had to block traffic routed through its networks attacking Estonia in 2007, whether or not Russia was responsible for that traffic. I also suggest tiering the DTA, so that there could be a series of first responders who could call for additional help if the threat proved so drastic as to require expanding the pool of duty-bearers.
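As a rough illustration of how tiered duty-bearers might be identified from a victim’s e-SOS, here is a small sketch combining geographic and technical proximity. The tier structure, field names, and escalation rule are my own assumptions about how the idea could be operationalized, not a design taken from the paper.

```python
from typing import Dict, List

def duty_bearers(e_sos: Dict, escalate: bool = False) -> List[str]:
    """Return the parties asked to respond to an e-SOS, first responders first."""
    responders: List[str] = []
    # Geographic/jurisdictional proximity: the state where the victim's losses
    # are occurring, plus providers operating networks in that jurisdiction.
    responders.append(f"state:{e_sos['victim_state']}")
    responders += [f"isp:{isp}" for isp in e_sos.get("local_providers", [])]
    # Technical proximity: networks the hostile traffic transits, and any state
    # the attack is traced to, even if it is only the last stepping stone.
    responders += [f"transit:{net}" for net in e_sos.get("transit_networks", [])]
    if e_sos.get("traced_state"):
        responders.append(f"state:{e_sos['traced_state']}")
    if not escalate:
        return responders
    # Second tier: widen the pool only if the first responders cannot contain it.
    responders += [f"backbone:{op}" for op in e_sos.get("backbone_operators", [])]
    return responders

# Hypothetical e-SOS loosely patterned on the Estonia 2007 scenario above;
# the provider names are placeholders.
example = {
    "victim_state": "Estonia",
    "local_providers": ["ISP-A"],
    "transit_networks": ["TransitNet-1"],
    "traced_state": "Russia",
    "backbone_operators": ["Backbone-X"],
}
print(duty_bearers(example))
print(duty_bearers(example, escalate=True))
```

The escalation flag is just one way to capture the tiering idea: the first call goes to the proximate duty-bearers, and the pool widens only if they cannot contain the threat.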
All in all, I take Orin’s point that caution is warranted in trying to translate legal concepts from the physical world into cyberspace. But I do think that careful consideration reveals the merits of a duty to assist. Neither self-reliance nor existing regulatory approaches come anywhere near adequately addressing the most severe cyberthreats today (and I do think attacks that could disable SCADA systems, or DDoS attacks that cut off a nation’s access to information networks, are severe). If we need new law, I don’t see that law coming from better proscriptions, since anonymity saps them of any real deterrent force. Nor do I see new law coming through regulation of the technology itself, given concerns with privacy and civil liberties. Thus, the only way to regulate this problem lies on the victim side of the equation. Obviously, we could mandate minimum security requirements to harden victims when they are targets (and I have no quarrel with that, although others do not like its privacy and civil-liberties implications). My idea, however, is to try a different approach: let victims call for help when they encounter problems that nation states have agreed are unwanted. Moreover, I don’t think we need to have victims identify why they’re having a problem; it’s the injury they face, rather than whether it stems from computer error, criminal activity, or a state-sponsored attack, that would trigger a DTA. An e-SOS could thus work within the layered architecture that underlies the Internet today, in ways akin to–but obviously distinct from–the SOS at sea.
Thanks for the thoughtful response, Duncan.
Can you post a plain PDF at some link (not through SSRN)? I can’t get SSRN to work for me.
Here’s the link again; I had someone check it, and it should download OK now.
Thanks for the attempt, Duncan. I’m sorry to report that I still can’t access the PDF at that link. I’ve never been able to download full-text papers from SSRN, and I have no clue why. SSRN uses some complex Javascript (for some reason they don’t just link to a PDF), which doesn’t work for me, and I don’t have the time to diagnose the bugs in their code for them. (Asking SSRN to email it to my email address doesn’t seem to work, either.) It’s not just your paper; it is SSRN-wide. I seem to be unusual in having these problems with SSRN.
Any chance you can host the PDF file on your own site (not on SSRN)?
It may take a day or two, but I’ll try to post a .pdf and link to it here.