The Fog of Technology and International Law

by Duncan Hollis

[Note: This piece is cross-posted to the SIDIblog, the blog of the Italian Society of International Law, which was kind enough to ask for my views on these topics; for those interested in their other posts (in multiple languages), see here.]


  • War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty.

Carl von Clausewitz, Vom Kriege (1832), Bk. 1, Ch. 3.

  • It is a cruel and bitter truth that in the fog of war generally and our fight against terrorists specifically, mistakes — sometimes deadly mistakes — can occur.  But one of the things that sets America apart from many other nations, one of the things that makes us exceptional is our willingness to confront squarely our imperfections and to learn from our mistakes. 

U.S. President Barack Obama, April 23, 2015

I arrived in Rome for a month-long visit at LUISS Università Guido Carli to find a country wrestling with the tragic news of the death of one of its own – Giovanni Lo Porto.  As President Obama himself announced, the United States inadvertently killed Lo Porto and Warren Weinstein, a USAID contractor, in a January drone strike targeting an al Qaeda compound in the Afghanistan-Pakistan border region.  Both aid workers were al Qaeda hostages; Lo Porto had been kidnapped in 2012, while Weinstein was abducted in 2011.

The story made global headlines for Obama’s apology that the United States had not realized these hostages were hidden on-site, and thus their deaths were a tragic mistake:

As President and as Commander-in-Chief, I take full responsibility for all our counterterrorism operations, including the one that inadvertently took the lives of Warren and Giovanni.  I profoundly regret what happened.  On behalf of the United States government, I offer our deepest apologies to the families.

President Obama directed a “full review” of the strike, and there are calls for other investigations as well, including here in Italy.

Amidst this tragedy – and some of the apparent missteps by the U.S. (not to mention Pakistani) governments (painfully noted by Mr. Weinstein’s family) – there is something remarkable in the Obama statement.  Unlike so many other reports of U.S. errors or controversial programs in recent years (think Wikileaks or this guy), here was the U.S. Government, on its own, declassifying and disclosing the facts surrounding a drone strike that by all accounts included a major mistake in its execution.  For lawyers, moreover, such disclosures are critical – without them, we are left with what I’d call the “fog of technology,” which precludes the application of the rule of law in an open and transparent way.

Clausewitz’s concept of the “fog of war” is simple and well known: it describes the situational uncertainty that military actors face – their lack of perfect information about an adversary’s intentions and capabilities (not to mention incomplete knowledge of their allies’ intentions and capabilities).  What looks good on paper before an armed conflict may prove unworkable as the conditions of war – physical hardship, the need for immediate decision-making, emotional strains, etc. – complicate decision-making, and with it, the achievement of military objectives.

I use the term “fog of technology” to identify a similar situational uncertainty that lawyers face when confronting the deployment of new technology.  Simply put, new technology can cloud how lawyers understand the content of law.  Of course, lawyers can assess new technology and find it analogous to prior cases, allowing for what I call “law by analogy”, where the nature or function of a new technology is regulated according to how an analogous technology or function has been regulated in the past.  But the more novel the technology – the more it can function in non-analogous ways, or with effects previously unimagined – the more lawyers may (or at least should) struggle with interpreting and applying the law to it.

Now, the fog of technology can emerge in all sorts of legal systems and all sorts of contexts, from 3D printing to nanotechnology to driverless cars.  But President Obama’s explicit reference to Clausewitz makes me think about it in the particular context of warfare itself.  We are very much in a fog of technology when it comes to applying law to modern conflicts, whether it’s the remotely piloted drone that killed Lo Porto and Weinstein, Stuxnet, or rumors of truly autonomous weapon systems (or “killer robots”).  Which domestic and international legal frameworks regulate the deployment of these technologies?  Does international humanitarian law (IHL) govern these operations, and, if so, does it do so exclusively, or do other regimes like international human rights law apply as well?  To the extent a specific regime – IHL – applies, how do its rules on things like distinction or neutrality apply to technologies and operations that may have no prior analogues?  More specifically, how does the law treat specific cases – was the killing of Lo Porto and Weinstein tragic but legal, or was it an internationally wrongful act?

Of course, technology is not the only reason we have such questions.  Indeed, several scholars (most notably Michael Glennon) have identified the idea of a “fog of law.”  The rise of new types of non-state actors such as Al Qaeda continue to generate legal uncertainty; more than a decade after September 11, debates persist over whether and when U.S. counter-terrorism operations fall within a criminal law framework, or, as the U.S. insists, within the laws of armed conflict.   Similarly, when the United States targets and kills a U.S. citizen abroad (such as Ahmed Farouq, the American affiliated with Al Qaeda, who died in the same strike that killed Lo Porto and Weinstein), the question is not so much how the technology did this, but whether the U.S. Constitution regulates such killing.

Still, I think there are features of technology itself that make lawyering in this context significantly more difficult.  My co-blogger Ken Anderson summarized a few of the most important aspects in a recent post at the Hoover Institution.  He identifies several commonalities among cyberweapons, drones, and killer robots: (i) their ability to operate remotely; (ii) their capacity for extreme precision (at least when compared to earlier weapons); and (iii) the diminished ease of attribution.  Of these, I think the problem of attribution is foundational; law will have little to say if legal interpreters and decision-makers do not know how the technology has been deployed, let alone how it functions or even that it exists in the first place.  In such cases, the fog of technology is tangible.

Consider the story of drones and international law.  To date, we have experienced three different layers of technological fog.  The first layer was almost impenetrable.  As rumors first surfaced that States had targeted and killed individuals using unmanned aerial vehicles, States such as the United States refused to acknowledge the technology or its deployment.  It was all classified material.  In this phase, it was unclear whether international law had anything to say and, if it did, which legal frameworks might apply.  Indeed, it was unclear whether States using drones were even assessing their own actions under international law (although as a former government lawyer, I assume that they did so, albeit through their own interpretation of what international law requires).

As evidence of drone strikes mounted, from the deaths of wanted terrorists to stories of civilian casualties, we entered a second, slightly less foggy phase.  The United States and other nations began to offer a “general defense” of the use of this technology, minus operational details.  Then-U.S. Legal Adviser Harold Koh offered the best-known defense of drone strikes at the 2010 meeting of the American Society of International Law.  Other government interpretations of the domestic and international legal bases for U.S. behavior followed, including speeches by the White House counter-terrorism adviser, John Brennan, in 2011 and 2012, by Jeh Johnson, the General Counsel of the Defense Department, and by U.S. Attorney General Eric Holder.  By (partially) lifting the veil of secrecy surrounding these strikes, these statements made it possible to have a broader conversation about when and how law (both international law and U.S. constitutional law) applied to drone strikes.  Still, the U.S. government looked to avoid debating or litigating specific cases, and many facts remained unclear, such as how often drone strikes occurred and how many civilians they had killed.

I would argue that Obama’s April remarks mark a third, even less foggy, phase – one where a State is admitting responsibility for a specific attack with defined consequences.  Now, we have a concrete case on which to assess the “was this legal” question, a question that was hard to answer in the second phase, and impossible in the first one.  Obama himself explained the pre-conditions for the strike:

Our initial assessment indicates that this operation was fully consistent with the guidelines under which we conduct counterterrorism efforts in the region, which has been our focus for years because it is the home of al Qaeda’s leadership.

And based on the intelligence that we had obtained at the time, including hundreds of hours of surveillance, we believed that this was an al Qaeda compound; that no civilians were present; and that capturing these terrorists was not possible.  And we do believe that the operation did take out dangerous members of al Qaeda.  What we did not know, tragically, is that al Qaeda was hiding the presence of Warren and Giovanni in this same compound.

Such details allow us to inquire about legal responsibility in ways that were impossible when the U.S. Government refused to concede, let alone explain, its operations.  For example, what does it mean when President Obama says that he is “responsible” for Lo Porto and Weinstein’s deaths?  Does it mean that the United States will concede that its operation constituted an internationally wrongful act, with all that term implies under the Articles on State Responsibility?  Or did he accept only moral responsibility, denying any legal liability on the grounds that the United States did everything international humanitarian law requires in its targeting, including compliance with the principles of distinction, proportionality, and necessity?  Does it matter (legally speaking) that this was a “signature strike” – launched based on observations of military-aged males engaged in suspicious activities – rather than a strike where a named target was specifically identified in advance?  The United States may resist answering such questions, or push for specific positions, but it seems welcome news that we’re at least able to move such discourse more into the open.  Indeed, once we start to sort out how the law applies to these new technologies, we can ask the equally important question of how well it does so.

Thus, the drone story suggests one means of lifting the fog of technology — in phases via internal, policy-level decisions.  Obama explained that he directed the disclosure and declassification in this case because “even as certain aspects of our national security efforts have to remain secret in order to succeed, the United States is a democracy committed to openness in good times and in bad.”  If other States followed suit, we could have a more robust dialogue on the relevant legal rules.  We could even build a catalog of “State Practice” from which to draw further details about the content and scope of international law in this context.

Of course, self-identification is not the only way to pierce the fog of technology.  Indeed, we should expect it to be rare, given national security concerns combined with the capacities of these technologies to hide their true origins.  Other means, however, may generate the requisite information for law’s application, most notably technology itself.  Just as some technology facilitates discreet, anonymous behavior, other technologies push information into the open.  Whatever you think of Wikileaks or Edward Snowden, their feats were a function of technology – the ability to store, leak, and maintain vast troves of data in cyberspace.  That information in turn allows legal analyses, discussions, and lawsuits that would not otherwise have occurred.  The same is true for State actors.  According to Stewart Baker’s “law,” “Our security sucks, but so does theirs.”  As a result, States are increasingly able to use technology to learn what other States are doing with their technology – information they appear more and more ready to publicize and act on, whether in the 2014 U.S. cybercrime charges against five PLA officers or the more recent claim that North Korea bears responsibility for the Sony Pictures hack.  This sort of information allows further inquiries into liability and relief (for example, we can only ask whether the United States owes a remedy for the costs of cleaning Stuxnet off of civilian SCADA systems if we can attribute Stuxnet to the United States).

Finally, time itself is a factor here.  Over time, interpretations are proffered, decisions taken, and judgments rendered that make “new” technologies less new and confusion over their legal regulation less pervasive.  Over time, the law figures out how to deal with new technology, whether to regulate it like airplanes or ban it like bioweapons.  I am inclined to agree with Martti Koskenniemi that we can never fully escape the fog, whether in war, law, or technology itself.  But time does seem to make it easier to see how the law operates.

So, is the answer here simply to sit back and wait?  To let States tell us what they are doing and how it’s legal whenever they are ready to do so?  The deaths of Warren Weinstein and Giovanni Lo Porto strongly suggest a negative answer.  We cannot expect all governments to follow Obama’s lead (or even that Obama will do so in future cases).  But if we are to live in a world governed by law, rather than raw power, a certain amount of clarity is necessary about what States are doing with specific technologies and why they are doing it.  Governments can, and should, be pressed into more open discourse, balancing valid national security concerns with the need for law to regulate State behavior.  Indeed, we need to understand how the law applies to these cases not just to regulate their operation, but to test the law’s effectiveness and sufficiency.  New technologies like autonomous weapons may require more than analogies to existing law; they may require entirely new frameworks.  But we cannot devise such frameworks until we know where things stand at present.  And so, for all the tragedy inherent in the U.S. admission that it killed Giovanni Lo Porto and Warren Weinstein, the case offers a potential bridge to a more transparent environment – one where laws operate not only as (often secret) justifications for important national security efforts, but also as open and transparent mechanisms for protecting humanitarian principles in accordance with the basic concept of the rule of law itself.
