The 1949 Geneva Convention You Probably Haven’t Heard Of

by Kenneth Anderson

It’s the 1949 Geneva Convention on Road Traffic (text at p. 3 of the pdf; here’s the UN treaty collection history, signatories, reservations, etc.; here is the Wikisource text of the treaty, which on a quick read is accurate), which seeks to promote road safety by establishing uniform rules across borders.  This includes provisions for an international driving permit as well as for cross-border recognition of foreign drivers’ licenses (Florida got itself into trouble earlier in 2013 when it issued new regulations requiring foreign drivers, including Canadians, to hold a valid international driving permit; it quickly reversed course). There are later treaties, particularly the 1968 Vienna Convention on Road Traffic, which replaces the 1949 Geneva Convention as between contracting states, but the later convention has only 70 ratifications, and the US is not among them, though it is party to the 1949 agreement.

The 1949 Geneva Convention on Road Traffic, as well as later agreements on automobiles, licensing, road rules, etc., will probably come under greater scrutiny in the next few years on account of the rise of autonomous, self-driving vehicles – the famous Google cars.  As Bryant Walker Smith of Stanford’s Center for Internet and Society noted in a report last November, “Automated Vehicles Are Probably Legal in the United States,” the 1949 convention provides, at Article 8, that every vehicle have a driver who is “at all times … able to control” it.  Smith argues in the report that this requirement is likely satisfied if a human is “on the loop” – i.e., able to intervene in the automated vehicle’s operation.  That will likely work as a solution for some period, but the real value of autonomous cars is supposed to come eventually not when they have a driver ready, alert, and able to take the wheel from the computer, but when they are transporting people who can’t or shouldn’t drive: the elderly and infirm, children, and … inebriated undergraduates.
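To make the “in the loop” versus “on the loop” distinction concrete, here is a minimal sketch – all names, thresholds, and control logic are hypothetical, not drawn from Smith’s report – in which the automation drives by default and the human monitor satisfies the “able to control” condition simply by retaining the power to intervene at any moment:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str

def automated_controller(obstacle_distance_m: float) -> Action:
    # Toy driving policy: brake below a threshold distance, otherwise cruise.
    return Action("brake") if obstacle_distance_m < 10.0 else Action("cruise")

def drive_step(obstacle_distance_m: float, human_intervenes: bool) -> Action:
    # "On the loop": the automation acts on its own unless and until the
    # human monitor chooses to take over -- a (hypothetical) reading of the
    # "at all times ... able to control" requirement.
    if human_intervenes:
        return Action("manual control")  # human overrides the automation
    return automated_controller(obstacle_distance_m)

print(drive_step(5.0, human_intervenes=False))  # automation brakes on its own
print(drive_step(5.0, human_intervenes=True))   # human has taken the wheel
```

The point of the sketch is only that the human need not approve each action (“in the loop”); the legal box may be checked so long as the override path exists.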

Law and Ethics for Autonomous Weapon Systems

by Kenneth Anderson

Last November, two documents appeared within a few days of each other, each addressing the emerging legal and policy issues of autonomous weapon systems – and taking strongly incompatible approaches.  One was from Human Rights Watch, whose report, Losing Humanity: The Case Against Killer Robots, made a sweeping, provocative call for an international treaty ban on the use, production, and development of what it defined as “fully autonomous weapons.”  Human Rights Watch has followed that up with a public campaign for signatures on a petition supporting a ban, as well as a number of publicity initiatives that (I think I can say pretty neutrally) seem as much drawn from sci-fi and pop culture as anything.  It plans to launch this global campaign at an event at the House of Commons in London later in April.

The other was the Department of Defense Directive, “Autonomy in Weapon Systems” (3000.09, November 21, 2012).  The Directive establishes DOD policy and “assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems … [and] establishes guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems.”

In contrast to the sweeping, preemptive treaty ban approach embraced by HRW, the DOD Directive calls for a review and regulatory process – in part an administrative expansion of the existing legal weapons review process within DOD, but reaching back to the very beginning of the research and development process.  In part it aims to ensure that whatever level of autonomy a weapon system might have, and in whatever component, the autonomous function is intentional and not inadvertent, and has been subjected to design, operational, and legal review to ensure both that it complies with the laws of war in the operational environment for which it is intended – and that it will actually work in that operational environment as advertised.  (The DOD Directive is not very long and, if you are looking for an introduction to DOD’s conceptual approach, makes the most sense read against the background of a briefing paper issued earlier, in July 2012, by DOD’s Defense Science Board, The Role of Autonomy in DOD Systems.)

In essence, HRW seeks to ban autonomous weapon systems, rooting a ban on autonomous lethal targeting per se in its interpretation of existing IHL while calling for new affirmative treaty law specifically to codify it. By contrast, DOD adopts a regulatory approach grounded in existing processes and the law of weapons and weapons reviews.  Michael Schmitt and Jeffrey Thurnher offer the basic legal position underlying DOD’s approach in a new article forthcoming in the Harvard National Security Journal, “‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict.” They argue that autonomous weapon systems are not per se illegal under the law of weapons, and that their legality, or the restrictions on their lawful use, in any particular operational environment depends upon the usual principles of targeting law.

I think Schmitt and Thurnher have it right as a legal matter, but there are important dissenting voices.  A contrary view is offered by University of Miami’s Markus Wagner in, for example, “Autonomy in the Battlespace: Independently Operating Weapon Systems and the Law of Armed Conflict” (chapter in International Humanitarian Law and the Changing Technology of War, 2012).   New School for Social Research professor Peter Asaro (who is not a lawyer, but a philosopher of technology, thus establishing himself as having the Coolest of Jobs, and also co-founder of an organization that has been calling for a ban for several years) has offered a reading of Protocol I and other IHL treaties aiming to show that human beings are built by positive, if tacit, assumption into these texts and their approach to weapons and targeting (forthcoming special section of the International Review of the Red Cross). Asaro is careful to hold out only that this interpretation is implicit, rather than explicit – a thoughtful and creative reading, though not finally one that persuades the hard-hearted lex lata lawyer in me.  A debate is underway in academic law and policy – and in the Real World.  It promises to heat up considerably.

Some months before these two documents appeared, however, Matthew Waxman and I published a short policy paper in the journal Policy Review, “Law and Ethics for Robot Soldiers.” It made note of arguments by those favoring a complete ban, but mostly focused on the United States (as well as other technologically advanced states; the US is far from the only country doing cutting-edge robotics, in weapons and many other things) and the possibility of developing weapon systems that might move from “automated” to “autonomous.”  That paper endorsed a regulatory approach to these weapon systems, embracing transparency of standards, best practices in weapons reviews, close interaction between lawyers and engineers from the beginning of weapon system design, etc.  The Policy Review essay was devoted to setting out the problem for a lay audience without much prior knowledge, however, and was oriented toward policy and process issues: how DOD would formulate policy, conduct legal reviews, and deal with other states and their weapon development policies.  It was not primarily directed to arguments for or against a sweeping ban, since HRW had not yet launched its Killer Robots campaign.

Since then, however, Matt and I have been busy.  And we’re pleased to announce that the Hoover Institution has just published our new policy essay, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can. It revises and substantially extends our arguments on autonomous and automated robotic weapons, and shifts the focus to address the ban arguments more directly.  Though longer than our first essay, it is still not long (at some 12,000 words) and is intended to be readable by a general audience, not an academic one.  It is available at SSRN, here (and the same pdf at the Hoover Institution website, here).

Law and Robotics Conference Seeking Paper Proposals, and HRW’s Tom Malinowski Releases Video That I Will Always Treasure

by Kenneth Anderson

Ordinarily I would leave event announcements to our regular postings, but I fell behind and wanted to flag the upcoming Friday deadline for paper proposals for the “Law and Robotics Conference.” It will take place on April 8-9, 2013, at Stanford Law School (the conference follows on the highly successful law and robotics conference that took place at the University of Miami last year).  The call for papers says that the conference is open to papers in all fields of law, and specifically mentions international and comparative law, so I thought it would be of interest to OJ readers.  Matthew Waxman and I plan to submit, for example, a proposal comparing self-driving cars and autonomous weapon systems (I’ve been exploring some of these ideas, brainstorming for the paper, over at Volokh). I am 100% certain the conference will be terrific, with outstanding papers and great discussions.  Here is the link if you’re interested.

Meanwhile, over at Lawfare, Human Rights Watch’s Tom Malinowski, Benjamin Wittes, Matthew Waxman, and I have been debating the recent HRW report calling for a ban on “Killer Robots.”  Tom’s latest response – mostly a serious discussion, well worth reading, even if I’m afraid it doesn’t finally persuade me – has a video at the end that I will always, always fondly treasure.  It’s great.   (It’s in Hindi, and though I didn’t know Tom knew Hindi, I’m going to trust his subtitles.)

Another Warbot Metaphor: Nanobot Swarms and Regulatory Challenges

by Chris Borgen

My previous post mentioned battlefield robot analogs of dogs, cheetahs, pack animals, even humans. Now behold the synchronized nanobot swarm.

Here’s what national security analyst John Robb had to say about the tactical benefits of a battlefield drone swarm:

• It cuts the enemy target off from supply and communications.
• It adversely impacts the morale of the target.
• It makes a coordinated defense extremely difficult (resource allocation is intensely difficult).
• It radically increases the potential of surprise.

Things start to get really interesting when the confluence of two technologies causes even more radical changes. Take, for example, how fabrication technology and micro-drone tech may one day allow new drones to essentially be printed out by fabbing machines.  We’re not there yet, but perhaps someday.

The underlying issue is that technology is changing so fast that it may be preventing legal regulation from adequately responding to the implications of technological change. I italicized “may” because I am not certain that this is the case.

Law (and perhaps especially the common law) is propelled by metaphors.  Its timely adaptation to a new technology partially relies on whether an apt metaphor can first orient the regulatory perspective, providing a basic frame for the problem, so that a combination of legislation and judicial interpretation can then fill in more precise details.

For example, there were the arguments in the 1990s (and still today…) over whether the internet is more like a broadcast medium, a mail service, or a phone service. In part, the regulation of activities on the internet has been based on applying various metaphors to different fact patterns, trying to apply old rules and, with some new legislation and interpretation, make them do new tricks. Perhaps this is all that is needed, and technology has not left law in the dust.

If that is the case, then while battlefield robots may present some new risks, do they actually overturn IHL as we know it? (Similarly, do some of the other topics mentioned in the links, such as the implications of DNA hacking, raze pre-existing rules?) Are these actually areas where whole new bodies of substantive rules are needed, or are these examples of areas where regulatory enforcement just got a lot harder?

At least regarding IHL, is technological change affecting primarily the substance of law or the enforceability of law, or both equally?  I look forward to any comments from others in the Opinio Juris community…

AlphaDogs, Cheetah-bots, and Mecha-Avatars

by Chris Borgen

Three quick updates from the “robots and warfare” side of things (largely culled from recent Danger Room posts that caught my eye and that I wanted to point out to Opinio Juris readers).

I have previously posted about BigDog, the four-legged beast of burden being developed for use by the U.S. military.  DARPA (the Defense Advanced Research Projects Agency) is now developing (along with Boston Dynamics) AlphaDog, a larger, more advanced version of the robot. See this new video of AlphaDog walking around, carrying stuff. Getting closer to actual use in the field…

While the BigDog and AlphaDog videos are interesting for the weird, surreal sensation of watching something that sounds like a lawnmower but walks like a young horse, as a matter of international humanitarian law they are probably not a big story in and of themselves, in the same way that supply trucks are not a “big story” in regards to IHL. However, they do point to advances that may lead to new weapons systems at some point. Quadruped robotic hunters, perhaps running in packs, perhaps autonomous, have been hypothesized. I had previously referred to such robots using William Gibson’s term “slamhounds.” According to Danger Room, DARPA is now working to develop quadruped hunter robots, but is going for a different animal metaphor, calling the project “Cheetah.” Boston Dynamics, the developer of BigDog/AlphaDog, is running the Cheetah project.  Adam Rawnsley of Danger Room writes:

As the name implies, Cheetah is designed to be a four-legged robot with a flexible spine and articulated head (and potentially a tail) that runs faster than the fastest human. In addition to raw speed, Cheetah’s makers promise that it will have the agility to make tight turns so that it can “zigzag to chase and evade” and be able to stop on a dime.

This does have IHL implications.  It’s not clear whether the cheetah-bot will be remotely controlled, in which case the legal issues will be akin to those of UAVs, or will detect, hunt, and possibly attack autonomously.  That latter possibility brings up the knottier question of how you code IHL parameters into software and what types of liability ensue when something goes wrong.
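To see why that question is knotty, here is a purely illustrative toy sketch – every function name and numeric threshold is invented, and nothing here reflects any actual weapon system – of what “coding IHL parameters into software” would mean at its crudest: distinction and proportionality reduced to explicit checks before engagement.

```python
def may_engage(is_military_objective: bool,
               expected_civilian_harm: float,
               anticipated_military_advantage: float) -> bool:
    # Distinction: only military objectives may be attacked at all.
    if not is_military_objective:
        return False
    # Proportionality: expected incidental civilian harm must not be
    # excessive relative to the anticipated military advantage. Reducing
    # that contextual legal judgment to two numbers is exactly the hard part.
    return expected_civilian_harm <= anticipated_military_advantage

print(may_engage(True, expected_civilian_harm=0.2,
                 anticipated_military_advantage=1.0))   # True
print(may_engage(False, 0.0, 1.0))                      # False: fails distinction
```

The checks are trivial to write; what is not trivial is producing the inputs – classifying a target, estimating harm, weighing advantage – and deciding who is liable when those estimates fail.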

But here’s the pièce de résistance: DARPA has

allotted $7 million for a project titled “Avatar.” The project’s ultimate goal, not surprisingly, sounds a lot like the plot of the same-named (but much more expensive) flick.

According [to] the agency, “the Avatar program will develop interfaces and algorithms to enable a soldier to effectively partner with a semi-autonomous bi-pedal machine and allow it to act as the soldier’s surrogate.”

These robots should be smart and agile enough to do the dirty work of war, Darpa notes. That includes the “room clearing, sentry control [and] combat casualty recovery.” And all at the bidding of their human partner.

[Emphasis added.]

You can place a Terminator or Avatar joke here (two James Cameron movies, huh), but I think this is a better film metaphor (and it’s by “District 9” director Neill Blomkamp).

So, while these mecha-avatars will not be autonomous, they will be remotely controlled, armed, bipedal drones. As a legal matter, one could say that this is similar to our current use of UAVs.  But I don’t think so. These things would be interacting with humans in close-up situations (“room clearing”), but the operators, perhaps half a world away, would be physically and possibly emotionally removed from the situation.  Cues might be missed. How do instincts translate via a video link?  The possibility for increased loss of life is very real. This is one development I will want to track.

The Space Bar and the Drone

by Kenneth Anderson

Though I am generally upbeat about the use of drones in military applications, one must recognize design flaws:

The Navy’s latest multi-million pound drone has the unfortunate feature of starting to self-destruct if the pilot accidentally presses the space bar on his keyboard …. The Navy are planning to buy hundreds of drones of the MQ-8B Fire Scout, one of which helicopter almost exploded after the drone’s operator accidentally pressed the space bar with a wire from his headset – which launches the self destruct mechanism on the vehicle.

China’s Drone Production

by Kenneth Anderson

China has been moving to catch up with the US and Israel in the production of military UAVs, reports the Wall Street Journal today in an article by Jeremy Page (Friday, Nov. 19, 2010, A11).   The article says that the uptick in drone output has surprised the West:

Western defense officials and experts were surprised to see more than 25 different Chinese models of the unmanned aircraft, known as UAVs, on display at this week’s Zhuhai air show in this southern Chinese city. It was a record number for a country that unveiled its first concept UAVs at the same air show only four years ago, and put a handful on display at the last one in 2008.  The apparent progress in UAVs is a stark sign of China’s ambition to upgrade its massive military as its global political and economic clout grows.

I don’t think Western experts should have been all that surprised, at least looking to the long term.  As has been noted repeatedly here at OJ, drones are not some fantastically advanced technology, beyond the reach of all but DARPA.  On the contrary, the avionics and flight control mechanisms have been around for a long time, with tweaks to the basic concept of remote-controlled flight provided by advances in communications and computers.  Sometimes journalists and others make dire predictions about the US or Israel having set off an “arms race” over drone deployments – the US and Israel, then China and Russia, then India and Pakistan … – but this misses the point.  Drones will spread because they will take over significant parts of civil aviation in coming decades, no matter what, and that will be so in any industrialized economy.  The technology is widely available and represents a vast cost savings – military aviation has many additional reasons why drones are useful, but this is part of a broader wave for all aviation.

It is not really all that different from DARPA subsidizing research into self-driving vehicles.  This has obvious applications to urban warfighting, which is why DARPA has funded it for years – but winning researchers from the DARPA competitions for self-driving vehicles have now moved over to work with Google, finally deploying self-driving vehicles on the streets of the Bay Area this year.  It’s not an arms race; it is the future of parts of vehicle automation for both civilian and military vehicles.

The real areas of technical competition in UAVs are not in avionics, nor in the weaponry – though improvements there will make them smaller and more discriminating as well – but in the sensors deployed on the drones.  Sensors are hard – even today, the drone sensors, as far as we know publicly, are still in the range of video.  There’s a lot of room for sophistication.  The Economist had a good article recently on both the difficulties and the gradual improvements in the abilities of robotic “eyes” to “see” things.  That’s the future technical competition in robotics, or at least an important part of it – much less so the avionics.  As for arms races, the true arms race in the military UAV world will not be a race to deploy – everyone who wants UAVs will have them, in various sizes.  The race that matters will be the technological counters to drones – the counter-technologies that will bring them down out of the sky.  That, we have yet to see deployed, but it will arrive very soon.

Search and Rescue and the Spread of UAVs

by Kenneth Anderson

Sorry for the light posting of late – the Anderson family is currently in the Sierra Nevada, on the eastern side out of Bishop, California, on God’s own highway, the Empty Quarter of Highway 395, which runs north-south from southern California all the way up the eastern Sierra and beyond.  It is both the most beautiful and most varied countryside you can imagine.  If the gods loved you, you would be here, as are we.  Unfortunately we are not spending enough time, however, so we are just doing various day hikes.

There is not a lot of international law in the eastern Sierra Nevada.  There is an important body of sovereign nation law, given that there are several Indian tribes and tribal lands up and down the Owens Valley, including the Paiute-Shoshone tribal lands in the center of Bishop.  But one feels somewhat removed from the Law of Nations.  However, I thought I would share one conversation with one of the rangers here in the national park.  She remarked that the ranger services – national parks, national forest, etc. – had been watching with great interest the growth of unmanned aerial vehicles (UAVs) in civilian use.  So far this includes things like crop dusting and surveillance.  Because the air bases that command some of the UAVs are located in Nevada, relatively nearby (in the Empty Quarter, that might mean 300 miles of desert driving, of course), there is a lot of awareness of UAVs and their potential – University of Nevada, Las Vegas has just begun a program to graduate UAV controllers, for example.

When people talk about surveillance UAVs, they are typically thinking about border patrol, but here the park services are thinking about fire patrols – an immensely expensive task from aircraft now, because of the vast areas to be surveyed in real time, but worth it because the faster a fire is spotted, the better the chance of containing it before it spreads.  Likewise, search and rescue for lost and injured back country hikers.  That one is somewhat ahead of existing technology, relative to what the park services would ideally like, because flying in the steep valleys and canyons is difficult and hazardous now, and UAV technology is not sufficiently up to speed to take over those tasks.  But it will happen soon, as smaller UAVs that are more like large birds can be deployed in difficult, deep, or narrow spaces.  Likewise, as the sensor technology gets better, cheaper, and more available, it will be easier to find a single lost hiker using not just things like infrared signatures, but sensor arrays that are … well, if they exist, they are still only available to the military.

Point being – and this will not be a surprise to OJ readers, who understand that this site is a central information station on UAVs and targeted killing – UAVs are going to spread rapidly and widely across a huge array of tasks and functions currently carried out by manned aircraft.  It will happen because UAVs will be so much cheaper, more efficient, and in many functional respects superior to using people in airplanes.  The impetus will rapidly turn from being military, as it still is now, to civilian.  Everybody, everywhere in the world, will shift in that direction.

I raise this because there is a meme that still circulates with some velocity among the international law community, journalists, and others: that the US is risking setting off some kind of UAV arms race by its increasing roboticization of conflict – not just UAVs, but ground vehicles, and so on.  I don’t think that’s right; the meme fundamentally misunderstands the technology and its application.  Rather, UAVs are going to spread across a very wide range of aviation in any case, of which military uses will be just one.  The same drivers – technology, cost, safety, efficiency, and so on – that push for fire surveillance in the Sierra Nevada will be exactly the ones that drive the military to use the technology.  One can call it an arms race, I suppose, but only if one imagines that it is all about military use; otherwise it is a misleading way of thinking about the technology.

A better way to think about this is to go back to what makes robots robots.  In general, there are three conceptual pieces: a locomotion function or means of gross movement or action in the world; computing and central processing power to analyze incoming data; and sensors to bring in the streams of data which, once analyzed, result in some form of gross mechanical action.  (In the case of US military UAVs, we can add an additional piece that intersects loosely with ‘cyber’ – the communications net that allows them to be piloted over Afghanistan from the US.)  Focusing on the UAV’s gross locomotion part, the flying part, and saying that it will lead to an arms race in which everyone will want one and arm it with a missile misses the point.  There is no arms race about that – the technology for flying remotely has been around for decades; anyone who wants to build one can do so at a hobby shop.  Putting a missile on it is child’s play, literally – presumably no one would be so politically incorrect as to propose building a Predator with a missile as the next robotics competition for high school teams, but political sensitivities aside, one reason is that it’s just too darn easy.  Flying is easy; making a machine that walks up stairs is hard.
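The three-piece decomposition can be made concrete in a few lines. Here is a minimal sketch – the names and values are entirely hypothetical, standing in for real hardware – of the sense-process-act loop that ties sensors, processing, and locomotion together:

```python
import random

def sense() -> float:
    # Sensors: bring in a stream of data (here, a fake range reading in meters).
    return random.uniform(0.0, 20.0)

def process(reading: float) -> str:
    # Central processing: analyze the incoming data and choose an action.
    return "turn" if reading < 5.0 else "forward"

def act(command: str) -> None:
    # Locomotion: gross mechanical action in the world.
    print(f"actuating: {command}")

# The basic control loop that ties the three pieces together.
for _ in range(3):
    act(process(sense()))
```

Notice where the hard part sits: the loop itself and the actuation are trivial; it is the quality of sense() – and of the analysis it feeds – that determines what the machine can actually do.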

Everyone will have UAVs because everyone will want them for so many, many things, mostly unrelated to military or police missions.  Any government that wants to arm one with a missile will have no difficulty doing so.  The real technology issues are not with flying, or with weaponization – or even with computing power.  That’s all off the hobby-kit shelf.  No, the real technology issues arise with sensors.  One robotics scientist in Silicon Valley told me last year that it was largely unrecognized, but the real advances in technology of the past decade had been not in computers as such, but in sensors and controllers, ranging from new ways of bringing data streams online to direct neurological, brain-level control of robotic limbs for amputees, and so on.

But now, note the issue.  Some of this technology is classified for military R&D; other parts are not.  The importance of robots outside the UAV context is immense, in large part because the Baby Boom generation does not have sufficient children to see us off to our reward; we are going to slide into dementia and be cared for and comforted by cuddly robotic dolls that we will think are human, to judge by where things are going in Japan.  In the US, we are not so aware of this yet, although it is striking that the Times and the WSJ have both moved on in their robotics coverage from targeted killing via UAVs to much friendlier news stories about Alzheimer’s patients in Japan being soothed by robot plush dolphins.  Dolphins that will be smart enough to monitor medical conditions and call 911 if needed, to take obvious examples, or to monitor whether a patient has taken the meds, or any number of things.  What lies behind this is sensor technology.

In an armed conflict context, however, it is questionable how many of the fighting forces in the world, state or non-state, will feel any great obligation to minimize collateral damage or to ever more affirmatively ID a target before striking.  If you don’t feel that obligation, you have little reason to invest in discriminating sensor technology.  I would estimate that the countries that do will be the US and Israel, and the rest of NATO only insofar as it ever intends to do any more fighting – though in any case those states will simply acquire US technology.  China will likely do so as well, because it would at least want every capability, and because it can most likely steal the technology and reverse-engineer any missing parts.  But either sensor technology will spread across civilian uses, such as elder-care robots, so as to make the concept of an ‘arms race’ moot, or else the number of countries “racing” to have such technologies will be almost entirely limited to countries that (a) fight and (b) care about the rules.  That makes the list frankly pretty short.  It is possible that India might join that list, along with Taiwan, South Korea, and a handful of others in Asia.  But there will not be an “arms race” around sensors, because they are useful primarily for more discriminating targeting, and the list of militaries in the world interested in that is not long.

Will there be an evolution of arms around UAVs, then?  Yes, but not likely along those lines.  The likely arms race runs along a quite different axis.  Predators are slow and noisy platforms for targeted killing; it will not take long before some party – Iran, say – begins doing what the US did via the CIA in Afghanistan against the Soviets, and supplies rudimentary surface-to-air missiles to attack the drones.  The arms race will get underway in the classic evolution of protecting air dominance.  The Predator, for example, might launch not a missile but a still smaller drone carrying a single-person weapon, specifically designed for up-close use.  That will be a function not of flying technology or weapons technology, however, but, once again, of sensors.  But an arms race over air superiority does not have the implications for the supposed dangerous spread of this new military technology – introducing dangerous new dynamics between India and Pakistan, for example – that a number of commentators seem (still) to imagine.

I am returning to the solitude and off-lineness of the mountains.

Ken’s Not-Yet-Response re Drone Warfare and Targeted Killing and Professor Alston’s Report

by Kenneth Anderson

I have been flattered to be called out on the topic of drones, targeted killing, the CIA, and related issues arising mostly from the release today of Professor Philip Alston’s UN special rapporteur report (press release here).  Deborah has a useful summary and some important quotes from the press release in her earlier post.  I’ve read the report once, and am reading it again, but am not ready to comment.  Well, not quite.  I’m under pressure to produce some commentary for newspaper and print journalism, while getting the grading completed before my faculty’s $100-a-day late fine kicks in … sorry to punt, but I’m not quite sure I want to weigh in with a quick blog post as yet on the topic (okay, this gets a little longer than planned, but it’s not really a response to the report).

I will say, though, that Philip’s careful discussion, set against the way in which the State Department frames the issues, is a demonstration once again of the ways in which public international law increasingly seems to consist of discourses passing in the night.  It’s one reason I hesitate to take the issue up here – I’m not persuaded that we all speak a sufficiently shared methodological language in these highly intertwined legal-political issues to be able to do much more than set out a view and the sources that we find persuasive.  The importance of actual historical state practice of leading states, or not, on the one hand, versus the importance of such things as pronouncements of the ICJ or other tribunals, or statements by UN bodies or rapporteurs, or the military manuals of states that don’t actually fight, or not, on the other … you see the problem.

So, yes, I endorse the “independent” self-defense view as an alternative legal basis for the use of force, which is to say, I reject the view that uses of force are a binary exhausted by law enforcement and armed conflict (another round of this discussion, including the CIA’s role, appears in the second hearing testimony that I’ve just posted at SSRN).  Given the existence of an armed conflict with Al Qaeda, among other parties at this point, whether any particular drone strike is an act within the armed conflict or an exercise of independent self-defense is open to interpretation, with the possibility of overlapping rationales in some cases.

I endorse the State Department’s view of this, as I understand it from Legal Adviser Koh’s ASIL speech, and think it nothing novel – merely the reassertion of US legal views – going well back before the Obama, Bush fils, and Clinton administrations, to Reagan and Bush père, and no doubt well before that even.  If a state cannot or will not control its territory to prevent it from being used as a safe haven for terrorists or terrorist groups, then even the important international legal rule of territorial sovereignty can be overcome by an affirmative defense of self-defense; that use of force might take the form of armed conflict, or it might be something that does not rise to that level of hostilities and thus constitutes a use of force in self-defense simpliciter.  That use of force is justified under jus ad bellum and is directed against the threat – the terrorists – and because it is a use of force, it must meet standards that are, as the Legal Adviser said, the principles underlying armed conflict rules: distinction and proportionality and, I would add, necessity in the first place in determining to target.  Necessity giving rise to self-defense; distinction in defining the target; proportionality in the evaluation of collateral damage.

Drones and the CIA and Charlie Savage’s NYT Article

by Kenneth Anderson

Although I was up at six, I think Julian must get up a lot earlier than I do, as he regularly beats me to the punch on what’s in the newspapers on drones.  I will post something more once Philip Alston’s report is out next Tuesday and I have had a chance to read the text.  But here are a couple of comments on Charlie Savage’s exceedingly interesting NYT piece.

There are two ways of seeing a call for drone strikes to be turned over to the US military, rather than the CIA.  One is fundamentally grounded in the binary that all uses of force must be either law enforcement or else armed conflict – and if so, there is no room for the CIA to be conducting these strikes.  In that case, the call to take the CIA out of it is a way of reasserting the basic binary.  This is problematic from the US standpoint, if it is a way of reasserting this fundamental binary, since the Legal Adviser’s ASIL speech specifically preserves an independent ground of self-defense that is not a matter of armed conflict.  If CIA participation is unlawful because the binary holds, then the US has simply rejected the underlying premise – indeed, said that it has never accepted it, going back clear to the 1980s and beyond.

The other way to see a call to take the CIA out of the activity is on the ground that because this is an armed conflict, uses of force must be undertaken by lawful participants, and the CIA, as a civilian agency, is not a lawful participant.  Insofar as this is offered as something that is not driven by the fundamental binary above, it is essentially a claim about the CIA not meeting the requirements lawfully to engage in hostilities – some version of the claim that the war with Al Qaeda is an armed conflict and the CIA are not privileged combatants.  This is a technically more complicated claim under the rules of war than much of the public discussion acknowledges.  Much of the public discussion seems to revolve around the idea that if you are a civilian, you are not allowed to take part in hostilities; the legal point, rather, is that there are numerous categories of civilians with varying roles in direct participation in hostilities, and the point is not that their participation is unlawful; it is that – if they were facing a lawful foe – they are themselves lawful targets.  Whether they wear uniforms, or non-standard uniforms (e.g., special forces in Afghanistan), is a question of whether, in the circumstances, they fail to distinguish themselves from non-combatants.  Insofar as they do this from a cubicle at Langley, that does not really present a problem.

As to the assertion that they have made themselves lawful targets – that would be true if they were engaged with a foe that could lawfully target anything.  In the case of a terrorist group – Al Qaeda, the IRA, ETA, etc. – the automatic assumption that military lawyers sometimes make, that jus ad bellum and jus in bello are independent, is beside the point; these groups have no reciprocal right to target anything, irrespective of whether, in a lawful conflict, something or someone would be a target.  It is not the case that by flying a drone from Langley, the CIA operator is now a lawful target – he or she would be if flying it in a conflict with, oh, North Korea, but not Al Qaeda.  Al Qaeda has no belligerency rights jus ad bellum, just as it has no combatant privilege jus in bello.  To suggest that the CIA at Langley has put itself into an “equivalent” position is not correct.  If the CIA at Langley were fighting a lawful actor, its participants would be lawful targets – although not, merely in virtue of not wearing uniforms inside Langley, “unlawful combatants.” But not as regards Al Qaeda.

Predators over Pakistan …

by Kenneth Anderson

My new Weekly Standard essay – although “polemic” is probably closer to it.  And thanks, Julian, for the plug below! Well, regular readers have been hearing about this piece for a while, and I have posted various arguments from it (concerning targeted killing and Predator drones and the CIA and armed conflict and self-defense, and my general concern that the Obama administration has embraced a policy that its lawyers have not so far stood up publicly to defend as lawful against its gradually emerging critics in the international “soft law” community) here at Opinio Juris and at Volokh Conspiracy.  I will post a couple of comments on the piece later, including a couple of things I wish I had clarified or said differently.  Meanwhile, if you are interested, it is the cover in this week’s Weekly Standard (March 8, 2010).  It is also very, very long, at some 8,000 words — for which I am deeply grateful to the WS’s editors, but you perhaps will not be — and so you might find it easier to read a pdf of the print edition at SSRN.

I have been meaning to add, though, that several positions are emerging in the new scholarship coming out on this topic.  I’m not the only person defending “self-defense” as the correct paradigm, for example.  Jordan Paust has an important new paper on this, and although we come to very different conclusions as to what self-defense does for you and how, we share a foundation in the international law of self-defense.  Mary Ellen O’Connell also has a well known position, ably set out in this book chapter, which I criticize in passing in the WS.  John Radsan and Richard Murphy stake out an interesting position that calls for some form of judicial review of targeted killing, in this new Cardozo paper.  And, of course, there is the Ur-Text on the subject (even when I disagree with it!), Nils Melzer’s treatise, Targeted Killing in International Law (Oxford 2008), which I see is now out in paperback at $50 (but no Kindle edition).  I will come back in a separate post both to comment on some things from the WS essay at a less political level and to give a better sense of where my position sits in relation to others in the international law community.  Finally, I’d like to thank and congratulate the Harvard National Security Journal for its upcoming symposium on robotics, drones, and related topics this week – it promises to be very interesting, and I believe the journal might post some account of it or perhaps some video of the program.

The Physics of Battles in Space

by Kenneth Anderson

I do realize that Copenhagen is still underway, so this is a little like whispering in church (I’ll put it mostly under the fold) … however, it’s a Friday afternoon, and this Gizmodo article on the physics of combat in space was highly distracting.  The most interesting bit to me was the observation that in a war between planets, functional trajectories of approach are not unlimited.  Launch windows and orbital relations between the planets matter hugely.  There are logical places for defense, in other words, even if they shift over time with the planets’ solar orbits, beyond the planetary defensive orbit itself.  This means room for strategy in space combat, and not merely tactics in skirmishing among ships.
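To see why launch windows dominate, consider that low-energy transfer opportunities between two planets recur on the synodic period, 1/S = 1/T_inner − 1/T_outer. A quick back-of-the-envelope calculation – standard orbital mechanics, not drawn from the Gizmodo piece – shows why an Earth-Mars attacker could sortie efficiently only about once every 26 months, and so why a defender knows roughly when to expect company:

```python
def synodic_period(t_inner_days: float, t_outer_days: float) -> float:
    # 1/S = 1/T_inner - 1/T_outer for two roughly circular, coplanar orbits.
    return 1.0 / (1.0 / t_inner_days - 1.0 / t_outer_days)

earth_period, mars_period = 365.25, 686.98  # sidereal orbital periods in days
print(f"{synodic_period(earth_period, mars_period):.0f} days between windows")
# ~780 days: favorable Earth-Mars transfer windows recur about every 26 months.
```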