The Ethically Ideal Autonomous Battlefield Robot as Ethically Ideal Human Soldier? And What Is the Moral Worth of a Human Soldier’s Life?

1

In my last post about battlefield robots, I quickly breezed through the ethical and legal priors that the technology would pass through before reaching the fundamental issue of autonomous battlefield robots – autonomy in decision-making in the use of weapons on the battlefield. Leaving aside the question of exactly how that could be achieved as a matter of actual programming (although, in fact, the ‘how’ is a primary question for me – too often, law professors and philosophers wave their hands at the practicalities, whereas the actual issues of translating ethics into machine instructions are no less important than the ‘fundamental’ questions of concept), what would one want to build into an ethical autonomous battlefield robot?

One point raised in the comments that I should have addressed among the logical priors: haven’t we already crossed the autonomy line the moment we started using weapons that we launched but could not control on their way to the target? This is theoretically true of a bullet or an arrow, but it is more than merely theoretically true of a dumb bomb dropped from thousands of feet or, even more, of the “fire and forget” technology that took off in the Vietnam era. It is true, and it represents another intermediate step on the way to complete autonomy. But there is still a difference: lack of control over a weapon once launched removes the possibility of aiming it from that point forward, but that is different in principle from discretion over whether to launch the weapon in the first place. “Autonomy,” in the sense meant for autonomous battlefield robots, conveys discretion, subject to prior programming, to launch or not launch a weapon.

2

One way to answer this question is to say, well, an ethical autonomous battlefield robot would behave in the way that an ethically ideal human soldier would. That draws us conceptually back to the fundamental ethical and legal rules that make up the jus in bello: first, the principle of distinction and targeting only legitimate military targets; second, proportionality as to military advantage and collateral damage. The first is categorical, meaning that one may never directly target something or someone that is not a legitimate military objective. The second is a weighing up of two quite different – indeed incommensurable – values, military necessity and damage to civilians and civilian objects. (I am skipping over the technical language of IHL, and speaking in a general sense of jus in bello ethics.)
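
To make the translation problem slightly more concrete, here is a minimal, purely illustrative sketch of how those two functions might be structured in code. It is my own hypothetical, not drawn from any actual weapons-control system; the names, numeric scales, and threshold are all assumptions made for the sake of illustration:

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_military_objective: bool         # distinction: is this a legitimate military target?
    expected_military_advantage: float  # hypothetical commander's estimate
    expected_civilian_harm: float       # hypothetical estimate of collateral damage

def may_engage(target: Target, proportionality_threshold: float = 1.0) -> bool:
    """Illustrative two-step jus in bello check (hypothetical, not a real system)."""
    # Step 1, distinction: a categorical prohibition that no advantage can override.
    if not target.is_military_objective:
        return False
    # Step 2, proportionality: weigh anticipated advantage against expected civilian harm.
    if target.expected_civilian_harm == 0:
        return True
    ratio = target.expected_military_advantage / target.expected_civilian_harm
    return ratio >= proportionality_threshold
```

Even this toy version makes the difficulty visible: the categorical rule is easy to state, while the proportionality step quietly forces two incommensurable values onto a single numeric scale – which is exactly the translation problem glossed over in the prose.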

Put another way, we presumably would want these two ethical functions at a minimum in any robot soldier, just as we want them in any human soldier. These are features of the ideal human soldier, but we might add layers of additional ethical architecture to a robot’s programming as a safety margin (a sketch of how such layers might look in code follows these examples). So, for example:

• Well short of allowing a machine genuine autonomy, a robot might have an advisory role only, with the decision to use a weapon still human-controlled in real time; in this scenario, the robot would function more like a medical diagnostic computer. On the other hand, human dependence on an “advisory” computer integrating many different input streams to identify targets on the tactical, infantry battlefield – in house-to-house counterinsurgency, say – might turn out to be total, and the human “control” more theoretical than real.

• A robot might be limited to targeting other machines only, not human beings, as some theorists have suggested. Human soldiers, of course, are permitted to target combatants directly; a machine, as a margin of ethical safety, might be limited to targeting machines only. In reality, this would provide only a limited margin, since military machines in use are rarely unmanned, and of course targeting a machine does nothing about noncombatants in the vicinity. But it would simplify the affirmative targeting decision.
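
Continuing the illustrative sketch above – again, hypothetical names and flags, not any real system – these two safety margins could be layered as additional restrictions on top of the core distinction-and-proportionality check:

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_military_objective: bool
    is_machine: bool                     # needed for the machines-only restriction
    expected_military_advantage: float
    expected_civilian_harm: float

@dataclass
class SafetyConfig:
    advisory_only: bool = True           # the robot recommends; a human decides in real time
    machines_only: bool = True           # the robot may engage only other machines

def engagement_decision(target: Target, config: SafetyConfig,
                        human_confirmation: bool,
                        proportionality_threshold: float = 1.0) -> bool:
    """Illustrative layering of extra ethical 'safety margins' over the core check."""
    # Layer 1: a restriction narrower than the law requires -- machines only.
    if config.machines_only and not target.is_machine:
        return False
    # Core jus in bello check: distinction first, then proportionality.
    if not target.is_military_objective:
        return False
    if target.expected_civilian_harm > 0:
        ratio = target.expected_military_advantage / target.expected_civilian_harm
        if ratio < proportionality_threshold:
            return False
    # Layer 2: advisory-only mode -- the machine recommends, a human launches.
    if config.advisory_only:
        return human_confirmation
    return True
```

Note that the advisory_only flag captures the worry in the first example: if the human operator simply confirms whatever the machine recommends, the flag is formally satisfied even as the control it represents becomes nominal.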

Very well. I don’t mean to skip over the enormous issues in translation of these two ethical principles into something that a machine could carry out. But I want to raise something even more fundamental than those translation problems: Is the ideal autonomous battlefield robot one that makes decisions as the ideal ethical soldier would? Is that the right model in the first place?

This question arises from a simple point – a robot is a machine, and does not have the moral worth of a human being, whether a human soldier or a civilian, at least not unless and until we finally move into Asimov territory. Should a robot attach any value to itself, to its own self-preservation, at the cost of civilian collateral damage? How much, and does that differ from the value that a human soldier has?
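
Purely as a thought experiment in code, and continuing the hypothetical sketch above, the question can be put as a single parameter: what weight, if any, does the machine’s own survival get on the advantage side of the proportionality scale? Setting the weight to zero treats the robot as materiel only; anything above zero trades its preservation off against civilian harm.

```python
def proportionality_with_self_value(expected_military_advantage: float,
                                    expected_civilian_harm: float,
                                    risk_to_robot: float,
                                    self_preservation_weight: float) -> bool:
    """Thought experiment only: does the machine's own survival count for anything
    beyond its value as materiel? A weight of 0.0 says no; any positive weight
    places the robot's preservation on the same side of the scale as military
    advantage, traded off against harm to civilians."""
    weighted_advantage = (expected_military_advantage
                          + self_preservation_weight * risk_to_robot)
    return weighted_advantage >= expected_civilian_harm
```

The same parameter, of course, could be asked of a human soldier, which is where the rest of this post goes.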

In asking this question, the issue of autonomous battlefield robots – ethics for robot soldiers – winds back upon itself and becomes a thought experiment about how we value human soldiers. Human soldiers have value as military materiel; the existence, unity, and cohesion of soldiers in a unit, whether a tactical unit or a whole army, is part of the calculation of military necessity. So, for that matter, does military equipment – indeed, armies are frequently willing to lose many soldiers to protect vital military equipment. So in that sense, both men and machines have value – but as military materiel, in the calculation of military advantage.

3

What the robot question poses by implication, however, is what value, if any, either robots or human soldiers have when set against the lives of civilians. To the extent that soldiers and military machines are part of the consideration of military advantage, they have value, yes. But what about circumstances in which they do not contribute to military advantage, meaning that they confer no greater advantage on the margin – for example, under conditions in which one side in a conflict already holds overwhelming advantage? That is arguably the situation of the Kosovo conflict, and the question that arises is whether the lives of soldiers, under conditions in which their side has overwhelming advantage, can ever be weighed against the lives of civilians on some basis of value other than military advantage. Value, that is, simply as human beings, not as materiel.

On the one hand, as I recall (but don’t have the language in front of me), the Yugoslavia tribunal prosecutor’s office acknowledged in its Kosovo report that soldiers’ lives have some value independent of their being military materiel. (I’ll go back and get the exact language; I don’t recall the hedges and limitations on it, so apologies for not having it here.) On the other hand, I had a brief conversation with Michael Walzer after a panel discussion on the impending Iraq war a few years ago and raised this issue. Without wanting to assign him a position merely on the basis of a chat after a panel presentation, he said flatly that he did not see how the combatant/noncombatant distinction could survive if combatants could ever be weighed simply as lives against noncombatants; the whole point is to separate the one from the other. And the language of Protocol I – the formulation of military advantage versus collateral damage – while problematic in some important respects, does not admit of any consideration on the combatant side other than military advantage.

Of course we know what has traditionally taken place – military commanders wrap the value-as-human-beings of their soldiers into the language of military advantage, and, given the uncertainties of war, that makes sense. But conceptually, and sometimes in fact, as in Kosovo, that is not always the case. It is a question likely to arise with greater frequency as military forces are sent to intervene in humanitarian missions, in which political support for the mission at home depends on low or no casualties.

So: framed as a question of whether an ideal robot would put any value on itself as more than materiel, insofar as it seeks to model itself on the ethically ideal soldier – the question automatically becomes: would the ethically ideal human soldier put any value on himself or herself as more than materiel, at least when in competition with the lives of civilians? And if so, on what moral basis, and what happens, as a result, to the line between combatant and noncombatant?

4

Thus one important point about the ‘ethics of robot soldiers’ is that it can produce valuable discussion not only on its own terms, about machines, but also about the ethics of human soldiers in the real world of today.

It is worth noting, too, that the question of the value of soldiers as lives, rather than as materiel, is starting to enter the vocabulary in the form of ‘force protection’, offered under such rubrics as ‘soldiers have human rights too’ or ‘soldiers have a right to life too’. Anecdotally, I have started to see this in some statements by commanders in Iraq. It is unsurprising that this kind of moral language – addressing soldiers as human beings rather than as combatants or, more precisely, as both – enters soldiers’ own vocabulary under conditions in which so many of the risks to soldiers come from violations of the laws of war by the other side. And, again without having it in front of me, I seem to recall that the new US military counterinsurgency manual does indeed make reference to soldiers’ right to life.

(Parts of this discussion are drawn from a presentation I made in May 2008 to a meeting of the Hoover Task Force on National Security and Law; it is still so early a draft that I haven’t posted it up to SSRN, but perhaps I should. It differs, as you can see, from other discussions of robot ethics in that it deliberately circles around to use the discussion of robot ethics, in at least one important instance, as a way of discussing the ethics of human soldiers.)

Comment from Diodotus:

Fascinating questions about whether combatants have value as humans. In general I side with Walzer’s view on this one, though perhaps to be morally consistent one would then have to outlaw conscription. Assuming the role of combatant makes one a military objective; and the warrior’s honor requires one to place oneself in the way of innocents of either side.

But your post doesn’t seem to distinguish between normative and descriptive ethics. On the one hand, I think the position above is most consistent with the ethical basis for the laws of war. On the other hand, as a political scientist, I am certain this is not how the rules are actually applied and understood. (The targeting policy in Kosovo you mention is a perfect example.) In theory, one should privilege enemy civilians over one’s own combatants, but in fact military planners and whole societies privilege enemy civilians over enemy combatants, but not over their own soldiers.