So How Do We Assess Proportionality? (A Response to Blank, Corn, and Jensen) (UPDATED)

by Kevin Jon Heller

Just Security published a post by Laurie Blank, Geoffrey Corn, and Eric Jensen yesterday criticizing two surveys that explore how laypeople think about IHL’s principle of proportionality. Much of what the authors say is absolutely correct, particularly about the need to recognize that assessing ex post the ex ante decision-making process of military commanders is fraught with difficulty and likely to both overemphasize actual civilian casualties and underemphasize anticipated military advantage. But the post is still problematic, particularly the following claims:

Second, the surveys exacerbate what is perhaps the most dangerous misperception and distortion of this vital regulatory principle: that you, or I, or anyone can accurately and meaningfully assess the proportionality of an attack after the fact and without full knowledge of the circumstances at the time of the attack. Proportionality necessitates a prospective analysis that cannot be assessed in hindsight by looking solely at the effects of an attack (or the hypothetical effects of a hypothetical attack). The language of the proportionality rule refers to “expected” civilian casualties and “anticipated” military advantage — the very choice of words shows that the analysis must be taken in a prospective manner from the viewpoint of the commander at the time of the attack. Credible compliance assessment therefore requires considering the situation through the lens of the decision-making commander, and then asking whether the attack judgment was reasonable under the circumstances.

[snip]

Ultimately, these surveys are based on a flawed assumption: that “public perception” is the ultimate touchstone for compliance with the proportionality rule; a touchstone that should be substituted for the expert, hard-earned judgment of military commanders who bear the moral, strategic, tactical and legal consequences of each and every decision they make in combat. On that basis alone, it is the surveys that are disproportionate.

I can’t speak to one of the surveys, because the authors don’t provide any information about it. But I am aware of (and have completed) the survey they do link to, which is conducted by Janina Dill, an excellent young Oxford lecturer who is the Associate Director of the Oxford Institute for Ethics, Law and Armed Conflict. The authors caricature Dill’s survey when they claim that it is based on the “flawed assumption” that “public perception” is “the ultimate touchstone for compliance with the proportionality rule.” Dill does not suggest that the legality of a particular attack should be determined by public perception of whether it was proportionate; she is simply interested in how non-military people think about proportionality. Like the authors, I don’t believe Dill’s questions capture the complexity of the military commander’s task. But neither does Dill. That is not the point of the survey.

Dill, however, is more than capable of defending herself. I am more interested in the first paragraph quoted above, because the authors come perilously close therein to claiming that it is per se illegitimate for anyone — or at least individuals who are not soldiers themselves — to second-guess the targeting decisions of military commanders. I suppose they leave themselves a tiny escape from that position by implying (obliquely) that “you, or I, or anyone” could assess ex post a military commander’s ex ante proportionality calculation as long as we had “full knowledge of the circumstances at the time of the attack.” But the authors make no attempt whatsoever to explain how the decision-makers involved in any ex post “compliance assessment” could ever take into account everything the military commander knew about the circumstances of the attack — from “the enemy’s center of gravity and the relationship of the nominated target to that consideration” to “the exigencies of the tactical situation” to “the weaponeering process, including the choice of weapons to deploy and their known or anticipated blast radius or other consequences.” Some information about the objective circumstances of the attack may be available in written reports and through the testimony of the military commander’s superiors and subordinates. But those objective circumstances are only part of the story, because IHL proportionality requires (as the authors rightly note) assessing the reasonableness of the attack “through the lens” of the commander herself — what she actually knew about the objective circumstances of the attack. And that information will be located solely in the mind of the military commander. Perhaps some commanders are so honest and so mentally disciplined that they will provide a court-martial or international tribunal with an accurate assessment of what went through their mind before the attack. 
But most commanders faced with discipline or prosecution for a possibly disproportionate attack will either lie about their proportionality calculation or unconsciously rewrite that calculation after the fact to justify killing innocent civilians.

In most cases, therefore, the decision-makers involved in a compliance assessment will have no choice but to rely on circumstantial evidence — including, yes, an attack’s actual consequences — to infer what went through the mind of a military commander prior to launching an attack. Such inferences will always be, for all the reasons the authors note, complex, fraught with difficulty, and prone to error. But unless we are going to simply defer to “the expert, hard-earned judgment of military commanders who bear the moral, strategic, tactical and legal consequences of each and every decision they make in combat,” we have no choice but to ask people to draw them. I doubt that any of the authors think that uncritical deference is appropriate; more likely, they think that although compliance assessment is necessary, no civilian should ever be permitted to sit in judgment of a soldier. If so — or if they think that civilian assessment is possible in the right system — the authors need to do more than just complain about how difficult it is to be a military commander and dismiss as irrelevant how civilians think about fundamental principles of IHL. They need to tell us what a properly-designed system of compliance assessment would look like.

UPDATE: Janina Dill has posted her own response at Just Security. It’s excellent; interested readers should definitely check it out.

http://opiniojuris.org/2015/03/26/so-how-do-we-assess-proportionality-a-response-to-blank-corn-and-jensen/

18 Responses

  1. Great post Kevin. Like you, I think the authors completely miss the point of Janina’s survey. It is a standard technique in ethics/moral philosophy to expose people to rather stark and unrealistic hypos in order to make them explain their moral intuitions, and then attempt to rationalize that intuitive judgment in some kind of wider moral framework (e.g. utilitarianism). This is essentially what the poll does, and it doesn’t purport to do anything else. Another interesting thing about the poll (the results of which I look forward to seeing) is how it tries to get at the ad bellum/in bello divide by having the respondents go through the same exercise from a Hamas and from an Israeli standpoint, the point of which (I think) is to establish whether the respondents’ moral intuitions in making the proportionality assessment depends on what they think of the justness of the overall cause of the attacker.

  2. I don’t think that their emphasis on facts known at the time is improper, but, yes, that must usually involve consideration of the event afterwards, and circumstantial evidence. Geoffrey has written about this type of focus before and has testified before the ICTY regarding artillery firepower.
    What I find disturbing is the reference merely to “military advantage” — it sounds like the old, denounced kriegsraison theory as opposed to the traditional “military necessity” test. GPI attempts to change that to a “concrete and direct” military advantage test — and note how they left out the two qualifiers in GPI’s language.

  3. I started to answer Prof. Dill’s survey questions, but I stopped because I don’t think that the questions give sufficient background information to give a responsible answer. Also, I don’t agree with her description of the law of proportionality in the preface to her survey.

  4. Moreover, from merely a moral perspective I believe that their “multi-faceted” set of criteria are useful. In “Operationalizing Self-Defense,” the following is offered (see http://ssrn.com/abstract=2459649 ):
    As noted in another writing with respect to nuanced and contextually attentive application of the principles of reasonable necessity, proportionality, and distinction during use of drones for self-defense targetings, one should consider all relevant features of context, including
    [1] identification of the target (e.g., as a DPAA, combatant, fighter with a continuous combat function, or DPH as opposed to a non-targetable civilian); [2] the importance of the target; [3] whether equally effective alternative methods of targeting or capture exist; [4] the presence, proximity, and number of civilians who are not targetable; [5] whether some civilians are voluntary or coerced human shields; [6] the precision in targeting that can obtain; and [7] foreseeable consequences with respect to civilian death, injury, or suffering.
    A 2007 U.S. Joint Chiefs of Staff publication on targeting had offered a six-step decisional and review process in general language:
    (1) identification of the military objective or an operation, (2) target development and prioritization, (3) capabilities analysis, (4) commander’s decision and force assignment, (5) mission planning and force execution, and (6) assessment.

  5. Thanks for an interesting post. The whole issue is very complicated, and yet:

    1) Many scholars in this field try to draw an analogy between criminal law and war or international law on the question of justification for action taken. That is a mistake, of course! In criminal law, one gets no justification if one has knowingly placed oneself, in advance, into criminal behavior. So stress on the battlefield, risking one’s life, and so forth cannot grant a commander any justification after the fact from a criminal point of view. Yet:

    2) One may argue that this is not entirely fair or just, since every commander is also a soldier who must obey orders, whether those of his superior or, above all, the political authority. In such a case:

    3) We need to consider whether at least some liability should also be shifted to politicians. If so, they would think twice before any military action, and we would have fewer victims by all means. It is ultimately their call, for which commanders on the battlefield may pay heavily. And indeed:

    4) What is actually considered at war (generally speaking) is not the aggregate damage but the concrete decision-making of commanders in the very heart of a specific engagement. If politicians also became liable for commanders’ decisions, it would reduce the whole aggregate damage of war, which sometimes falls so heavily on civilians.

    Thanks

  6. Courts grapple all the time with trying to work out what an accused knew and was thinking based on imperfect evidence. What is important, however, is the court is nonetheless endeavouring to work out what the accused knew and was thinking — in other words, it is applying the correct test.

    It is also worth noting the IHL / ICL distinction re proportionality. For State responsibility under IHL, the test is ‘excessive’. For individual responsibility under ICL (at least under the Rome Statute), the test is ‘clearly excessive’. This certainly assists in drawing inferences about an accused’s state of mind. In certain cases it will also mean that it is in the accused’s own interests to put certain facts and assumptions into evidence.

    At least some militaries have detailed processes for conducting and recording targeting decisions, addressing both the decision on whether the intended object of attack is a lawful target and the ‘inputs’ into the proportionality decision. It would be an interesting research question to look at whether a State or commander would (or should) be liable for not using reasonably available processes and tools when making targeting decisions.

  7. Ian Henderson ,

    A great deal of your questions and wonderings depends upon technology: more accurate, real-time intelligence can definitely make a difference. Can a state, or commander, be blamed for lacking sufficient technology (in light of standards set up by Israel and the US, for example)? That is a hell of a question. Thanks

  8. Got to agree with Professors Blank, Corn and Jensen. I appreciate that such surveys are designed only to capture public perceptions. The survey I have seen appears to perpetuate rather incorrect perceptions of very complex issues. To a lawyer, the description of the proportionality balance is simply wrong; to a military operator, the scenarios may well appear oversimplified to the point of becoming unrealistic. This makes not only the questions but also the potential answers somewhat irresponsible, as noted by another commentator. Finally, it does not assist in any way in disseminating the fundamentals of IHL to the wider public, even if only as a by-product of the academic work.

  9. el roam,

    Technology helps, but a lot of it can be done on a piece of paper. What do you currently know? What do you infer from what you know? What did you try to find out but were unable to? What factors did you then take into account?

    A forward observer with binoculars and a radio is a great intelligence asset. The law does not specify any more than that. Even the binoculars and radio are optional. This has to be the case, as not only do some militaries not have access to GPS, powerful software tools, etc., but those militaries that do still need to be able to fight when the enemy turns off the electricity. And ultimately I think that is part of what Blank, Corn and Jensen were saying.

  10. Ian Henderson ,

    One should consider the modern fight against terror within urban areas and among civilians as a major issue in modern war, and in how international law handles it.

    In such a fight, technology is a far greater tool than just “helping one,” as you have noted. Differentiating between civilian targets and military ones becomes very, very elusive, and can finally define or make the difference between killing innocent civilians and killing military personnel.

    Terrorists improvise their moves, constantly grouping and re-grouping, all in urban areas. So you don’t fight against a specific and well-organized military unit, observed clearly as such. So real-time discretion is very critical.

    Thanks

  11. el roam,

    I agree, although I think we are now talking about target distinction. For the proportionality analysis, I suggest the starting point at that stage of the targeting decision is to presume the target is valid and then try to work out the risk of injury to civilians and civilian objects if you attacked that target.

  12. Here is my response to Blank et al. on Just Security, in case you are interested. Good discussion!

    http://justsecurity.org/21529/meaning-proportionate-collateral-damage-care-civilians/

  13. By good discussion I meant on Kevin’s post!

  14. Janina: using a phrase like “the military advantage” is more like kriegsraison than GPI, art. 51(5)(b) “concrete and direct military advantage” — which does not even reflect the traditional “military necessity” test.

  15. The difficulty is that the survey questions are very much analogous to asking the following: “Dave killed John. Is he guilty of murder?”

    The answer depends on external information simply not provided in the hypothetical, and not including “I don’t know, I need more information” as an answer choice – thereby forcing respondents to say “yes” or “no” – is a significant methodological flaw in the survey (from a survey integrity perspective, not merely a legal perspective).

    For instance, absent an “I don’t know” option, one might choose to assume that there were no other relevant factors, and simply answer “no, it’s disproportionate”. Or, one might assume that the question was “can such an attack be proportionate under any circumstances” and therefore respond “yes, it is proportionate”. Because of the absence of an “I don’t know/need more information” option, it is impossible to tell which respondents, if any, included such idiosyncratic readings (or other readings) when faced with the forced choice among the options.

    Here’s how one scholarly article on survey research put it:

    “A forced-choice rating scale will bias results by eliminating the undecideds and/or those with no opinion. Some researchers will purposely leave out the response choice of “undecided,” “no opinion,” “uncertain,” or “don’t know.” This approach may be reasonable when the researcher has good reason to believe that virtually all subjects have an opinion and you do not want them to “cop out” by indicating they are uncertain. What happens if many subjects are indeed undecided and we do not allow them the option of no opinion? Most will probably select a rating from the middle of the scale, e.g., “average” or “fair.” This will cause two biases: (a) it will appear that more subjects have opinions than actually do (b) the mean and median will be shifted toward the middle of the scale. (The “undecided” category is not part of the scale.)

    Researchers have found that the public will express their opinions even on “fictitious” issues. Respondents have expressed opinions regarding nonexistent issues such as the Metallic Metals Act (Payne 1951, p.18). Hawkins and Coney (1981) found that respondents would indicate their opinions on various phony topics such as the National Bureau of Consumer Complaints, the proposed Religion Verification Act, etc. They also found that the number of responses to a fictitious issue was affected by the presence of a “don’t know” response category. Providing a “don’t know” choice significantly reduced the number of meaningless responses.”

    Friedman, H.H. & Amoo, T. (Winter, 1999) Rating the Rating Scales, Journal of Marketing Management, Vol. 9:3, 114-123. Retrieved from http://academic.brooklyn.cuny.edu/economic/friedman/rateratingscales.htm

  16. In other words, Dill’s methodological approach forced an appearance of certainty on her results that simply may not exist in the real world. Indeed, it would be far more meaningful to say “60% believe X is disproportionate, 27% believe X is proportionate, and 13% don’t know or need more information before deciding whether X is disproportionate” (to choose hypothetical results) than to say “72% believe X is disproportionate, while 28% believe X is proportionate” without any indication of how many of the people in each category may have, in reality, been wholly uncertain and simply chosen an option at random because the survey forced them to do so.

    (As an alternative, a “how strongly do you believe that” rating scale might have been useful to resolve that flaw.)
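    The arithmetic behind this objection can be sketched with a small simulation. The percentages below are entirely hypothetical (they are not Dill’s actual data): suppose 40% of respondents genuinely believe an attack disproportionate, 25% believe it proportionate, and 35% are genuinely undecided. If a forced binary choice makes the undecided pick an answer at random, the reported split overstates settled opinion:

    ```python
    import random

    random.seed(0)  # fixed seed so the illustration is reproducible

    # Hypothetical "true" distribution of opinion among 10,000 respondents
    N = 10_000
    true_dispro = int(N * 0.40)             # genuinely believe "disproportionate"
    true_pro = int(N * 0.25)                # genuinely believe "proportionate"
    undecided = N - true_dispro - true_pro  # 35% genuinely don't know

    # Forced-choice format: the undecided must still answer.
    # Model them as flipping a coin between the two options.
    forced_dispro = true_dispro + sum(random.random() < 0.5 for _ in range(undecided))
    forced_pro = N - forced_dispro

    print(f"Settled 'disproportionate' share: {true_dispro / N:.0%}")
    print(f"Reported 'disproportionate' share: {forced_dispro / N:.0%}")
    # The reported figure inflates from 40% to roughly 57-58%, and nothing
    # in the published split distinguishes settled from coin-flip answers.
    ```

    The same mechanism explains why adding a “don’t know” category (or a strength-of-belief scale) recovers information that a forced binary choice destroys.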

  17. Hi Akiva

    thanks for that thoughtful feedback. I hope that for an online survey the pressure to complete it is so minimal that people who simply don’t know or need more information feel free not to finish the survey. They also have the opportunity to say that they felt they didn’t actually know how to answer in the comment section of the survey. I understand and fully respect that not everyone has clear views on the matter. It would have been independently interesting to see how many people don’t know what to think or would require more information in order to come to a conclusion, I agree, but that is not the point of the survey. I have written about why I do not provide more information in the scenarios on Just Security. Thanks again!

  18. Janina,

    My pleasure. As I said, I think the general goal you’re aiming at is a significant one, so I’d love to see results that I’d consider meaningful. I agree that you can’t give too much information in a survey like this; I just wish you’d included some means of dealing with (what appears to me to be) the flaw of the forced response format – whether that was a “I don’t know/need more information” option, a “on a scale of 1-10, how strongly do you feel that” follow up after each question, or even an open-response “why” after each question.

    But it’s your research, not mine, after all!
