Killer Robots, Illusionists in Forming the Future ICL

[Lily Zanjani is a paralegal at the International Criminal Court and a rule of law intern and project assistant at the International Centre for Counter-Terrorism. She holds an LL.M. from Tilburg University and is currently completing an Advanced LL.M. in Public International Law at Leiden University. The views expressed in this article are the author’s alone.]

Whether international law can keep pace with the evolution of Autonomous Weapons Systems (AWS) is a contemporary challenge. Given the sharp rise in the deployment of AWS by the US, the UK, China, Israel, South Korea and Russia, questions have arisen concerning legal accountability for the crimes committed by these systems. This article addresses whether individuals can be held criminally responsible under international law, pursuant to the Rome Statute, for the misconduct of AWS. In doing so, it reveals whether current international law is adequate to address the paradigm shifts and modern challenges posed by the emergence of AWS.

In general, Autonomous Weapons Systems have been defined as:

“Any weapon system with autonomy in its critical functions – that is, a weapon system that can select, search for, detect, identify, track and attack, use force against, neutralize, damage or destroy targets without human intervention.”

Sharkey suggests five categories to delineate the autonomy of such weapons:

“1. human engages with and selects targets and initiates any attack; 2. Program suggests alternative targets and human chooses which to attack; 3. Program selects target and human must approve before attack; 4. Program selects target and human has restricted time to vet; and 5. Program selects target and initiates attack without human involvement”.

This taxonomy is useful for determining criminal liability when such weapons are utilized. Under international law, a fully autonomous AWS corresponds to level 5 on Sharkey’s spectrum: apart from its initial deployment, the weapon operates entirely on its own.

Rome Statute and the Discourse of AWS’ Criminal Liability

Article 8 of the Rome Statute provides for individual criminal liability for the war crimes of: 1. wrongfully causing great suffering during an international armed conflict; and 2. launching disproportionate attacks during an international armed conflict.

According to article 28 of the Rome Statute, a military commander is criminally responsible for crimes committed by forces under his or her effective control. In order to prove the commander’s guilt, it must be shown that he or she knew or should have known that a crime was about to be committed and failed to take all necessary and reasonable measures to prevent, repress and punish it.

This raises at least four legal challenges with respect to killer robots: 1. establishing effective control over killer robots; 2. proving the mental element of “knew or should have known” for a war crime committed by a killer robot; 3. the commander’s responsibility to prevent, repress and punish a robot; and 4. tracing an AWS’ misconduct back to the commander.

Accordingly, if a fully autonomous killer robot selects and attacks a target without any human intervention, can it nevertheless be concluded that a human is in some way criminally responsible and should therefore go to jail?

On the other hand, some assessments point to the hurdle of providing “retributive justice to victims” even where the obstacles to accountability have been overcome. Holding a perpetrator accountable serves the retributive function of criminal law: it provides victims with the satisfaction of knowing that a responsible party was condemned and punished for the harm they endured, while also helping to avoid collective responsibility and to promote reconciliation.

Moreover, criminal responsibility and accountability serve deterrence: punishing past misconduct seeks to deter future misconduct by sanctioning perpetrators and raising awareness of the consequences. The responsibility to deliver “deterrence and retribution” cannot be transferred from humans to robots, as this would not serve the purpose and objectives of criminal justice. Killer robots cannot learn from an unjust crime they have committed. By prosecuting an object that possesses no mental capacity, the objectives of justice would therefore not be fully served.

Another existing legal gap is that “neither criminal law nor civil law” provides an efficient procedure for addressing the liability and accountability of robots. This goes beyond partial human use of AWS (levels 1, 2 and 3 on Sharkey’s spectrum) and extends to accountability for fully autonomous killer robots.

This legal vacuum, the absence of a proper mechanism for assessing the criminal accountability of killer robots, concerns both direct and indirect responsibility under international criminal law.

One reason to claim that current international criminal law is inadequate to answer the accountability questions raised by AWS is the absence of intent, the mental element required for the execution of wrongful acts, since AWS possess no mental state.

Accordingly, it is hard to prove that the commander behind the AWS intended to commit particular criminal acts. What if an AWS commits a criminal act as a result of a systematic error, contrary to the intention of a commander who took all precautions into account? Direct command responsibility for the misconduct of fully autonomous AWS (Sharkey’s fifth level) therefore cannot be attributed to a commander or operator unless the criterion of intent is met.

Another issue might relate to the manufacturer’s or programmer’s lack of the situational understanding required or expected of a commander. This raises questions of civil accountability, in particular the civil liability and corporate responsibility of the manufacturer.

Concerning the crimes an AWS can commit, the Rome Statute requires two elements for an act to constitute a crime: actus reus and mens rea. Since AWS are capable of committing criminal acts, the actus reus element can be met. The mens rea element, however, is hard to prove when an AWS commits the crime. How could the existence of intent and a mental state be proven in the “mind” of an AWS? Alternatively, how can the mens rea element be met with respect to the commander in charge of the AWS? For criminal liability to exist, “moral agency [and intention should] accompany the commission of criminal acts”.

Direct Responsibility

The first accountability problem is direct liability. Under direct responsibility, the direct perpetrator, or other individuals involved in the commission of the crime, can be held accountable. This is reflected in the ICTY Statute, under which the “person who planned, instigated, ordered, committed, or otherwise aided and abetted in the planning, preparation and execution of a crime” is considered directly liable for its commission.

As emphasized above, article 28 of the Rome Statute also refers to “individuals” and “natural persons”. This means that the subjects of assessment under these provisions are natural persons rather than robots; it has not yet been established that actors such as robots can be subjects of international law.

As mentioned above, the absence of the mens rea element is one reason. A second concerns the lack of criminal jurisdiction over any subjects other than natural persons, who are considered the only subjects capable of “intentionally committing crimes”. Thirdly, contrary to the purpose of criminal justice, AWS would not benefit from any punishment, since they “lack capacity for moral autonomy [hence the] responsibility”. A human can therefore be held directly accountable for the criminal conduct of an AWS only if he or she deployed the robot with the intention of committing that specific crime.

Moreover, in this scenario the commander and the subordinate are not the only ones involved; the operator and the programmer also play a role. It would therefore be hard to determine whose direct responsibility was operative in the robot’s criminal act.

Indirect Responsibility

The second issue concerns indirect, or command, liability for an AWS’ criminal act. This arises from a commander’s failure to prevent or punish a subordinate’s act. Such a failure to take the necessary measures is meaningful only for a superior who has effective control and is aware that the conduct is criminal; this is what is referred to as a crime of omission. The two elements required to establish indirect responsibility are knowledge and effective control.

How can a commander’s effective control over, and knowledge of, a robot committing a crime be established? Since robots lack mental capacity, and subordinates can only be charged with a criminal offence once the mens rea requirement is fulfilled, it would be hard to establish effective control over killer robots. This also requires the subordinate to have fully committed the crime rather than leaving it inchoate.

The second issue arising from the use of AWS in relation to command responsibility is the ambiguity and insufficiency of the commander’s constructive knowledge of the omitted crime. Such information is critical for establishing the mens rea of indirect responsibility, since it is what makes prevention, intervention and punishment by the commander possible; it includes information about past offences. Where an AWS meeting Sharkey’s fifth level of autonomy commits a crime, its commander would most likely not be sufficiently alerted to the crime’s occurrence. A commander cannot be expected to be adequately informed of the risks of a subordinate’s conduct, and liability cannot be said to attach where the commander was never alerted to the existence of potential risks in the first place. This presumes an AWS operating in full autonomy, without any human in the loop, in accordance with Sharkey’s fifth level.

The next question is whether such constructive knowledge can be generalized across all functioning robots and AWS. Given the uncertainty attributable to these robots, the knowledge gained from their past activities is neither reliable nor generalizable, so command responsibility cannot apply to such scenarios. Moreover, since there is a range of different degrees of autonomy (as set out in Sharkey’s spectrum above), generalizing any such knowledge would be unwise.

Lastly, as mentioned above, effective control is a requirement for command responsibility: the “material ability to control the actions of subordinates is the touchstone of individual responsibility”. The “material ability to prevent or punish criminal conduct” is a further requirement. This is hard to prove in the case of AWS, whose associated uncertainties make it difficult for a commander to be aware of the AWS’ risks and calculations so as to prevent or punish the subordinate (the robot). Therefore, “the possession of de jure power in itself may not suffice for the finding of command responsibility if it does not manifest in effective control”.


In conclusion, a more robust and constructive normative model is needed to regulate the conduct of AWS, and it is important to start this conversation as soon as possible. This article has demonstrated how current international law falls short of addressing the legal challenges arising from the development of AWS. One suggested solution is to fill the accountability gap through civil accountability: holding the manufacturer liable for the misconduct of AWS and for its due diligence under its corporate social responsibility. The article has also illustrated the distributed accountability of States and other actors involved in an AWS’ criminal act, and scrutinized the reasons why individual accountability cannot be a just instrument for bringing the conduct of AWS to justice. The overall conclusion rests on the absence of the mens rea element in crimes committed by AWS. Future research can take up these questions in greater depth by addressing the civil accountability and social responsibility of the manufacturer.
