Symposium on Legal, Operational, and Ethical Questions on the Use of AI and Machine Learning in Armed Conflict (27 April)
Artificial intelligence (AI) and machine learning tools are already in use to help identify targets on the battlefield, and they might soon power new types of cyber and autonomous weapons. These technologies could have profound implications for the role of humans in armed conflict, and important choices lie ahead. Among the most pressing – both for compliance with international humanitarian law (IHL) and for retaining a measure of humanity in warfare – will be ensuring human control and judgement in AI-enabled tasks and decisions that pose risks to human life, liberty, and dignity.
The International Committee of the Red Cross (ICRC) set out its perspective on AI and machine learning in armed conflict in a June 2019 position paper, a shorter version of which appeared in the latest ICRC report on International Humanitarian Law and the Challenges of Contemporary Armed Conflicts. In this blog symposium, several experts use the ICRC’s position as a starting point for a conversation on AI and machine learning in armed conflict.
Here is a running list of posts in this symposium:
Nadia Marsan, Confronting Complexity through Collective Defence