Symposium on Legal, Operational, and Ethical Questions on the Use of AI and Machine Learning in Armed Conflict

Artificial intelligence (AI) and machine learning tools are already being used to help identify targets on the battlefield, and they may soon power new types of cyber and autonomous weapons. These technologies could have profound implications for the role of humans in armed conflict, and important choices lie ahead. Among the most pressing – for compliance with international humanitarian law (IHL) and for retaining a measure of humanity in warfare – will be ensuring human control and judgement in AI-enabled tasks and decisions that pose risks to human life, liberty, and dignity.

The International Committee of the Red Cross (ICRC) set out its perspective on AI and machine learning in armed conflict in a June 2019 position paper, a shorter version of which appeared in the latest ICRC Report on International Humanitarian Law and the Challenges of Contemporary Armed Conflict. In this blog symposium, several experts use the ICRC’s position as a starting point for a conversation on AI and machine learning in armed conflict.

Here is a running list of posts in this symposium:

ICRC, Artificial Intelligence and Machine Learning in Armed Conflict

Dustin A. Lewis, Why Detention, Humanitarian Services, Maritime Systems and Legal Advice Merit Greater Attention

Nadia Marsan, Confronting Complexity through Collective Defence

Elke Schwarz, Humanity-Centric AI for Armed Conflict: A Contradiction in Terms?
