Workshop 7: Death by Algorithm—The Frontier of Algorithmic Killing

Organizers

Sune With Tryk Christensen, Aarhus University, Denmark

Sune With Tryk Christensen is a Ph.D. fellow at the Department of Philosophy and the History of Ideas at Aarhus University, Denmark. His PhD project, entitled “Death by Algorithm”, investigates how autonomous weapons systems change the conceptualization of war, their effects on the moral character of soldiers, and our perception of the enemy.

Morten Dige, Aarhus University, Denmark

Morten Dige works in the fields of bioethics, professional ethics, research ethics, and the ethics of war. In the latter field, he has published articles on drone killing, robotic warfare, torture, the principle of mala in se, and pacifism vs. just war.

Abstract

A new weapon is emerging on the modern battlefield. A weapon that radically changes the way wars can be fought. A weapon that challenges the notion of human control over technology and transforms the way decisions on the battlefield can and will be made. Governed by sophisticated artificial intelligence, machines can observe, search for, loiter, engage, and destroy targets without human intervention. Once deployed, the Autonomous Weapons System (AWS) “decides” who lives to see another day. This workshop focuses on Autonomous Weapons Systems at the frontier of automated killing and on whether and how such systems can be regulated or used responsibly. The workshop brings together experts with insights into the current use of AWS, International Humanitarian Law, and the ethics of war. These scholars will constitute an interdisciplinary forum for discussing principles such as “meaningful human control”, transparency, distinction, and proportionality in decisions on life and death.

Keywords: Artificial Intelligence, Ethics of war, International Humanitarian Law, Autonomous Weapons Systems, Automated killing, risk-free war, Just War Theory.


Speaker

Ingvild Bode, University of Southern Denmark, Denmark

Dr Ingvild Bode is Professor at the Center for War Studies, University of Southern Denmark. Her research focuses on processes of normative and policy change, especially regarding the use of force. She is the Principal Investigator of the European Research Council-funded project AutoNorms: Weaponised Artificial Intelligence, Norms, and Order (08/2020-07/2025). Ingvild also serves as the co-chair of the IEEE Research Group on Issues of AI and Autonomy in Defence Systems. Her work has been published in the European Journal of International Relations, Ethics and Information Technology, Review of International Studies, International Studies Review, and other journals. Ingvild’s most recent book, Autonomous Weapons and International Norms (co-authored with Hendrik Huelss), was published by McGill-Queen’s University Press in 2022.

How use makes norms: Integrating autonomous and AI technologies in weapon systems

A March 2021 United Nations report argued that the Kargu-2, a one-way attack drone, had been used to strike militias in Libya autonomously. In the war in Ukraine, both sides have used similar types of drones that appear to have the latent technical capability to identify, track, select, and strike targets autonomously. Israel’s use of AI systems to generate target lists for attacks in the Gaza Strip is well documented. These examples underscore a trend toward the deployment of autonomous and AI technologies in warfare. While often discussed under the umbrella term of autonomous weapon systems (AWS), most current systems used in targeting decision-making appear to be operated with humans ‘in the loop’ to authorise attacks. But the quality of control and agency that humans can exercise is already compromised by complex tasks and the expected speed of decision-making. I examine how this diminishing quality of human control results from a governance gap for AI in the military domain. In the absence of top-down governance, I argue that the use of autonomous and AI technologies in weapon systems makes norms. Practices of designing, training personnel for, and using such weapon systems in targeting decision-making shape a social norm, defining what is considered an “appropriate” level of human control. This emerging norm accepts a diminished form of human control, thereby undercutting human agency in warfare. While there is a growing number of governance initiatives on AI in the military domain, these initiatives do not necessarily scrutinise this emerging norm.


Speaker

Neil Renic, University of Copenhagen

Neil Renic, PhD, is a Researcher at the Centre for Military Studies at the University of Copenhagen. He is also a Fellow at the Institute for Peace Research and Security Policy at the University of Hamburg and a member of the International Committee for Robot Arms Control (ICRAC). Neil is a specialist on the changing character and regulation of armed conflict and on emerging and evolving military technologies such as armed drones and autonomous weapons. He is the author of Asymmetric Killing: Risk Avoidance, Just War, and the Warrior Ethos (Oxford University Press, 2020). Neil’s work has also appeared in journals such as the European Journal of International Relations, Ethics and International Affairs, International Relations, Survival, and the Journal of Military Ethics.

Crimes of Dispassion: Autonomous Weapons and the Moral Challenge of Systematic Killing

Systematic killing has long been associated with some of the darkest episodes in human history. Increasingly, however, it is framed as a desirable outcome in war, particularly in the context of military AI and lethal autonomy. Autonomous weapons systems, defenders argue, will surpass humans not only militarily but also morally, enabling a more precise and dispassionate mode of violence, free of the emotion and uncertainty that too often weaken compliance with the rules and standards of war. We contest this framing. Drawing on the history of systematic killing, we argue that lethal autonomous weapons systems reproduce, and in some cases intensify, the moral challenges of the past. Autonomous violence incentivises a moral devaluation of those targeted and erodes the moral agency of those who kill. Both outcomes imperil essential restraints on the use of military force.

Speaker

Iben Yde, Royal Danish Defence College, Denmark

Iben Yde is Head of the Center for Operational and International Law at the Royal Danish Defence College. She is responsible for IHL training and education of Danish military personnel as well as for research in the area of international law. Her own research focuses on the legal implications of new technologies, including autonomous weapons systems, artificial intelligence, and electronic warfare. In 2021 she edited and wrote part of the anthology Smart Krig – Militær anvendelse af kunstig intelligens (Djøf Forlag; “Smart War – Military Use of Artificial Intelligence”), the first Danish publication on the operational, strategic, ethical, and legal opportunities and challenges of military uses of AI. Additionally, Iben serves as an advisor to the Danish Ministry of Defence in various international fora, including the NATO Data and AI Review Board, and contributes to track II dialogues on military AI as a subject matter expert on international law.

From Principle to Practice

The existing legal framework for military operations during armed conflicts, international humanitarian law (IHL), contains a set of obligations aimed at protecting civilians and mitigating the suffering of those exposed to armed conflict. While there is broad agreement that the technology-neutral targeting rules of IHL apply to autonomous and AI-enabled weapons systems, it is less clear what is needed to ensure compliance with this body of law in practice. Increased autonomy and reliance on machine learning and other AI disciplines in the critical targeting functions of weapons systems inevitably create new types of operational and legal risk, which will most likely multiply as machine and conflict complexity increase. This paper argues that the highly context-specific nature of IHL obligations calls for a risk-based approach to governance and for comprehensive, rigorous risk mitigation measures tailored to the specific needs of each discrete system and its circumstances of use, something that neither can nor should be achieved through the enactment of new treaty law alone. On this basis, the paper identifies the latest examples of international and regional cooperation around the creation and practical implementation of broader responsible AI frameworks and discusses their potential contribution to enhancing IHL compliance.


Speaker

Sune With Tryk Christensen, Aarhus University, Denmark

Sune With Tryk Christensen is a Ph.D. fellow at the Department of Philosophy and the History of Ideas at Aarhus University. Sune has disseminated his knowledge on the subject of autonomous weapons systems and the ethics of war on national radio on several occasions.

Just War in Disarray

Does risk-free war make any sense? Traditional Just War Theory, the theory of justified war, rests, among other things, on an underlying logic of reciprocity among combatants. War entails the risk of losing one’s life, and since combatants are obliged to obey unto death, they have the right to self-defence and the right to kill enemy combatants. The Rules of Armed Conflict and International Humanitarian Law stem from the fundamental principles of Just War Theory. When combatants are removed from the battlefield and replaced by drones and other autonomous weapons systems, they are no longer in physical danger. They are also shielded from the psychological stress and moral damage of killing another human being. However, absurd as it might sound, this disturbs the fundamental principles behind the ethics of war and the notion of war as a social activity. This presentation carves out some of the dilemmas that arise when war is, on the one hand, supposed to be something “humans do to each other” as a social activity and, on the other, supposed to preserve the lives of both combatants and civilians. Dilemmas emerge when machines replace combatants while Just War Theory remains the underlying logic and paradigm for the Rules of Armed Conflict.