WORKSHOP 7 | Wednesday, August 21, 13:35 – 16:35 | Workshop Room 2 (1441-210)
Sune With Tryk Christensen is a Ph.D. fellow at the Department of Philosophy and the History of Ideas at Aarhus University, Denmark. His PhD project, entitled “Death by Algorithm”, investigates how autonomous weapons systems change the conceptualization of war, affect the moral character of soldiers, and shape our perception of the enemy.
Morten Dige works in the fields of bioethics, professional ethics, research ethics, and the ethics of war. In the latter field, he has published articles on drone killing, robotic warfare, torture, the principle of mala in se, and pacifism vs. just war.
A new weapon emerges on the modern battlefield. A weapon that radically changes the way wars can be fought. A weapon that challenges the notion of human control over technology and transforms the way decisions on the battlefield can and will be made. Governed by sophisticated artificial intelligence, machines can observe, search for, loiter, engage, and destroy targets without human intervention. Once deployed, the Autonomous Weapons System (AWS) “decides” who lives to see another day. This workshop focuses on Autonomous Weapons Systems at the frontier of automated killing and on whether and how such systems can be regulated or used responsibly. The workshop brings together experts with insights on the current use of AWS, International Humanitarian Law, and the ethics of war. These scholars will constitute an interdisciplinary forum for discussion of principles such as “meaningful human control”, transparency, distinction, and proportionality in decisions on life and death. Keywords: Artificial Intelligence, Ethics of war, International Humanitarian Law, Autonomous Weapons Systems, Automated killing, Risk-free war, Just War Theory.
Sune With is a Ph.D. fellow at the Department of Philosophy and the History of Ideas at Aarhus University. Sune has disseminated his knowledge on the subject of autonomous weapons systems and the ethics of war on national radio on several occasions.
Does risk-free war make any sense? Traditional Just War Theory, meaning “justified war”, rests, among other things, on an underlying logic of reciprocity among combatants. War entails the risk of losing your life, and since the combatant is under obligation to be obedient till death, the combatant has the right to self-defence and the right to kill the enemy’s combatants. The Rules of Armed Conflict and International Humanitarian Law stem from Just War Theory’s fundamental principles. When combatants are removed from the battlefield and replaced by drones and other autonomous weapons systems, they are no longer in physical danger. They are also shielded from the psychological stress and moral damage of killing another human being. However, this, absurd as it might sound, disturbs the fundamental principles behind the ethics of war and the notion of war as a social activity. This presentation carves out some of the dilemmas that arise when war is, on the one hand, supposed to be something “humans do to each other” as a social activity and, on the other, about preserving the lives of both combatants and civilians. Dilemmas emerge when machines replace combatants, while Just War Theory remains the underlying logic and paradigm for the Rules of Armed Conflict.
Rob Sparrow is a Professor in the Philosophy Program, and a Chief Investigator in the Australian Research Council Centre of Excellence for Electromaterials Science, at Monash University, where he works on ethical issues raised by new technologies. He has published on topics as diverse as the ethics of robotics, the moral status of AIs, human enhancement, stem cells, preimplantation genetic diagnosis, xenotransplantation, and migration. He is a co-chair of the IEEE Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.
The Israeli use of an AI system called Lavender to identify targets for bombing in Gaza might plausibly be held to mark the beginning of the age of “Minotaur Warfare”. Paul Scharre has argued that the future of war will be dominated by so-called “centaur warfighting”: teams of robots directed by human beings. The image of the centaur, a mythical beast with the body of a horse and the head and upper torso of a man, emphasises that a human being will be in charge of the manned-unmanned teams of the future. In a paper published in Parameters in 2023 [https://press.armywarcollege.edu/parameters/vol53/iss1/14], Adam Henschke and I argued that the future of war is more likely to involve what we call Minotaur warfighting: teams of human beings directed by AI. The Minotaur was a mythological beast with the body of a man and the head of a bull. The image of the Minotaur highlights that the manned-unmanned teams of the future are more likely to have a “monstrous” head than a monstrous body. The use of Lavender in Gaza provides at least some evidence that we were right.
In this presentation, I rehearse the considerations that led us to conclude that Minotaur warfighting will outcompete centaur warfighting in the wars of the future and offer some initial thoughts on the ethics of putting AI in effective command of human warfighters. In particular, I argue that pressure to adopt Minotaur warfighting is likely to move some military ethicists to reconsider their attitudes towards autonomous weapon systems.
Dr Ingvild Bode is Professor at the Center for War Studies, University of Southern Denmark. Her research focuses on processes of normative and policy change, especially regarding the use of force. She is the Principal Investigator of the European Research Council-funded project AutoNorms: Weaponised Artificial Intelligence, Norms, and Order (08/2020-07/2025). Ingvild also serves as the co-chair of the IEEE Research Group on Issues of AI and Autonomy in Defence Systems. Her work has been published in the European Journal of International Relations, Ethics and Information Technology, Review of International Studies, International Studies Review, and other journals. Ingvild’s most recent book, entitled Autonomous Weapons and International Norms (co-authored with Hendrik Huelss), was published by McGill-Queen’s University Press in 2022.
A March 2021 United Nations report argued that the Kargu-2, a one-way attack drone, has been used to strike militias in Libya autonomously. In the war in Ukraine, both sides have used similar types of drones that appear to have the latent technical capability to identify, track, select, and strike targets autonomously. Israel’s use of AI systems to generate target lists for attacks in the Gaza Strip is well documented. These examples underscore a trend toward the deployment of autonomous and AI technologies in warfare. While often discussed under the umbrella term of autonomous weapon systems (AWS), most current systems used in targeting decision-making appear to be operated with humans ‘in the loop’ to authorise attacks. But the quality of control and agency that humans can exercise is already compromised by the complexity of the tasks and the expected speed of decision-making. I examine how this diminishing quality of human control results from a governance gap of AI in the military domain. In the absence of top-down governance, I argue that the use of autonomous and AI technologies in weapon systems makes norms. Practices of designing, training personnel for, and using such weapon systems in targeting decision-making shape a social norm, defining what is considered an “appropriate” level of human control. This emerging norm accepts a diminished form of human control, thereby undercutting human agency in warfare. While there is a growing number of governance initiatives on AI in the military domain, these initiatives do not necessarily scrutinise this emerging norm.
Dr Elke Schwarz is Reader (Associate Professor) in Political Theory at Queen Mary University of London. Her research focuses on the intersection of the ethics of war and the ethics of technology, with an emphasis on unmanned and autonomous/intelligent military technologies and their impact on the politics of contemporary warfare. She is the author of Death Machines: The Ethics of Violent Technologies (Manchester University Press), a Fellow of the Royal Society of Arts, and a member of the International Committee for Robot Arms Control (ICRAC). She is also a 2022/23 Fellow at the Center for Apocalyptic and Post-Apocalyptic Studies (CAPAS) in Heidelberg and a 2024 Leverhulme Research Fellow with a project on the politics of Apocalyptic Artificial Intelligence.
Systematic killing has long been associated with some of the darkest episodes in human history. Increasingly, however, it is framed as a desirable outcome in war, particularly in the context of military AI and lethal autonomy. Autonomous weapons systems, defenders argue, will surpass humans not only militarily but also morally, enabling a more precise and dispassionate mode of violence, free of the emotion and uncertainty that too often weaken compliance with the rules and standards of war. We contest this framing. Drawing on the history of systematic killing, we argue that lethal autonomous weapons systems reproduce, and in some cases intensify, the moral challenges of the past. Autonomous violence incentivises a moral devaluation of those targeted and erodes the moral agency of those who kill. Both outcomes imperil essential restraints on the use of military force.
Iben Yde is Head of the Center for Operational and International Law at the Royal Danish Defence College. She is responsible for IHL training and education of Danish military personnel as well as research in the area of international law. Her own research focuses on the legal implications of new technologies, including autonomous weapons systems, artificial intelligence, and electronic warfare. In 2021 she edited and wrote part of the anthology Smart Krig – Militær anvendelse af kunstig intelligens (Smart War – Military Use of Artificial Intelligence; Djøf Forlag), the first Danish publication on the operational, strategic, ethical, and legal opportunities and challenges of military uses of AI. Additionally, Iben serves as an advisor to the Danish MoD in various international fora, including the NATO Data and AI Review Board, and contributes to track II dialogues on military AI as a subject matter expert on international law.
The existing legal framework for military operations during armed conflicts, international humanitarian law (IHL), contains a set of obligations aimed at protecting civilians and mitigating the suffering of those exposed to armed conflict. While there is broad agreement that the technology-neutral targeting rules of IHL apply to autonomous and AI-enabled weapons systems, it is less clear what is needed to ensure compliance with this body of law in practice. Increased autonomy and reliance on machine learning and other AI disciplines in the critical targeting functions of weapons systems inevitably create new types of operational and legal risks, which will most likely multiply as machine and conflict complexity increase. This paper argues that the highly context-specific nature of IHL obligations calls for a risk-based approach to governance and for comprehensive and rigorous risk mitigation measures tailored to the specific needs of each discrete system and its circumstances of use, something that neither can nor should be achieved through the enactment of new treaty law alone. On this basis, the paper proceeds to identify the latest examples of international and regional cooperation around the creation and practical implementation of broader responsible AI frameworks and to discuss their potential contribution to enhancing IHL compliance.
Neil Renic, PhD, is a Researcher at the Centre for Military Studies at the University of Copenhagen. He is also a Fellow at the Institute for Peace Research and Security Policy at the University of Hamburg and a member of the International Committee for Robot Arms Control (ICRAC). Neil is a specialist on the changing character and regulation of armed conflict and on emerging and evolving military technologies such as armed drones and autonomous weapons. He is the author of Asymmetric Killing: Risk Avoidance, Just War, and the Warrior Ethos (Oxford University Press, 2020). Neil’s work has also appeared in journals such as the European Journal of International Relations, Ethics and International Affairs, International Relations, Survival, and the Journal of Military Ethics.
Technology has always played a vital role in the Western conception of itself. It has, first, functioned as an identifier: technological mastery was, and continues to be, drawn upon as a marker of “civilized peoples” and “civilized warfare”. Today, we see this expressed through the Western emphasis on technologically enabled battlefield precision and “humanely” delivered violence. Technology has also functioned as a safeguard: technological advancement has been valued by Western actors as a way to militarily contest and dominate others, including those deemed “uncivilized”. Though linked through a shared assumption of civilizational superiority, these twin features of technology have historically produced a tension between violence with limits and violence without. In this article, I evaluate both features of technology in relation to the ongoing pursuit of military AI and autonomous weapons. Imaginaries of Western civilization, I argue, infuse the framing of military AI as a humanizer of war, while also giving license to its unrestrained development and use.