WORKSHOP 10 | Thursday, August 22, 13:35 – 17:10 | Workshop Room 2 (1441-210)
David J. Gunkel (PhD Philosophy) is an award-winning educator, researcher, and author, specializing in the philosophy of technology with a focus on the moral and legal challenges of artificial intelligence and robots. He is the author of over 110 scholarly articles and has published seventeen books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Robot Rights (MIT Press 2018), Person, Thing, Robot: A Moral and Legal Ontology for the 21st Century and Beyond (MIT Press 2023), and Handbook on the Ethics of AI (Edward Elgar 2024).
The workshop is designed to address the prospects for advancing research in practical philosophy. The papers and presentations will examine various ways in which the simulatory capacities of AI-driven social robots strengthen our inclination to attribute intentions, emotions, consciousness, and the like to them, and will consider how this attribution not only will necessitate but is already necessitating a range of real-world practical solutions to questions of robot moral status and rights. To address these issues, the workshop assembles a team of researchers from across the globe and from different disciplines, who bring a wide range of viewpoints and methods of investigation to the conversation. In doing so, the workshop will stage a wide-ranging discussion of the practical aspects of robot rights that will help conference attendees understand the current state of research and development in this area and assist them in formulating their own thinking about and research into these important and timely matters.
Aybike Tunç is a legal scholar specializing in IT law and holds a PhD from Ankara Hacı Bayram Veli University, where she is currently an Assistant Professor. She has authored books and articles on technology law, intellectual property, and AI, and has presented at various conferences on topics such as personal data protection and the legal aspects of AI. Her teaching experience includes courses in Law of Obligations, Property Law, and Intellectual Property Law. Her work focuses on the intersection of law and emerging technologies.
Legal rights for autonomous systems have been debated since the term "robot" was introduced in Čapek's R.U.R. Today, with technological and legal advancements, the discussion of AI's legal rights has moved beyond science fiction into academic and legislative arenas.
While debates often focus on the sentience or consciousness of AI, legal subjectivity is not solely tied to these traits. Newborns, for instance, hold legal rights despite lacking full consciousness, while sentient beings such as chimpanzees may not.
Arendt argues that legal subjectivity results from a social contract that emerges when individuals come together on the basis of trust and mutual equality. On this view, what matters is whether non-human beings, including AI systems, are admitted into communities such as states or international organizations; only then can they have legal rights. The legal subjectivity of AI, like that of other non-human entities, depends on people including AI systems in their network of relationships. Without such acceptance, legal rights for AI will remain elusive, regardless of whether AI is conscious.
Kamil Mamak is a philosopher and a lawyer. He is a postdoctoral researcher in the RADAR group at the University of Helsinki and an assistant professor in the Department of Criminal Law at the Jagiellonian University. He has authored three monographs and more than 30 peer-reviewed journal articles and book chapters. He has received a research grant from the National Science Centre in Poland.
This paper discusses the impact of morphology on the legal situation of robots. I will argue that choosing human-like morphology puts robots in a privileged position compared to robots with non-human morphologies: robots with the same features but different shapes should be treated differently. Focusing on the representational aspects of robots, I will show that the human shape in robot design places extra burdens on users. Mistreating humanoid robots could be degrading to humans, whom such behavior could harm indirectly and, at times, directly. To ensure the safety of humans, humanoid robots should therefore be protected more than other types of robots.
Federico Cabitza is an associate professor at the University of Milano-Bicocca and director of the local node of the national laboratory “Computer Science and Society.” Since 2016, he has collaborated with several hospitals (incl. IRCCS Galeazzi Orthopaedic Institute in Milan) and founded the Medical Artificial Intelligence Laboratory. He is an associate editor of the International Journal of Medical Informatics. He is listed among Stanford’s Top 2% Scientists: his research (spanning over 160 published works) focuses on the design and evaluation of AI systems for decision support and their organizational impact. He co-authored “Artificial Intelligence, the use of new machines” with Luciano Floridi, published by Bompiani.
This paper explores the concept of robot rights as societal imperatives—constraints established by collective normative authority. We argue that rights, including those of robots, are societal regulations rather than inherent individual attributes like autonomy or consciousness. These rights aim to regulate behavior for communal harmony and reflect collective power, with right holders acting as proxies. For instance, rights for sex robots arise from concerns about normalizing abusive behaviors toward humans.
We propose that granting rights to robots, akin to any object forming social bonds with humans, is part of human self-domestication. This perspective diverges from the “should-could” framework of the robot rights debate, suggesting that robot rights facilitate orderly coexistence in complex societies where even objects can interact.
Our thesis emphasizes that expanding the normative scope is more than a restriction of individual freedom; it also represents a naturalization of the societal limitations required for mutual coexistence. The growing collective control and authority that robot rights represent illustrate a human cultural evolution aimed at maintaining societal stability against disruptive human tendencies.
Autumn Edwards is a Professor of Communication at Western Michigan University, co-director of the Communication & Social Robotics Labs, and founding Editor-in-Chief of Human-Machine Communication. Her research examines the worldviews, expectations, impressions, and message strategies people bring to communication with social robots and artificial partners.
David J. Gunkel (bio above).
This paper examines American pragmatism as a lens through which to navigate the landscape of robot rights, bridging theory and practice. We draw inspiration from pragmatism's commitment to resolving seemingly interminable philosophical dilemmas and its unapologetic drive for amelioration, particularly relevant in today's landscape where emerging social technologies hold the potential to either alleviate or exacerbate issues of abuse and inequity. Although the pragmatist tradition encompasses varied figures (e.g., Peirce, James, Dewey, Rorty, and West), there are broad points of convergence among its major strains, including pluralism, nondualism, an emphasis on the materiality of language, relationality, and meliorism. These tendencies establish links to the "relational turn," underscoring the importance of relational entanglement and social context to considerations of robot rights. An intriguing aspect we delve into is the pragmatist conflation of truth with goodness, blurring the lines between 'ought' and 'is.' We pose questions about what it means to determine what is practically expedient or favorable in our thoughts and actions regarding the social meaning and treatment of robots. Further, we explore issues of inclusivity, ethical considerations, the contingency of truths in discursive systems, and the construction of a pragmatic methodology tailored to the realm of robot rights.
Kęstutis Mosakas is a philosophy researcher at the Vytautas Kavolis Transdisciplinary Research Institute of Vytautas Magnus University. Previously, he worked as a junior researcher in the EU-funded research project "Integration Study on Future Law, Ethics, and Smart Technologies" (2018–2022). Since 2023, he has been an associate editorial board member of the journal AI & Society. Kęstutis holds a bachelor's degree in English philology (2013, VMU), a master's degree in practical philosophy (2017, VMU), and a PhD in philosophy (2023, VMU). His main research area is applied ethics, with a particular focus on questions related to the moral status of robots and human-robot interaction. His other areas of interest include meta-ethics, philosophy of religion, and consciousness-related questions in ethics. His most significant work is the monograph based on his PhD thesis, Rights for Intelligent Robots?, currently in press with Palgrave Macmillan.
While the discourse on robot rights has gained scholarly attention in recent years, it remains polarized. Skeptics often contest the idea that robots could possess moral status and rights, while proponents argue that artificial entities might evolve into "suprapersons" endowed with even stronger rights than those of humans. This paper aims to bring clarity to the debate by critically examining the underlying assumptions on both sides, particularly within the framework of the human rights approach. To achieve this objective, three aspects of the problem are considered: 1) the foundational assumptions underpinning human rights; 2) the implications of these assumptions for future robots; and 3) the main practical challenges associated with recognizing robots as bearers of human rights. The analysis suggests that intelligent robots could, theoretically, acquire moral rights in general and human rights in particular under certain plausible assumptions. However, the substantial dissimilarities between humans and robots (e.g., in terms of internal architecture) pose significant epistemic challenges. These challenges are bound to complicate the practical application of the approach, making it difficult to distinguish genuine artificial rights-holders from mere robotic moral zombies.