David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published thirteen internationally recognized books. He currently holds the position of Presidential Research, Scholarship and Artistry Professor in the Department of Communication at Northern Illinois University (USA).
One of the important opportunities/challenges posed by social robots is deciding whether these artifacts are, and can be treated as, things that we (human beings) may use and even abuse as we see fit; whether they would, due to their specific social circumstances and interpersonal contexts, require some level of personification and even the extension of some aspects of moral or legal personality; or whether these technological innovations go further and challenge the very categories of person and thing, inviting us to rethink one of the fundamental organizing principles of our social ontology. This workshop is designed to address these questions. We have assembled a team of researchers from across the globe and from different disciplines, who bring to this conversation a wide range of viewpoints and methods of investigation. The objective is not to advance one definitive solution but to map the range of possible answers and critically evaluate their significance.
Diana Mădălina Mocanu is a PhD student, recipient of the FRESH scholarship of the National Scientific Research Fund (FNRS) of Belgium. She is currently pursuing her PhD under the guidance of Prof. Christophe Lazaro, at the UCLouvain, where they recently created the Cellule de recherche interdisciplinaire sur la société numérique (CRINUM).
Wesley Newcomb Hohfeld postulated, synthesizing a seemingly unbudging legal tradition, that law concerns (a finite set of) relationships between humans. First animals and now, increasingly, robots make us question this. This paper discusses in some detail the ways in which the law might accommodate some of the relationships that humans have with robots. These relationships vary greatly in their degree of closeness: from the rather detached, in which robots are seen as tools (arguably the great majority of cases today and, some argue, how it should always be); to the up close and personal, as with robots seen as companions or partners (increasingly reported as a trend we are moving towards); to robots regarded as part and parcel of ourselves, as extensions of our own person or body parts. Since our relationship to our tools has long been dealt with by law and is largely uncontroversial, this paper focuses on human-robot collaborations and the legal shape they may take, exploring available legal avenues as well as innovations in terms of legal status.
Jesse de Pagter is a PhD candidate at TU Wien. He has a background in Science and Technology Studies and Philosophy. His focus is on the study of autonomous technologies in their sociopolitical context. He critically analyzes the different narratives that are arising in anticipation of the increasing implementation of autonomous technologies.
The place of social robots in our social institutions is currently an important topic of discussion. An issue that arises regularly concerns the speculative character of several of the arguments in that discussion. This is unsurprising, as many of those arguments refer to future potentialities. The argument for robot rights, for instance, has prompted heated debate about the usefulness of this speculative notion. This contribution reflects on the role of speculative concepts in the field of robot ethics. Its goal is first of all to examine how robot ethics as a field is engaged in the development of speculative arguments; as part of this, the speculative components in robotics narratives are reviewed. The contribution then zooms in on the discussion around social robots, elaborating on different issues that can be seen as constitutive for improving speculative robot ethics. Finally, it aims to provide new directions for further engagement with the contingent futures of social robots.
Dane Leigh Gogoshin received her M.A. in philosophy and cognitive science at the University of Houston. She is currently a doctoral researcher in the RADAR Group in the Practical Philosophy Department at the University of Helsinki where she is working on a critique of the moral responsibility system and studying ways in which we can exercise and improve our moral and rational agency.
It is typically argued that robots cannot meet the conditions of moral responsibility. Thus, where robots are involved in morally significant harm, troubling responsibility gaps are thought to arise (Matthias 2004). In this paper, it is argued that the responsibility gap concept is itself founded on false premises: that traditional settings afford us clear-cut moral culprits whom it is fair and beneficial to hold accountable, and that our responsibility practices are straightforwardly morally and socially desirable. There are morally and socially desirable outcomes of our responsibility practices worth fighting for: dependable, responsible social behavior and acts of repair, restoration, and reformation. However, not only can these outcomes be extended beyond traditional contexts to technologically advanced domains; they can be enhanced as well.
Maciej Musiał works as an Associate Professor at the Faculty of Philosophy at Adam Mickiewicz University in Poznań (Poland). He is an author of the book Enchanting Robots. Intimacy, Magic and Technology (Palgrave Macmillan 2019).
This contribution hypothesizes that humans someday will design and develop robots that will be recognized as persons with moral status and rights analogous to those of human beings. This assumption is most often discussed in terms of robots’ moral agency and how that agency should be shaped to protect the well-being of humans. Here, however, I focus on robot persons’ moral patiency and on developing them in a way that protects their well-being. In particular, I examine the process of designing such artificial persons in the context of the aforementioned dilemma. The presentation does not offer any final solutions but rather signals the relevance and complexity of questions such as: (1) should robot persons be designed as servants? (2) should they be made to experience childhood? and (3) should they be made for profit? Thus, this contribution problematizes the main question of the workshop by presenting the potential consequences of choosing one of its possible answers.
After graduating from Ankara University Faculty of Law in 2010, Dr. Aybike Tunç subsequently started her master's degree at Ankara University Social Sciences Institute and completed her graduate studies at Gazi University Social Sciences Institute in 2013. She completed her PhD at the Graduate Education Institute of Ankara Hacı Bayram Veli University in 2020.
Today, artificial intelligence has become indispensable to human life. Even though robots have a history of only 100 years, it is no longer possible to imagine life without robots and artificial intelligence technology. This indispensable technology is, of course, used in legal relations, as in every other field. For this reason, legal systems have to determine the legal status of artificial intelligence and decide whether recognition of a new kind of personhood is needed for this technology. This research therefore aims to answer whether it is possible to confer a new kind of personhood on artificial intelligence, as well as whether there is a legal need for it.
Henrik Skaug Sætra is a political scientist working in the Faculty of Computer Science, Engineering and Economics at Østfold University College. Sætra has a particular interest in political theory and philosophy and has worked extensively on Thomas Hobbes and social contract theory, environmental ethics, and game theory. His most recent books are Big Data's Threat to Liberty (Elsevier 2021) and AI for the Sustainable Development Goals (CRC Press 2022).
According to a typical definition, a social institution is “a complex of positions, roles, norms and values lodged in particular types of social structures and organising relatively stable patterns of human activity with respect to fundamental problems in producing life-sustaining resources, in reproducing individuals, and in sustaining viable societal structures within a given environment.” References to humans and life aside, this definition raises several important questions, of which I focus on two. First, can robots hold significant positions or have roles? Second, are they subject to norms and values, and do they take part in the social construction of the same? After answering these questions partially in the affirmative, I examine the requirements that must be met before robots can be perceived as full participants in social institutions. If these requirements are met, social robots can indeed take part in social institutions conducive to meeting fundamental needs and sustaining viable societal structures. I argue that the most important requirements are design choices that can be met with existing technologies. Meeting them would potentially grant social robots a different social standing, but designers and regulators also have good reasons not to limit robots in such a way.
Kamil Mamak is a philosopher and a lawyer. He is a postdoctoral researcher at the RADAR group at the University of Helsinki and an assistant professor at the Department of Criminal Law at the Jagiellonian University. He has authored 3 book monographs and more than 30 peer-reviewed journal articles and contributed chapters. He received a research grant from the National Science Center in Poland.
In at least some legal systems, police officers performing their duties enjoy special legal protection against attacks on them. This protection is distinct from the protection that every human being possesses: an extra "layer" of protection is connected with the performance of police duties and does not stem from the fact that police officers are humans. Police robots, like any other robots, are not humans, but they can perform police tasks. Do they deserve the additional "layer" of protection that results from performing police duties? In this paper, I answer this question in the affirmative.
Anne Gerdes is an Associate Professor at the Department of Design and Communication at the University of Southern Denmark and head of the Humanities Ph.D. School’s Research Training Programme in Design, IT and Communication. She is a member of the ITI research group. She researches and teaches at the intersections of philosophy, computational technologies, and applied ethics. Her research focuses on AI and ethics, explainable AI, machine ethics, robot ethics, Ethics by Design, and privacy. Anne Gerdes is highly experienced working in cross-disciplinary fields with computer scientists and engineers.
Robots are increasingly enslaving humans, and therefore we ought to cease discussing whether robots should be included in the moral circle. Instead, it is time to face the challenges of neo-Tayloristic robot tyranny. It is time to replace the relational turn with a Luddite turn.
David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 80 scholarly articles and book chapters and has published twelve internationally recognized books, including Thinking Otherwise: Philosophy, Communication, Technology (Purdue University Press 2007), The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA). More info at www.gunkelweb.com
One possible, if not surprising, solution to the exclusive person/thing dichotomy is slavery, insofar as the slave, since Roman times, has occupied a social position that is both/and and neither/nor. Associating robots with slavery and drawing on the history of human servitude to provide a moral and legal framework for dealing with the challenges of socially interactive and intelligent artifacts has become a rather widespread practice in the existing literature. But it is a practice fraught with numerous problems and unintended consequences. This paper critiques the “robots should be slaves” proposal, demonstrating how this proposed solution to the person/thing dichotomy is not only no solution but also produces more problems than it can resolve.