Session 13: Perceptions of Robots

Friday August 23, 9:00-9:30 CEST, Auditorium 1 (1441-011)

Stanislav Ivanov, Varna University of Management & Zangador Research Institute, Bulgaria

Dr. Stanislav Ivanov is a Professor at Varna University of Management, Bulgaria (http://www.vum.bg) and Director of Zangador Research Institute (https://www.zangador.institute/en/). Prof. Ivanov is the Founder and Editor-in-Chief of ROBONOMICS: The Journal of the Automated Economy (https://journal.robonomics.science). His research interests include robonomics, robots in tourism/hospitality, the economics of technology, and automated decision-making. His publications have appeared in academic journals such as Technology in Society and Foresight. For more information about Prof. Ivanov, please visit his personal website: www.stanislavivanov.com

David J. Gunkel, Northern Illinois University, USA

David J. Gunkel (PhD Philosophy) is an award-winning educator, researcher, and author specializing in the philosophy of technology, with a focus on the moral and legal challenges of artificial intelligence and robots. He is the author of over 110 scholarly articles and seventeen books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Robot Rights (MIT Press 2018), Person, Thing, Robot: A Moral and Legal Ontology for the 21st Century and Beyond (MIT Press 2023), and the Handbook on the Ethics of AI (Edward Elgar 2024).

Robots Should Be Slaves: Perceptions of Bulgarians Towards Potential Robot Rights and Obligations

The paper empirically tests Gunkel’s robot rights matrix using a sample of 215 respondents in Bulgaria, evaluating the matrix at the granular level of 26 separate rights and obligations. The findings revealed that respondents did not think that robots and AI could have rights per se or should be given rights. Robots were perceived as slaves: without the rights to reproduce, own property, strike, receive a salary, vote, or be elected, but with the obligations to adhere to regulations and to respect humans. The results were consistent for respondents with and without education in Law, Robotics, AI, or Computer Science. Respondents’ demographic characteristics did not shape their answers, but their general attitudes towards robots and AI did. Finally, the results showed that respondents did not fully distinguish between the ‘can have’ and ‘should have’ options for the respective rights and obligations. Theoretical and policy implications, limitations, and future research directions are discussed as well.


Friday August 23, 9:35-10:05 CEST, Auditorium 1 (1441-011)

Hyungrae Noh, Sunchon National University, South Korea

My research journey is founded on the conviction that philosophers of mind should actively engage with the cognitive sciences. My primary contribution to this interdisciplinary field involves evaluating philosophical theories against empirical data. This includes examining neuroscientific discoveries that challenge the relevance of the concept of phenomenal consciousness in the clinical diagnosis of the minimally conscious state (2018, No-report paradigmatic ascription …, Minds and Machines), and arguing that philosophers must critically assess ordinary language use, as psychological experiments often reveal such usage to be misleading (2023, Interpreting ordinary uses of psychological and moral terms in the AI domain, Synthese).

Folk Understanding of Artificial Moral Agency

The functionalist conception of artificial moral agency holds that certain autonomous AI systems should be considered moral agents to the extent that the human agents who are causally accountable for these systems’ morally significant actions are deemed blameworthy or praiseworthy, and may accordingly face sanctions or rewards, regardless of whether they intended the actions to occur (Behdadi & Munthe 2020; Floridi 2016). By meta-analyzing psychological experiments, this paper reveals a close alignment between this functionalist conception and the folk understanding of artificial moral agency: people treat certain AIs as moral agents even when they do not consider them to possess consciousness or free will (Gray et al. 2007; Thellman et al. 2017); when ordinary people attribute moral responsibility to AIs, these attributions are in fact redirected towards the users, programmers, and manufacturers of the AIs (Lima et al. 2021; Kneer & Stuart 2021; Wilson et al. 2022); and this redirection holds even when ordinary people do not view the causal contributions of these human agents to the AI’s actions as wrongful (Shank & DeSanti 2018).


Friday August 23, 10:10-10:40 CEST, Auditorium 1 (1441-011)

Fiorella Battaglia, Ludwig-Maximilians-Universität, Munich, Germany

Fiorella Battaglia is the Head of the Laboratory for Ethics in the Wild at the Digital Humanities Centre, University of Salento, where she is also an associate professor of moral philosophy in the Department of Humanities. Her research focuses on challenging ethical questions arising from emerging technologies and climate change, which shape both our social and epistemic practices and our moral experiences. After obtaining her MA degree in Philosophy from the University of Pisa, she earned her PhD in Philosophy and Politics from the University of Naples "L'Orientale" (2004). In 2016, she completed her habilitation in Practical Philosophy and received her venia legendi from the Ludwig-Maximilians-Universität in Munich (Germany). She has also held an assistant professorship of Social Philosophy at the Berlin-Brandenburg Academy of Sciences and Humanities and the Humboldt University in Berlin, an adjunct professorship of Epistemology at the Faculty of Medicine of the University of Pisa, and a visiting professorship at the Dirpolis and Biorobotics Institutes of the Sant’Anna School of Advanced Studies in Pisa (Italy).

Dehumanization: An Updated Philosophical Account of Subject/Object Dualism

This paper examines the concept and practice of dehumanization, as well as the potential new developments and implications for this concept that arise with the increasing use of machines, or more broadly, autonomous systems in our lives. In recent times, philosophy has shifted its focus towards investigating wrongdoing, particularly the occurrence of dehumanization, rather than pursuing ideal theory. Dehumanization refers to the perception of others as less than fully human, achieved by denying them certain uniquely human characteristics or their human essence. Previous analyses of dehumanization have focused on events involving multiple human individuals, such as genocides, civil wars, and violence against certain ethnic or racial groups, or against women. However, it is still necessary to address the new profiles of dehumanizing behavior that emerge in the specific human-machine relationship. This paper therefore examines processes of dehumanization in relationships in which a non-human being is present. Having introduced the concept of dehumanization, the paper expands, integrates, and transfers the notion to the context of actions mediated by autonomous systems.