Responsibility and Moral Standing

SESSION 3 | Wednesday, August 21, 10:25-12:05 | Auditorium 3 (1441-113)


Wednesday August 21, 10:25-10:55 CEST, Auditorium 3 (1441-113)

Jörg Löschke, University of Zurich, Switzerland

Jörg Löschke is an SNSF-funded research professor at the University of Zurich, Switzerland. He has held positions at the University of Bonn, the University of Berne, and the University of Munich, as well as visiting positions at Florida State University, Princeton University, and the University of Washington. His main research interests are the ethics of relationships, analytical value theory, and the deontology-consequentialism divide. He is currently focusing on a research project on valuable relationships between humans and AI systems.

Moral Obligations Towards Social Robots

Is it possible for humans to have moral obligations to treat social robots in decent ways? This is one of the most important questions in robophilosophy, because erring here could lead to the systematic violation of moral obligations and therefore to a moral catastrophe. The three most commonly held accounts of moral obligations towards robots are rights-based accounts, indirect accounts, and relational accounts. After pointing out some problems with each of these accounts, this paper sketches a novel approach to explaining moral obligations towards social robots, which I call the associative account. According to this view, moral obligations towards robots should be understood as associative duties: duties that exist in virtue of valuable relationships with robots. This makes it possible to have duties that are directed towards robots without those robots having full moral standing, as the normative basis for these obligations is the value of the relationship with a robot. After explaining the basic idea of the associative account, I discuss the conditions that must be met for humans to have valuable relationships with robots.


Wednesday August 21, 11:00-11:30 CEST, Auditorium 3 (1441-113)

M. Hadi Fazeli, University of Gothenburg, Sweden

In the Lund Gothenburg Responsibility Project (LGRP), I focus on time and responsibility. My research investigates factors that reduce an individual’s moral responsibility for past actions, particularly in relation to personal identity over time and the fittingness of blame after significant time has passed. This includes scenarios in which perpetrators have changed drastically or societal norms have shifted. I am also interested in the responsibility assigned to artificial intelligence systems and robots, and in whether we can adopt fitting emotions toward them.

Praising and Blaming Robots

Humanoid robots are helping us in our daily lives, and given their human-like attributes, it feels natural for us to direct emotions toward these robots, praising them for their successes and blaming them for their failures. But is it fitting to do so? I examine this philosophical question using a functionalist approach to fittingness. By distinguishing fitting attitudes in a moral sense from fitting attitudes in a non-moral sense, I argue that our reactive emotions towards humanoid robots can be non-morally fitting when they serve purposes such as repudiating a threat or correcting a mistake. When praise and blame help us achieve specific goals, it becomes non-morally fitting to feel, express, and manifest these emotions in our interactions with robots. Finally, I address objections from opposing perspectives and place my argument within the broader philosophical discussion on fittingness.


Wednesday August 21, 11:35-12:05 CEST, Auditorium 3 (1441-113)

Ziggy O'Reilly, University of Turin, Italy

Ziggy O’Reilly is a PhD student in the Social Cognition in Human-Robot Interaction research line at the Italian Institute of Technology and the University of Turin, Italy. She completed a Master of Biological Arts at SymbioticA, the University of Western Australia, during which she collaborated with the Australian e-Health Research Centre, CSIRO, and was a Joint Yale-Hastings Visiting Scholar. Her main research interests are moral psychology, social robotics, robot ethics, human-robot interaction, narrative psychology, and social cognition.

Serena Marchesi, University of Padua, Italy

Serena Marchesi is a postdoctoral researcher at the University of Padua, Department of Developmental and Social Psychology (DPSS). Previously she was a postdoctoral researcher at the Social Cognition in Human-Robot Interaction (S4HRI) group, coordinated by Dr. Agnieszka Wykowska. Serena obtained her Ph.D. from the University of Manchester, in collaboration with IIT under the supervision of Prof. Angelo Cangelosi and Dr. Agnieszka Wykowska. Serena’s research interests focus on the social and moral cognition processes involved in human-human and human-robot interaction, and how individual and cultural differences can affect them.  

Agnieszka Wykowska, University of Turin, Italy

Professor Agnieszka Wykowska leads the unit S4HRI “Social Cognition in Human-Robot Interaction” at the Italian Institute of Technology (Genoa, Italy), where she is also the Coordinator of the Center for Human Technologies (CHT) and a member of the Board of the Scientific Director. In addition, she is an adjunct professor of Engineering Psychology at Luleå University of Technology. In 2016 she was awarded the ERC Starting Grant “InStance: Intentional Stance for Social Attunement”. In her research, she combines cognitive neuroscience methods with human-robot interaction in order to understand the brain mechanisms involved in interaction with natural and artificial agents.

Justifications of Moral Responsibility Attributions Towards a Humanoid Robot

As robots increasingly participate in society, it is crucial to understand which factors influence the degree to which individuals attribute moral responsibility to them. In a previously reported study, participants read vignettes about a humanoid robot causing either a negative or a positive consequence. Participants rated the moral responsibility and intentionality of the robot and explained the reasoning behind their ratings. In this paper we (1) conducted an exploratory cluster analysis of participants’ textual justifications of their moral responsibility ratings using word embeddings, and (2) investigated the correlation between moral responsibility ratings and the proportion of words participants used from each cluster. We found that participants who used more words from the “event detection” cluster were more likely to attribute higher moral responsibility to the humanoid robot. Conversely, those who used more words from the “mechanistic properties” cluster tended to attribute less moral responsibility to it. These findings suggest that moral responsibility attributions may be influenced by the extent to which individuals refer to events and to mechanistic properties.
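The abstract describes, in outline, a pipeline of embedding the words of free-text justifications, clustering those embeddings, and correlating per-participant cluster proportions with responsibility ratings. The sketch below is only an illustration of that general approach, not the authors' actual analysis: the embedding model (GloVe via gensim), the clustering algorithm (k-means), the number of clusters, the correlation measure (Spearman), and the toy data are all assumptions made here for the example.

```python
# A minimal, hypothetical sketch of a word-embedding cluster analysis of free-text
# justifications. Cluster labels such as "event detection" or "mechanistic properties"
# would only be assigned afterwards, by inspecting the words in each cluster.
import numpy as np
import gensim.downloader as api
from sklearn.cluster import KMeans
from scipy.stats import spearmanr

# Hypothetical data: one justification text and one responsibility rating per participant.
justifications = [
    "the robot noticed the smoke and reacted to the event",
    "it is only a programmed machine following its code",
    "the robot saw what happened and chose to act",
    "its sensors and software caused the behaviour",
]
ratings = [6.0, 2.0, 7.0, 3.0]

vectors = api.load("glove-wiki-gigaword-100")  # word -> 100-dimensional embedding

# Tokenise and keep only words that have an embedding.
tokenised = [[w for w in text.lower().split() if w in vectors] for text in justifications]
vocab = sorted({w for words in tokenised for w in words})
emb = np.vstack([vectors[w] for w in vocab])

# Cluster the word embeddings (k = 2 is an arbitrary choice for this sketch).
k = 2
cluster_of = dict(zip(vocab, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)))

# For each participant, the proportion of their words falling in each cluster.
proportions = np.zeros((len(tokenised), k))
for i, words in enumerate(tokenised):
    for w in words:
        proportions[i, cluster_of[w]] += 1
    proportions[i] /= max(len(words), 1)

# Correlate each cluster's word proportion with the moral-responsibility ratings.
for c in range(k):
    rho, p = spearmanr(proportions[:, c], ratings)
    print(f"cluster {c}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```

In a sketch like this, clustering word vectors rather than whole responses keeps the unit of analysis at the level of vocabulary, so a participant's score for a cluster is simply how much of their justification draws on that cluster's words.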