WORKSHOP 8 | Thursday, August 22, 10:25-12:05 | Workshop Room 2 (1441-210)
This workshop (panel) investigates the attribution of mental states and cognition to robots from a philosophical perspective, taking into account epistemological, ethical, and technological (design) dimensions. After a brief introduction to the panel’s topic, the first talk will lay the groundwork by exploring the different styles people may adopt to model the minds of robots. On these grounds, the second talk will focus on the role that emotion attribution to robots plays in shaping our interactions with social robots. The third talk will deal with robots’ decision-making capabilities in the context of social assistive robotics, with an eye to their ethical implications. The fourth talk will close the panel, investigating how an enactive conception of intentionality affects both our understanding of human-robot interaction and the design of robotic interfaces and architectures.
Silvia Larghi is a PhD student at the RobotiCSS Lab (Laboratory of Robotics for the Cognitive and Social Sciences), University of Milano-Bicocca. Her research interests concern the philosophy of artificial intelligence and robotics, the philosophy of the cognitive sciences, and the attribution of mental states to robots. Her doctoral research focuses on humans’ psychological explanations of robot behavior.
Edoardo Datteri is a Professor of Logic and Philosophy of Science at the University of Milano-Bicocca, and director of the RobotiCSS Lab (Laboratory of Robotics for the Cognitive and Social Sciences). His work focuses on how robots, computer simulations, and bionic systems can contribute to our understanding of animal behavior and cognition. This research touches on crucial issues related to scientific explanation and modeling within the field of cognitive science.
Research on how people understand and explain the behavior of robots has often focused on the attribution of mental states to the system (see Thellman et al., 2022 for a review), as in de Graaf and Malle (2019). A fundamental reference on whether and how people produce mentalistic explanations of the behavior of artificial agents is Dennett (1971, 1987). In Dennett’s framework, adopting the intentional stance towards a system consists in attributing beliefs, desires, intentions, and other propositional attitudes to the system in order to explain and predict its behavior (Perez Osorio & Wykowska, 2020; Marchesi et al., 2019). In this work it is suggested that people may adopt an explanatory strategy that differs substantially from the intentional stance and is more in line with cognitivist accounts of the mind. This explanatory and predictive style, called here the 'folk cognitivist' stance, involves the functional decomposition of the robotic system into modules that process representations. This claim will be supported with reference to explanations of robotic behaviors collected within the framework of a Braitenberg-style robo-ethological project carried out with children. It will also be claimed that the folk cognitivist stance cannot simply be equated with the design stance as defined by Dennett.
Giacomo Zanotti is a postdoctoral researcher in philosophy of science and technology at the Politecnico di Milano (Italy), within the national research project BRIO (Bias, Risk and Opacity in AI). His research interests lie at the intersection of the philosophy of science, the philosophy of AI, and the philosophy of mind.
Marco Facchin is a postdoctoral researcher in philosophy of cognitive science at the University of Antwerp (Belgium, FWO Grant 1202824N). His research interests lie at the intersection of the philosophy of mind, the philosophy of cognitive (neuro)science, and the philosophy of science. His recent work focuses on extended cognition and mental representations, which he investigates from a multidisciplinary perspective.
Affective social robots are increasingly employed in a number of contexts, from educational settings and elderly care to less controlled environments such as shopping malls, and it is reasonable to assume that they will become increasingly integrated into many people’s lives. When it comes to emotionally driven interactions, these systems typically work by mimicking human emotional behavior and expressions. To this end, the fact that social robots do not truly have emotions has to fade into the background. Even if users remain rationally aware of the emotionless nature of robots, the quality of their experience crucially depends on their interacting with the systems in question as if they actually had emotional and affective states. In Facchin & Zanotti (2024), this mechanism was conceptualized through the notion of emotional transparency, which, however, was not further defined or explored in detail. This work aims to explore emotional transparency in social robots further by providing a rigorous definition of it and showing how it is a direct and hardly avoidable, yet ethically and normatively problematic, consequence of widely adopted design principles in AI and robotics.
Ilaria Alfieri is a PhD student at IULM University, Milan (Italy), within the Green Ecobotics Program. Her doctoral research focuses on the use of social robots as tools for sustainability and explores innovative forms of human-robot interaction that promote a sustainable lifestyle for users.
Maria Raffa is a PhD student at IULM University, Milan (Italy), within a Green Programme on artificial intelligence and sustainability. Her background is in philosophy of science and data science, and her doctoral research focuses on models of mind and cognition and on machine learning algorithms for sustainability.
The aim of this contribution is to present and discuss the ethical aspects of decision making (DM) in social assistive robotics (SAR). The literature has extensively addressed the ethical issues related to SAR, especially concerning trust in, and the justification of, the choices and actions taken by the robotic agent towards the user and the external environment (Boada et al., 2021; Alaieri & Vellino, 2016). However, it is also worth considering the deeper level of the technical implementation of DM. In this field, the active inference (AIF) model has recently been considered attractive for implementing explainable AI for DM (Albarracin et al., 2023), with potentially groundbreaking implications for robotics. Indeed, AIF is particularly effective for complex cognitive tasks in which the dynamics of the robot are uncertain, such as human-robot interaction, and especially in SAR (Da Costa et al., 2022). All that considered, in order to provide an all-round account of the issue, this contribution is structured as follows: first, the ethical implications of DM in SAR are presented and discussed. After that, the AIF model for DM and its applications in robotics, and specifically in SAR, are presented. Finally, conclusions are drawn.
Martina Bacaro is a PhD student in Philosophy, Science, Cognition and Semiotics (37th cycle) at the Department of Philosophy (FILO) and a research fellow at DISI (Department of Computer Science and Engineering) of the University of Bologna – Alma Mater Studiorum. She conducts her research in the fields of HRI (Human-Robot Interaction) and the Philosophy of Cognitive Science. Her research project aims to develop an embodied and enactivist account of human-robot interaction, with a particular focus on the attribution of intentionality and the Uncanny Valley Effect (UVE). Her major research interests are Epistemology and Philosophy of Robotics, Philosophy of Complexity, Enactive Cognitive Science, and Social Cognition.
In this contribution, the aim is to show that by adopting an enactive conception of intentionality and, consequently, of its ‘attribution’ to other agents (including robots), we can address specific issues in HRI experimental settings and gain a broader perspective from which to understand human interactions with robots. The enactive paradigm conceives of intentionality neither as something inherent in the inner mental mechanisms of agents, nor as something to be inferred by an observer interacting with them, as classic approaches to intersubjective intentionality suggest. Instead, enactive intentionality in intersubjectivity is perceived in the active engagement agents display with the environment and with the other agents that inhabit it (Hutto, 2012; Gallagher, 2020). This reconsideration has far-reaching consequences in the field of HRI, impacting both (1) the understanding of how a human agent approaches a robot and (2) the design of robotic interfaces and architectures that effectively facilitate interaction. This shift in perspective also necessitates a reconsideration of the terminology used to describe this process, prompting the transition from “intentionality attribution” to “intentionality detection”.