SESSION 4 | Wednesday, August 21, 13:35 – 14:05 | Auditorium 2 (1441-112)
I am a PhD student at the Safe and Trusted AI Centre for Doctoral Training (King’s College London and Imperial College London, UK). My PhD thesis is on detecting deception and manipulation in language models. I am interested in the psychological impact of technology within the political context of surveillance capitalism, and ways to resist it.
Lecturer in Robotics and Autonomous Systems and lead of the Responsible Robotics and AI Lab at King’s College London.
Main research areas are Fairness, Accountability and Transparency in Robotics.
In this paper we show that, with the increasing integration of social robots into daily life, concerns arise regarding their potential to create emotional dependency. Drawing on findings from the literature in Human-Robot Interaction, Human-Computer Interaction, Internet studies, and Political Economics, we argue that current design and governance paradigms incentivize the creation of emotionally dependent relationships between humans and robots. To counteract this, we introduce Interaction Minimalism, a design philosophy that aims to minimize unnecessary interactions between humans and robots and instead promote human-human relationships, thereby mitigating the risk of emotional dependency. By focusing on functionality without fostering dependency, this approach encourages autonomy, enhances human-human interactions, and advocates for minimal data extraction. Through hypothetical design examples, we demonstrate the viability of Interaction Minimalism in promoting healthier human-robot relationships. Our discussion extends to the implications of this design philosophy for future robot development, emphasizing the need for a shift towards more ethical practices that prioritize human well-being and privacy.
Leigh Levinson is a dual PhD student at Indiana University Bloomington’s Luddy School of Informatics and the Cognitive Science program. Her research, conducted through the R-House Human-Robot Interaction lab, focuses on children’s rights to privacy with social robots and other embodied technologies. Her work aims to explore the context-dependent and dynamic nature of child-robot interactions.
Eli McGraw is a dual PhD student in Indiana University Bloomington’s Luddy School of Informatics and the Cognitive Science program. Specializing in non-human consciousness, enactive social cognition, and robotics, Eli conducts most of his research through the R-House Human-Robot Interaction and Animal Informatics labs. He is interested in bridging gaps between animal cognition, artificial intelligence, embodiment, and human sociality. Eli is also an active member of the experimental humanities lab, where he explores inner speech and phenomenology.
Randy Gomez is a senior researcher at Honda Research Institute Japan. His research focuses on the applications of social robots in children’s spaces, including schools, hospitals, and homes.
Selma Šabanović is a Professor of Informatics and Cognitive Science at Indiana University, Bloomington. She founded and directs the R-House Laboratory for Human-Robot Interaction research at IUB. Her work combines the social studies of computing, focusing particularly on the design, use, and consequences of socially interactive and assistive robots in different social and cultural contexts, with research on human-robot interaction (HRI) and social robot design.
Robin Gigandet is a PhD student at the Université de Lille (France), focusing on human-robot interaction. He holds a Master's degree in Cognitive Science, specializing in Cognitive Engineering, Interaction, and AI. His research explores human perception and primary social responses to artificial social agents, aiming to contribute to the discussion of the social and ethical implications of introducing social robots into human environments. He recently published a paper in the MDPI journal ‘Robotics’ on the N400 brain response and human skepticism towards robotic emotions.
Tatjana A. Nazir is a cognitive neuroscientist and research director at the National Center for Scientific Research (CNRS) in France. Her research explores how human cognition is influenced by the physical body (embodied cognition). In social robotics, she studies human interaction with artificial social agents, focusing on human expectations and the potential for rejection due to design flaws. Her work emphasizes the impact of robot design on natural human reactions, highlighting the importance of understanding and avoiding negative behavioral triggers. She aims to inform robot designs that promote positive human-robot interactions.
Boston Dynamics recently used OpenAI’s GPT API and open-source large language models to enable speech in its robotic dog. While this represents a remarkable technological advancement, it has also elicited feelings of unease among many observers: something does not look right in these video clips. We propose a methodological approach using the N400 component of event-related potentials (ERPs), a brain marker for processing incongruity, to systematically evaluate human perception of robots. The N400 paradigm allows estimating the extent to which a robot’s utterances align with human expectations. We demonstrate this methodology through two pilot experiments in which we measured participants' N400 responses to an armless, legless robot discussing topics that aligned or conflicted with its physical capabilities (e.g. shaking hands) or robotic condition (e.g. having feelings). Results show that sentences incongruent with the robot's design or emotional capabilities elicit a significant N400 amplitude increase, highlighting a mismatch between expectation and experience. By applying this method, researchers and developers might gain deeper insights into human perceptions of robots, potentially guiding designs better aligned with societal norms and expectations.