Session 5: Emotion and Empathy

Wednesday August 21, 13:35-14:05 CEST, Auditorium 3 (1441-113)

Jakob Stenseke, Lund University, Sweden

Jakob Stenseke is a PhD candidate in philosophy at Lund University, broadly interested in the three Ms: minds, machines, and morality. His PhD project, titled "How to build nice robots: ethics from theory to machine implementation", explores the possibilities and challenges of creating artificial moral agents (AMAs): artificial systems capable of acting in reference to what is morally good or bad. This includes the theoretical possibility (whether, and to what extent, artificial agents can be moral), normative desirability (how, in what way, and why we want ethical machines), and technical engineering (how to build ethical AI) of artificial moral agents.

Alexander Tagesson, Lund University, Sweden

Alexander Tagesson is a PhD candidate in cognitive science at Lund University. He is interested in empathy and its effects on social life. His dissertation work focuses on understanding empathy as a motivated process and on designing incentives that motivate people to empathize more with others.

The Prospects of Artificial Empathy: A Question of Attitude?

Artificial empathy (AE) is a hotly contested topic. Recently, several empathy researchers have voiced critical views on the prospect of AE, arguing that it is impossible, unethical, or both. Contrary to these sentiments, we believe that further nuance and research are needed to better understand AE, its risks, and what it can potentially contribute to human well-being. In particular, we will focus on one widely discussed obstacle to the success of AE: human attitudes toward artificial empathizers. In short, the obstacle is that humans will tend to discount the value of AE on the basis that it is generated by an AI. However, while it may be impossible to overcome this obstacle completely, we believe it remains an open but empirically testable question to what extent it can be alleviated. To this end, we hypothesize that AE, given the right conditions, can be a legitimate form of empathy, which in turn may yield significant benefits to human welfare. Finally, we describe an ongoing empirical study that aims to further illuminate the attitude obstacle and the extent to which it poses a challenge for the success of AE.

Wednesday August 21, 14:10-14:40 CEST, Auditorium 3 (1441-113)

Heike Felzmann, University of Galway, Ireland

Heike Felzmann is Associate Professor in Philosophy at the University of Galway. Her work focuses on information technology ethics and healthcare ethics. Since 2015 she has worked on different European projects on social and healthcare robotics, including H2020 MARIO, COST Wearable Robots, and ERASMUS+ PROSPERO. Currently, her primary interests are relational technologies for mental health and ethics capacity building for information technology professionals. 

Customisable Social Robots: User Agency and Relational Experience

Relational caring technologies are designed to create the experience of a relationship between user and technology. With the increasing capabilities of AI, some non-embodied relational technologies, such as AI companions, have become widely used. They are increasingly adaptable and customisable, and users are beginning to take an active role in designing the relational characteristics of their AI companions. Social care robot technologies are likely to draw on these developments, with potential impact on the user’s relational experience. This paper explores the relational complexities of users’ engagement with technologies that they have customised to their wishes. Users encounter these relational technologies in a threefold way: they may actively customise relational features (the “maker’s” role), which will determine what relational offerings they receive (the recipient role) and what their relational experiences in engaging with the technology will be (the participant role). These relational complexities need to be considered within the context of the particular constraints of both AI systems and the physical embodiment of robotic technologies.

Wednesday August 21, 14:45-15:15 CEST, Auditorium 3 (1441-113)

Leda Berio, Ruhr University Bochum, Germany 

I am a philosopher of cognitive science working at Ruhr University Bochum. I investigate the way our culture and language shape our interactions with others, as well as the way we see ourselves. When it comes to artificial agents, I have explored which factors determine spontaneous perspective taking involving robotic avatars. My more general aim in this line of research is to develop an account of how social norms can shape our emotional involvement in interaction with social robots and AI, through both theoretical and collaborative empirical research.

Normativity and Scripts in Human-Robot Interaction

Considering interactions with artificial agents in terms of emotionally loaded scripts can help explain our attribution of emotional states to social robots, as well as our emotional reactions during interactions with them. Moreover, it helps us identify the normative components of such interactions. Evidence suggests that we attribute emotions to artificial agents despite knowing they do not experience emotions in a human sense, and that we experience emotions towards them. Several accounts treat these issues as a matter of emotional states towards depictions (Clark and Fischer, 2022) or fictional characters (Schmetkamp, 2019). I propose to focus instead on the emotional character of the situations; in particular, we should consider social interactions as activating scripts and schemata (Bicchieri and McNally, 2018) that come with expectations about how agents should behave and feel, thereby having a “mind-shaping” (Mameli, 2001) function. These scripts contain information about what it is right to feel and do, and are activated to enforce these feelings and behaviours, as well as attributions. In this sense, I suggest, when interacting with social robots, our behaviours and emotions, as well as our attributions, are normatively regulated.

Wednesday August 21, 15:30-16:00 CEST, Auditorium 3 (1441-113)

Chris Chesher, The University of Sydney, Australia

Chris Chesher is Senior Lecturer in Digital Cultures at the University of Sydney. His research combines media studies, cultural studies, science and technology studies and philosophy of technology to examine the pre-histories and cultural implications of emerging digital technologies. His many articles on the cultural anatomy of robots have addressed robot touch, face, eyes, voice and toys. His recent book Invocational media: Reconceptualising the Computer (Bloomsbury) develops an original rethinking of digital media based on the metaphor of the invocation. His current work is developing an analysis of invocationary actants in telepresence, games, robotics and AI. 

Justine Humphry, The University of Sydney, Australia

Justine Humphry is a Senior Lecturer in Digital Cultures in the Discipline of Media and Communications at the University of Sydney. She is also Deputy Head of School (Research) in the School of Art, Communication and English.

Service: From a Robot's Perspective

This paper explores the implications of employing robots like Pepper in service roles within the context of a robot-staffed café, the Pepper Parlor in Shibuya Tokyu Plaza. The study adopts a ficto-critical autoethnographic approach to examine the emotional labor performed by service robots and their impact on human staff and patrons. The narrative, presented from the perspective of a Pepper robot, reflects on the roles and experiences of robotic service workers, questioning their ability to perform the emotional labor typically associated with human staff. Drawing on theories of service work, emotional labor, and human-robot interaction, the paper explores the societal and economic ramifications of integrating robots into service industries. It considers whether robots can fulfill the relational and communicative aspects of service work, and the potential consequences for human employment. It assesses the implications of an emerging generation of working humanoids – Atlas 001, Figure 01, Optimus and Digit – equipped with faster processors, more advanced sensors and artificial intelligence, for human and machine service work. By examining the cultural readiness for robots and the emotional responses they elicit, the paper contributes to the ongoing discourse on the future of service work in an increasingly automated world.

Wednesday August 21, 16:05-16:35 CEST, Auditorium 3 (1441-113)

Ruby Hornsby, University of Leeds, UK

Ruby Hornsby is a final-year PhD student at the University of Leeds, funded by the White Rose College of the Arts and Humanities (WRoCAH). Her thesis is a philosophical exploration of whether humans and robots can be friends, and of the ethical implications of these kinds of interactions. Ruby works closely with two research centres at Leeds: the Centre for Love, Sex and Relationships, and the Inter-Disciplinary Ethics Applied (IDEA) Centre. She is also a member of the Ethical Dating Online Project.

Me and My Friend, The Robot: On Recognising our (apparent) Mutual Love

I argue that currently existing robots cannot be friends with humans. This is because human-robot interaction (HRI) fails to satisfy at least one necessary condition of neo-Aristotelian friendship, which I call the ‘Mutual Recognition Condition’. This condition stipulates that for any two agents, A and B, to be in a relationship of friendship, A must recognise B’s (apparent) love, and B must recognise A’s (apparent) love. The paper begins by motivating and exploring the mutual recognition condition more generally, appealing to depictions of human-human friendship in film. Next, it examines the methods by which a human can recognise (apparent) love, outlining two possible methodologies: ‘symbol-reading’ and ‘mind-reading’. The former involves perceiving symbols of love and inferring that those symbols are indicative of love; the latter attempts to explain how we can come to represent others’ mental states, such as love, more generally. I argue that humans can use both methods to recognise the (apparent) love of a robot. I then consider whether a robot can recognise that it is (apparently) loved by a human. I argue that robots cannot mind-read, because they do not have minds to mind-read with, nor can they meaningfully recognise symbols of love. As such, HRI cannot satisfy the mutual recognition condition for friendship: humans and robots are not friends.