WORKSHOP 5 | Wednesday, August 21, 10:25-12:05 | Workshop Room 2 (1441-210)
Leda Berio is a philosopher of cognitive science working as a postdoc at Ruhr University Bochum in the INTERACT! project. She investigates how our culture and language shape our interactions with others, as well as how we see ourselves. With respect to artificial agents, she has explored which factors determine spontaneous perspective taking involving robotic avatars. Her more general aim in this line of research is to develop an account of how social norms shape our emotional involvement in interactions with social robots and AI, through both theoretical and collaborative empirical research.
Jonas Blatter is a postdoctoral researcher in the INTERACT! project at Ruhr University Bochum working on the theory of emotions and reactive attitudes in social interactions with both humans and machines. During his PhD, he worked on the social and moral norms of interpersonal emotions, specifically the fairness of emotions towards others. More recently, he has been working on emotion ascriptions to artificial agents, specifically on fictionalist theories of mental state attribution in human-machine interactions.
Social robots can take on the role of a social other in human-robot interaction, a role that is similar to, or represents, the way we interact with other people. Different disciplines and approaches conceptualize this role in a variety of ways. For example, representationalism holds that robots can be designed to display (stereo-)typically human traits that signify things like gender, race, sexuality, age, and class, and that these traits directly elicit scripts or produce affordances in the people interacting with them. On this approach, such features may be seen as representative of people of a specific group, making the robot into a representation of a member of that group. Alternatively, the sociomorphing approach [4] focuses on how social robots can be designed to display features of agency; it concentrates on how humans interact with robots based on the abilities, skills, etc. they perceive robots to be capable of. In contrast to both of these, a fictionalist approach assumes that people interacting with robots construct a narrative in which the robot figures as a fictional character, one that can have traits and capabilities the material robot lacks or is not even designed to signify. We discuss these theoretical approaches, the conflicts that arise between them, the related ethical questions, and the constraints they pose on robot design.
Katie Winkle is an assistant professor at the Department of Information Technology at Uppsala University and is part of the Human Machine Interaction unit at the Division of Vi3, working in the social robotics lab. Before joining Uppsala University, she completed a Digital Futures postdoctoral research fellowship at the KTH Royal Institute of Technology in Stockholm and a PhD in robotics at the Bristol Robotics Lab in the UK. Her work draws on design and the computer, cognitive, and social sciences to tackle technical and societal challenges relating to human-machine interaction.
A number of recent works call attention to social robot identity performance: the potential for social robots to perform socially salient (human) identity traits like race and gender, and the implications this might have for reinforcing, exacerbating, or even challenging social inequities. Whilst critical reflections on this issue are necessary and important, focusing only or primarily on human likeness risks neglecting the way in which robots, particularly those that do not have high human likeness, may actually represent specific, individual humans. Empirical work on how users perceive service robots examines the extent to which users (do not) reflect on the humans such robots represent (think of the passengers of an autonomous vehicle or the teleoperator of a robot shopper) and their (lack of) willingness to treat such robots as social equals when negotiating incidental interactions. This work indicates that human responses to robots may well be driven much more by the robot’s embodiment and social identity performance than by the robot’s level of autonomy or functionality in serving human needs. Grappling with human representativeness, considering both the human likeness and the human representation issue, is a challenge for HRI designers. How can different approaches to “robots as people” help?
Frank Förster is a senior lecturer at the University of Hertfordshire teaching robotics and artificial intelligence. A main focus of his research is robotic language acquisition and communicative interaction between humans and machines. He has worked on the acquisition of linguistic negation and is the first person to enable a humanoid robot to learn the word 'no' from linguistically unconstrained interactions with humans. More recently, he has been working on multimodal conversational repair and increasing the fluidity of human-robot interaction.
For more details see: Förster, F., Broz, F., & Neerincx, M. (2023). Taking a strong interactional stance. Behavioral and Brain Sciences, 46, e29. Förster, F., Saunders, J., Lehmann, H., & Nehaniv, C. L. (2019). Robots Learning to Say “No”: Prohibition and Rejective Mechanisms in Acquisition of Linguistic Negation. ACM Transactions on Human-Robot Interaction, 8(4), 26 pages.
The capability to engage in conversation is the hallmark of our species and possibly an important factor for a machine to be considered a social other. In contrast to the recent advances in text-based natural language processing in the form of large language models, embodied, multimodal speech-based interaction relies on rapid interpersonal processes. These processes at times include the mutual prediction of each partner's wants, beliefs, and knowledge (a theory of mind of sorts), as well as robust repair processes that help to create and maintain the common ground between speakers. Some of these processes appear to be automatic or semi-automatic, indicating that not everything that happens in conversation is under conscious control. While research into the exact details and technical implementation of these processes is still in its early stages, we have found indications that a human interactor's perception of a robot hinges at least partially on whether the robot can engage in these processes at the required speed. In this talk, I will present some examples of such interactions and the "pull" that a timely and appropriate response exerts on the human interactor. From a more theoretical perspective, the presented observations broadly support Seibt, Vestergaard, and Damholdt's proposal to occasionally replace references to anthropomorphization with the notion of sociomorphing when explaining a person's apparent attribution of social qualities to a robot. Where anthropomorphization requires conscious and slow high-level cognitive processes, sociomorphing, as a variant of direct perception, may be automatic, fast, and sub- or preconscious.
Kerstin Fischer is professor of Language and Technology Interaction at the University of Southern Denmark and director of the Human-Robot Interaction Lab in Sønderborg. Her research focuses on the interaction between humans and robots, her most important concern being what mechanisms and processes are involved. She uses a broad range of qualitative and quantitative methodologies to study people's behavior, bringing her background in linguistics, communication, and multimodal interaction analysis to the study of behavior change, persuasive technology, and human-robot interaction.
At the heart of robophilosophy and human-robot interaction research lies the question of how robots come to be perceived as social actors, or ‘like people’. The depiction model (Clark & Fischer 2023) takes a constructivist approach by assuming that people understand robots as social beings so easily because they rely on the mechanism of depiction, with which we are all familiar from a very early age. The approach therefore addresses the effects of staging the robot, a certain willingness to enter into playful pretense, and the fast processes involved in decoding depictions, among many other observations. In this constructive process, people actively engage in the creation of a non-standard, fictional character; nevertheless, their experiences with and emotions towards these characters are real, in line with research on how people respond to fiction in general. By taking a constructivist stance, which is consistent with people dropping in and out of treating robots as social beings, as well as with interpersonal differences, we assume that people are not victims of some kind of confusion or deception, but rather agents who engage in a joint pretense to a greater or lesser extent. Moreover, the model explains how people know how to interact with a specific robot, relying on constructional coordination processes that are common to human interaction (like recipient-designing their behaviors and building and drawing on common ground). See also: Clark, H. H., & Fischer, K. (2023). Social robots as depictions of social agents. Behavioral and Brain Sciences, 46, e21.