WORKSHOP 6 | Wednesday, August 21, 13:35 – 16:35 | Workshop Room 1 (1441-110)
Pekka Mäkelä has a doctorate in philosophy (University of Helsinki). He is the vice director of the Helsinki Institute for Social Sciences and Humanities (HSSH). Together with Hakli, Mäkelä is a PI of the RADAR (Robophilosophy, AI Ethics and Datafication Research) group. His research interests are in the normative dimensions of collective action, social ontology, the philosophy of the social sciences, and philosophical problems of social robotics and human-robot interaction. Presently he is the PI of three research projects on such themes as Ethical Risks and Responsibility of Robotics, Group Processes in Human-Robot Ensembles with Social Robots, and Trust and Value-Sensitive Design.
Raul Hakli is a university researcher in practical philosophy at the University of Helsinki. Together with Mäkelä, he leads the RADAR research group. His research areas include the philosophy of social robotics and artificial intelligence, social ontology, epistemology, and action theory. Hakli was a co-organizer of the first Robophilosophy conference in 2014 in Aarhus, Denmark, and together with Pekka Mäkelä he organized the previous Robophilosophy conference in Helsinki, Finland, in 2022. Currently, he is the PI of the research project "Towards Responsible AI", funded by the Kone Foundation. He has co-edited volumes on the philosophy of social robotics, and he is the editor-in-chief of the Springer series Studies in the Philosophy of Sociality.
The workshop discusses social robots participating in hybrid groups consisting of both human beings and social robots. In social robotics and HRI, the dominant paradigm has been to study interactions between one human being and one robot. Studying robots in group contexts raises new and interesting questions and opens perspectives complementary to both HRI research and robophilosophy. The talks in this workshop will deal with conceptual and normative issues stemming from robots occupying roles in social groups, including questions related to trust, responsibility, and joint action.
In this talk we will focus on conceptual features of coordinated group actions. We argue that trust between the participants of a group action is a presupposition of smooth coordination, and we analyze what kinds of practical reasoning patterns smooth and successful coordination requires on the part of team members. Successful practical reasoning builds on rather complicated loops of mutual belief: each participant holds beliefs about the other participants' beliefs, about their individual intentions to participate, and about their motivating reasons. Having provided these conceptual preliminaries, we move on to scrutinize the possibility of having robots as full-blown participants in we-mode group action. We study whether robots could participate in joint action via team reasoning, and whether there is conceptual space for robots to figure as trustworthy cooperators in light of various philosophical accounts of trust. We also discuss different notions of joint action and group action and locate robots on a scale of such notions of varying strength. In addition to studying trust between the participants, we analyze what it could mean for outsiders to trust collective agents consisting of both humans and robots.
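The loops of mutual belief described above can be made precise with the standard fixed-point characterization of mutual belief from epistemic logic. The formalization below is a conventional textbook rendering offered for illustration, not the speakers' own notation: B_i is the belief operator of agent i, G the team, and φ the proposition the team coordinates on.

    \[
      \mathit{MB}_G\,\varphi \;\leftrightarrow\; \bigwedge_{i \in G} B_i\bigl(\varphi \wedge \mathit{MB}_G\,\varphi\bigr)
    \]

Unfolding the fixed point yields the infinite hierarchy B_i φ, B_i B_j φ, B_i B_j B_k φ, and so on for all i, j, k in G, which makes explicit why fully spelled-out mutual belief is demanding for human and robotic team members alike.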
Kamil Mamak is a philosopher and a lawyer. He is a postdoctoral researcher at the RADAR group at the University of Helsinki and an assistant professor at the Department of Criminal Law at the Jagiellonian University. He has authored three monographs and more than 30 peer-reviewed journal articles and book chapters. He has received a research grant from the National Science Centre in Poland.
Ascribing responsibility for the harmful actions of robots and AI systems is one of the most important problems raised by their deployment. Many proposals have been made to deal with the responsibility gap. One of them is to share responsibility between humans and AI/robots. This idea is doubtful: Hakli and Mäkelä argue that a system of entities cannot be responsible unless every element of the system is capable of moral agency. The only entities responsible for the outcomes of collaborations are humans. Accepting this position means that, at least for now, the harmful outcomes of human-robot teams would be attributed to the humans. But to what extent would this be acceptable if robots were team members who initiated or reinforced wrongful decisions? Research suggests that humans cooperating with robots may have difficulty accurately evaluating situations that might lead to harmful outcomes. In this paper, I focus on the "team mode" as an amplifier of this blurring of responsibility.
Anna Strasser, PhD, is the founder of the DenkWerkstatt Berlin and works as an independent, freelance philosopher. She held postdoctoral positions in Freiburg (Center for Cognitive Science) and Berlin (Berlin School of Mind and Brain), as well as visiting fellowships at Tufts University (with Daniel Dennett) and at UC Riverside (with Eric Schwitzgebel). Since autumn 2020, she has been an associate researcher in the Cognition, Values, Behaviour (CVBE) research group at LMU Munich, led by Ophelia Deroy. Already in her dissertation, 'Cognition of artificial systems', she addressed questions concerning the agency of artificial systems (see www.denkwerkstatt.berlin).
Many studies in HRI have shown that humans attribute not only agency but also social skills to robots. In view of the recent progress in generative AI, there are increasing voices from the philosophy of AI holding that social attributions take place, and can be philosophically justified, even in two-dimensional settings (i.e., in interactions with chatbots). I assume that the application of generative AI in social robotics will give rise to many new studies. Without anticipating their results, I would like to put forward possible hypotheses for discussion, relying on insights I have gained concerning interactions with LLMs. In dealing with the question of what we actually do when we interact with LLMs, I argue that human-machine interactions with LLMs cannot be reduced to pure tool use, and I propose a conceptual framework that can cover interesting in-between phenomena, such as quasi-social interactions and asymmetric joint actions. To this end, I am interested in the implications to be expected once our sociality gains traction within communicative exchanges in HRI. Will earlier studies, in which robots 'had' increased linguistic abilities only because they were covertly operated by humans, simply be replicated? Or does it make a difference if the robot actually has new abilities in speech production? And what changes in the well-researched attribution practices could emerge if one were to examine not only one-to-one interactions but interactions in mixed groups? What effects does it have when participants in a study pursue a common goal simultaneously with other humans and robots? Can the conceptual proposal of asymmetric joint actions, which distinguishes between junior and senior partners in a joint action, be useful for forming hypotheses?
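As a discussion aid, the junior/senior distinction in asymmetric joint action can be given a minimal toy rendering in code. Everything below, from the class names to the asymmetry test, is a hypothetical illustration, not part of Strasser's published framework.

    # A toy rendering of the junior/senior partner distinction in
    # asymmetric joint action. All names and fields are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Partner:
        name: str
        role: str           # "senior" or "junior"
        can_commit: bool    # can undertake a full joint commitment

    @dataclass
    class JointAction:
        goal: str
        partners: list[Partner]

        def is_asymmetric(self) -> bool:
            # Asymmetric: senior partners carry the full joint commitment,
            # junior partners contribute without being able to undertake it.
            return {p.role for p in self.partners} == {"senior", "junior"}

    # A human and an LLM-driven robot pursuing one goal together: the human
    # figures as the senior partner, the robot as a junior partner.
    human = Partner("human", "senior", can_commit=True)
    robot = Partner("robot", "junior", can_commit=False)
    print(JointAction("set the table", [human, robot]).is_asymmetric())  # True

On this toy reading, the empirical questions above become questions about which configurations of senior and junior partners in mixed groups shift participants' attribution practices.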
Tuomas Vesterinen is a philosopher of science specializing in psychiatry and the ethics of artificial intelligence, with additional interests in the philosophy of mind and anthropology. He is a postdoctoral researcher at Stanford University (the Scandinavian Consortium for Organizational Research and anthropology) and a member of the RADAR (Robophilosophy, AI Ethics and Datafication Research) group at the University of Helsinki. His interdisciplinary research focuses on the ethical, conceptual, and social consequences that arise when artificial intelligence is employed in psychiatry and mental healthcare. His dissertation in philosophy, "Socializing Psychiatric Kinds" (University of Helsinki, 2023), examines the role of social factors and non-epistemic values in the classification and explanation of psychiatric disorders.
Robots are increasingly occupying demanding social roles in mental healthcare. These roles have primarily been studied in the context of individual-level robot-human interactions. In this talk, I examine instead the consequences of conceptualizing robots as able to take over social roles in mental healthcare organizations. I delve into how interactions with social robots, such as avatar robots and socially assistive robots, can reshape the dynamics and obligations between clinicians and patients in healthcare groups, and I explore the broader societal and ethical implications of these changes. If robots are considered able to play social roles usually occupied by humans, they will inherit some of the causal powers assigned to those roles. I argue, nonetheless, that this inheritance cannot be complete. First, I follow Hakli and Mäkelä's argument that AI systems lack moral agency, and I examine how it affects organizational dynamics in healthcare. The lack of full agency of robots raises questions about responsibility gaps not only in individual-level interactions but also at the organizational level. Second, I contend that robots cannot fully assume social roles due to their inability to internalize these roles. This suggests that interactions between robots and humans are inherently emotionally and psychologically one-sided, prompting questions about the roles and interactions of patients and clinicians within roboticized organizations.
Olli Niinivaara is a doctoral researcher in computer science, currently working in the RADAR group at the University of Helsinki. He has held R&D positions in both academia and industry. He is interested in the interplay between algorithms and groups, including such topics as multi-agent systems, group decision support systems, group recommender systems, computational social choice, human-robot interaction, and the ethics of AI.
Social robots are robots enriched with social intelligence. Some people find socializing with them somewhat creepy and tend to prefer real people. At the same time, many people would like to be socially more intelligent, either for personal reasons or because of the requirements of their daily job. This situation hints at a new prospect for the social robotics industry: combine the best parts of a social robot and a human being into a social cyborg. The result would escape the creepiness of the artificial chassis and would augment the social intelligence of its host organism. The purpose of my workshop talk is to introduce social cyborgs as an object of inquiry. I will give a rough sketch of a cyborg architecture, including the social context that such a cyborg will induce. I will compare the social cyborg to similar technological concepts, such as social robots, AI-mediated communication, cognitive AI extenders, and artificial moral advisors. I will discuss a sample of application areas where social cyborgs seem most useful, such as the enhancement of strategic communication in group negotiation settings. I will also argue that the concept of social cyborgs gives a new twist to the ethics of manipulation. Lastly, I will discuss individual risks that social cyborgs may introduce, both for those who decide to become cyborgs and for those who cannot afford, or refuse, to extend themselves with artificial social intelligence devices.
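One minimal way to render the "rough sketch of a cyborg architecture" mentioned above is as a draft-advise-decide pipeline: the human host drafts an utterance, an artificial social-intelligence module proposes a revision, and the host retains veto power. The sketch below is a hypothetical illustration under these assumptions; the function names and the stub heuristic are placeholders, not Niinivaara's design.

    # A minimal, hypothetical sketch of a social-cyborg pipeline:
    # host drafts -> social module advises -> host decides.
    from dataclasses import dataclass

    @dataclass
    class SocialAdvice:
        revised: str
        rationale: str

    def social_module(draft: str, context: str) -> SocialAdvice:
        # Stub heuristic standing in for a real model of the social context.
        if context == "negotiation" and draft.endswith("!"):
            return SocialAdvice(draft.rstrip("!") + ".",
                                "exclamations may read as aggressive here")
        return SocialAdvice(draft, "no change suggested")

    def cyborg_utterance(draft: str, context: str, host_accepts: bool) -> str:
        # The host organism always retains veto power over the module.
        advice = social_module(draft, context)
        return advice.revised if host_accepts else draft

    print(cyborg_utterance("We will not move on price!", "negotiation", True))

Even this toy pipeline makes the manipulation worry concrete: the module's rationale shapes what the host says while remaining invisible to the interlocutor.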
Tomi Kokkonen is a philosopher of science and technology, specializing in biological approaches to human sociality and morality and in issues related to sociality and morality in robotics and AI. He defended his PhD in 2021 and is currently working as a postdoctoral researcher in the RADAR group at the University of Helsinki.
When robots function as members of a group in a group action context, ethical worries emerge. When a robot's function is simple, the worries can be met through value-sensitive design (VSD). In more complex and variable contexts, with robots that have a higher degree of autonomy and flexibility in their behavior, the problem of moral decision-making emerges. There are, however, social contexts in between simple functions and those that require artificial morality. I have previously argued that proto-moral capacities (simpler than those needed for morality and evolutionarily preceding true morality) can make robots' behavior more in line with our ethical considerations. The function of these capacities is best understood as enabling participation in certain forms of social interaction. In this paper, I will argue that a way to minimize ethical risks when autonomous robots are deployed in complex social contexts is to combine the proto-morality approach with VSD, focusing on the forms of social interaction within the group's functioning.