PAOLO DARIO is Professor of Biomedical Robotics and scientific coordinator of the research area "Robot Companions for Citizens" at the Sant'Anna School of Advanced Studies, Pisa (SSSA). He is one of the most authoritative voices in Italian robotics and is the director of ARTES 4.0, the Italian Competence Centre on Advanced and Collaborative Robotics and Artificial Intelligence.
Marianna Capasso is a Research Fellow at the Institute of BioRobotics at SSSA, Italy. Her main research interests are philosophy of technology, applied ethics, and political theory. In particular, her research focuses on Meaningful Human Control, responsibility with AI-driven systems, Neorepublicanism, and Value Sensitive Design.
Federica Merenda is a Post-Doc Research Fellow at SSSA. She is a researcher in Political Philosophy and Human Rights working on the impact of Robotics and AI in these fields. She also serves as an Expert in the Law and Ethics of AI for the Italian Ministry for Technological Innovation and Digital Transition.
Fiorella Battaglia is Professor of Philosophy at the University of Salento and the Ludwig-Maximilians-University Munich. She has longstanding expertise in ethics, metaethics, normative ethics, and philosophy of technology. Her outline of a normative concept of 'human nature' (2017) is a cornerstone in the debate about anthropological questions raised by the discourse on technology.
Since robots with a high degree of adaptability and autonomy now increasingly support humans both at work and in private life, new applications are emerging that make optimal use of robots cooperating with humans in different social settings. These factors are pushing the social robotics market to new frontiers, whose ambition is to provide sophisticated and novel approaches to robot performance, social acceptability, and sustainability. This workshop is designed to critically evaluate these questions from a transdisciplinary perspective, by exploring how social robots can affect and challenge trust in human social institutions and practices.
Marianna Capasso is a Research Fellow at the Institute of BioRobotics at SSSA, Italy. Her main research interests are philosophy of technology, applied ethics, and political theory. In particular, her research focuses on Meaningful Human Control, responsibility with AI-driven systems, Neorepublicanism, and Value Sensitive Design. She has authored or co-authored articles on these topics in journals such as Medicine, Health Care and Philosophy, Minds and Machines, Frontiers in Robotics and AI, and others.
Predictions indicate that robots will become part of our social environments, and they are already taking over tasks in many sectors (Gasser 2021). The development of robotic skills that make it possible to understand the social context, to respect social norms, and to recognise violations of social norms is fundamental for ensuring trust in human-robot interaction practices. To date, the question of how social robots can comply with social norms is highly debated, but the current state of the art still needs considerable input from a transdisciplinary perspective, especially from the Social Sciences and Humanities (Avelino et al. 2021). One of the major objectives of TERRINet was to reach out to scientific communities beyond robotics, in order to foster a more meaningful approach towards innovation. Thus, my aim is to situate the question of Social Robotics within the theoretical literature on social norms. In social norm theory, a social norm can be defined as a collective practice sustained by empirical and normative expectations (Bicchieri 2006; descriptive and injunctive norms in Cialdini et al. 1990). By investigating the frameworks used in social-norms theories, I identify four theoretical spaces of inquiry for social norms: nature, relation, evolution, and categories of actors (Legros and Cislaghi 2020). Finally, I discuss how the introduction of social robots can impact these four themes, to further advance the understanding of the societal significance of robots.
Federica Merenda is a Post-Doc Research Fellow at SSSA, where she obtained her Ph.D. cum laude. She is a researcher in Political Philosophy and Human Rights working on the impact of Robotics and AI in these fields. She also serves as an Expert in the Law and Ethics of AI for the Italian Ministry for Technological Innovation and Digital Transition. She has authored and co-authored articles in peer-reviewed journals such as Minds and Machines, Frontiers in Robotics and AI, and Ragion Pratica, and has contributed to edited volumes published by Routledge, Edward Elgar, and others.
Political Science methodologies for assessing the quality of democracy identify three dimensions along which to evaluate the performance of contemporary democratic institutions: procedure, content, and result (Diamond and Morlino 2004). In evaluating the quality of democracy with respect to its result, a pivotal role is played by responsiveness, that is, the ability of those who govern to honour the trust that citizens place in them (Dahl 1973; Almond and Verba 1979). How does the introduction of robots into public service design influence citizens' trust in the public administration (Kok et al. 2020)? In my research I examine the impact that employing robotic systems to replace or support public officials has on institutional trust (Luhmann 1990), that is, the dynamic relation between individuals and the social and political system they interact with when using public services. Case studies will be identified in the security and healthcare sectors.
Fiorella Battaglia is Professor of Philosophy at the University of Salento and the Ludwig-Maximilians-University Munich. She has longstanding expertise in ethics, metaethics, normative ethics, and philosophy of technology. Her outline of a normative concept of 'human nature' (2017) is a cornerstone in the debate about anthropological questions raised by the discourse on technology. She has worked extensively in action theory, roboethics, and the ethics of information technology, and has published more than 70 scientific articles on the normative, anthropocentric, and ethical aspects of techno-social systems.
Trust is an essential feature of social interaction. Without trust and mutual respect it is impossible to establish any genuine and valid cooperative relationship (Baier 1993; Jones 1996; Oshana 2014). As sharers of a common interest, as members of the same family, as colleagues, as friends, as lovers, as citizens, as chance parties in an enormous range of transactions and encounters, we cannot help but trust others and show ourselves to be trustworthy partners towards those involved in the relationship. Trust characterizes not only relationships among humans but also those between humans and machines (Weckert 2005; Ess 2010). This raises the question of whether robots can be trusted as Robot Companions and, consequently, challenges our very notion of trustworthiness. Trust also entails the possibility of deception, which can hurt us in many respects. Given this potentially threatening aspect of trust, it is important to assess carefully when trust is properly placed. Likewise, it is fundamental to investigate possible hindrances that could impede the development of trustworthiness (Scheutz 2009; Danaher 2020; Sætra 2021; Malle et al. 2015). Both inquiries are required, since making room for some forms of deception could have undesired impacts on the foundations of human cooperation and social practices.