Trust in Artificial Agents

 SESSION 8 | Thursday, August 22, 10:25 – 12:05 | Auditorium 3 (1441-113)


Thursday August 22, 10:25-10:55 CEST, Auditorium 3 (1441-113)

Mathilde Undrum Smidt, University of Copenhagen, Denmark

Mathilde Undrum Smidt is a Communication and IT Master’s student at the University of Copenhagen. She recently finished her MA thesis on predicting the perceived trustworthiness of large language models. Mathilde works as a digital business consultant at Valcon. Her main research areas are artificial intelligence, large language models, trustworthiness, and cognitive bias.

Olivia Figge Anegaard, University of Copenhagen, Denmark

Olivia Figge Anegaard is a Communication and IT Master’s student at the University of Copenhagen and currently works at the company Templafy. She recently finished her Master’s thesis on the perceived trustworthiness of large language models from a user perspective, explored through a mixed methods approach. Her main research areas are artificial intelligence, large language models, trustworthiness, and cognitive biases. 

Anders Søgaard, University of Copenhagen, Denmark

Anders Søgaard is a full professor of computer science at the University of Copenhagen. He is a recipient of an ERC Starting Grant, a Google Focused Research Award, and a Carlsberg Semper Ardens Advance grant, and runs the Center for Philosophy of Artificial Intelligence.

How Good Are We at Assessing the Trustworthiness of LLMs?

What is predictive of people’s trust in instruction-tuned LLMs such as ChatGPT-3.5 or LLaMA-2? Chain-of-thought prompting has been proposed as a technique that would increase trust. We find, somewhat surprisingly, that while people prefer chain-of-thought explanations, such explanations increase trust when they are not read, but decrease trust when they are read. Moreover, the question type also influences how people assess an LLM’s trustworthiness. In total, 13% of the variance in trustworthiness judgments can be attributed to factors that are independent of the model response. In sum, people’s trust in instruction-tuned LLMs seems to be affected by factors that do not pertain to the quality of the output.


Thursday August 22, 11:00-11:30 CEST, Auditorium 3 (1441-113)

Leonardo Espinosa-Leal, Arcada University of Applied Sciences, Finland

Leonardo Espinosa-Leal has a broad range of research interests, including applied artificial intelligence, autonomous intelligent machines, mining of big datasets, clustering of time series, causality, quantum machine learning, extreme learning machines, creative machines, philosophy of artificial intelligence, computer vision, and deep reinforcement learning, among many others.

On the meaning of trust, reasons of fear and the metaphors of AI: Ideology, Ethics, and Fear

Metaphoric representations of AI and its ‘trustworthiness’ participate in a process of humanization of technology and dehumanization of humanity. This process has ideological connotations compatible with the Neoliberal project of enforcing a social order based on instrumental rationality and the survival of capital through the logic of the ‘self-regulating market’. These metaphors, and the process they are part of, generate fears that the debate about ‘trustworthy AI’ seeks to address. These efforts, however, are doomed to fail because the mainstream debate about the ethics of AI neglects the ideological dimension. In this paper, we address these concerns and this neglect, and suggest some practical steps to oppose the ideological appropriation of AI and its ethics.


Thursday August 22, 11:35-12:05 CEST, Auditorium 3 (1441-113)

Pericle Salvini, University of Oxford, United Kingdom

Pericle’s educational background is in the humanities (literature and theatre studies); however, he has been working with roboticists for several years. His research interests gravitate around HRI, RRI, roboethics, education, and robotic art. Pericle is currently a Research Consultant at the Responsible Technology Institute of the University of Oxford, where he is conducting research on responsible robotics, in particular on data recorders for robots and robot accident investigations.

Marina Jirotka, University of Oxford, United Kingdom

Marina Jirotka is Professor of Human Centred Computing in the Department of Computer Science at the University of Oxford and a Governing Body Fellow of St Cross College. She leads an interdisciplinary research group that combines social and computer science approaches to the design of technology. Marina is an EPSRC Established Career Fellow conducting a five-year investigation into Developing Responsible Robotics for the Digital Economy. She is Director of the Responsible Technology Institute at Oxford and co-director of the Observatory for Responsible Research and Innovation in ICT Ltd; she is also Editor in Chief of the Journal of Responsible Technology. She has published widely in international journals and conferences, including Human Computer Interaction, Computer Supported Cooperative Work, and Responsible Innovation.

Lars Kunze, University of the West of England, United Kingdom

Lars Kunze is a Full Professor in Safety for Robotics and Autonomous Systems at the Bristol Robotics Laboratory at UWE Bristol. 

Prior to this, he was a Departmental Lecturer in Robotics in the Oxford Robotics Institute (ORI) and the Department of Engineering Science at the University of Oxford (where he is now a Visiting Fellow). In the ORI, he leads the Cognitive Robotics Group (CRG). 

He is also the Technical Lead at the Responsible Technology Institute (RTI), an international centre of excellence focused on responsible technology at the University of Oxford, and a Programme Fellow of the Assuring Autonomy International Programme (AAIP) at the University of York.

Jo-Ann Pattinson, University of Leeds, United Kingdom

Jo-Ann Pattinson is a Research and Impact Fellow at the Institute for Transport Studies, specialising in the regulation of new technology and transport policy. Her research focuses on the impact that law, policy, and technology have on people and society. She is an experienced litigation and dispute resolution solicitor entitled to practise law in England and Wales, and formerly worked in private practice acting in litigation disputes. Her career began as a barrister and solicitor in the Northern Territory of Australia.

Alan Winfield, University of the West of England, United Kingdom

Alan Winfield is Professor Emeritus of Robot Ethics. His work at UWE spans research and public engagement. He conducts research in cognitive robotics within the Bristol Robotics Lab, is a member of the Science Communication Unit, and undertakes public engagement work centred on robotics. Robot ethics is a significant focus of his current work, including the development of new standards.

On Robots with Reasoning Capabilities and Human-like Appearance and Behaviour: Implications for Accident Investigations

On the one hand, AI-enabled reasoning allows robots to create detailed accounts of their own situated behaviour as well as the behaviour of other people. This capability is currently employed to achieve transparency and trust, and to enhance robots’ social and communicative capabilities. On the other hand, robots may assume a human appearance, enabling them to express and convey emotions and gestures in a manner akin to that of humans. This approach is designed to facilitate more effective interactions with people. This article examines the ethical, social, and legal implications of these capabilities for the investigation of robot accidents. In particular, we examine two cases: a) robots capable of giving testimony about an incident in which they are directly or indirectly involved; and b) android robots as subjects of human witnessing.