Future Responsibilities

SESSION 14 | Friday, August 23, 9:00 – 10:40 | Auditorium 2 (1441-112)


Friday, August 23, 9:00-9:30 CEST, Auditorium 2 (1441-112)

Marc Champagne, Kwantlen Polytechnic University, Canada

Marc Champagne is a Regular Faculty Member in the Department of Philosophy at Kwantlen Polytechnic University in Canada. His research explores the metaphysics, semiotics, and ethics of technological artificiality. He has argued (with Ryan Tonkens) that, even in cases where robotic autonomy severs any causal link, companies should publicly accept real consequences for releasing unsafe AI. Before coming to KPU, he was a Visiting Assistant Professor at Trent University. He holds a PhD in Philosophy from York University and a PhD in Semiotics from the University of Quebec in Montreal, and he completed his postdoctoral work at the University of Helsinki.

“Responsibility” Plus “Gap” Equals “Problem”

Peter Königs recently argued that, while autonomous robots generate responsibility gaps, such gaps need not be considered problematic. I argue that Königs’ compromise dissolves under analysis since, on a proper understanding of what “responsibility” is and what “gap” (metaphorically) means, their joint endorsement must repel an attitude of indifference. So, just as “calamities that happen but don’t bother anyone” makes no sense, the idea of “responsibility gaps that exist but leave citizens and ethicists unmoved” makes no sense. 


Friday, August 23, 9:35-10:05 CEST, Auditorium 2 (1441-112)

Arzu Formanek, University of Vienna, Austria

Arzu Formanek is a doctoral researcher at the University of Vienna, writing under the supervision of Mark Coeckelbergh, Mark Bickhard, and Sven Nyholm. She is also associated with the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA). Her research covers a wide range of areas: Philosophy of Technology; HRI; Cognitive Scientific Approaches to Robot Ethics and Robot Moral Patiency; Robophilosophy; Interactivism; Philosophy of Cognitive Science; Intelligence; Automation and Industrial Design.

Versatile interactions with robotic affordances: using the OASIS framework to bring differentiation in the indirect moral patiency debate

“The novel capacities of multimodal generative AI suddenly bring us much closer to realizing the longstanding vision of ubiquitous social robotics,” says the opening line of Robophilosophy 2024. However, the normative conceptual space in which we evaluate human treatment of robots is not quite ready for such an extension. Most discussions are still motivated by the mistreatment of robots (like kicking a robot dog) and by anthropomorphism, resulting in worries that mistreating robots might have undesirable implications or consequences for human moral practices. These approaches thus fall short of accounting for (i) the novel and versatile “affordance mixtures” that robots can offer, especially when they are equipped with specific AI systems; (ii) the novel, versatile, and dynamic interaction opportunities arising from (i); and (iii) the fact that, thanks to (i) and (ii), robots can and shall be used as products in many different areas and ways, while being part of the social realm as sociable beings. To account for this versatility and novelty, we need a conceptual framework that allows us to differentiate an affordance treatment of a robot from a mistreatment, a usage from an abuse. I show that the OASIS framework provides this differentiation.


Friday, August 23, 10:10-10:40 CEST, Auditorium 2 (1441-112)

Sandrine Rose Schiller Hansen, University of Copenhagen, Denmark

Sandrine Rose Schiller Hansen is a Postdoc at the Centre for Philosophy of AI at the University of Copenhagen, working on questions related to manipulation, intimacy, and the psychosocial dimensions of technology. She earned her PhD in Philosophy at KU Leuven, Belgium.

Anders Søgaard, University of Copenhagen, Denmark

Anders Søgaard is Professor of Natural Language Processing and Machine Learning at the University of Copenhagen. He is jointly affiliated with the Department of Computer Science, the Department of Philosophy, the Pioneer Centre for Artificial Intelligence, and the Center for Social Data Science. He previously held positions at the University of Potsdam, Amazon Core Machine Learning, and Google Research. He is a father of three and a published poet.

Captivation Lures and Social Robots

Social robots, including chatbots and virtual agents, can be fine-tuned on engagement feedback from end users, and their behavior optimized to keep users on the platforms. We refer to strategies induced in social robots in this way as captivation lures, to reflect their manipulative and trap-like nature. We argue that captivation lures differ from previously studied phenomena, including nudging, hypernudging, non-argumentative influence, and subliminal techniques. Researchers have studied nudging (say, in commercials) and hypernudging (say, in recommender systems) for decades, but recent developments in generative artificial intelligence and social robotics have made new ways of manipulating end users possible. Captivation lures can be induced from human feedback, and unlike traditional nudging and hypernudging they are not limited to a repository of human-produced content and known nudging strategies. The fact that social robots can induce new, unseen, and hard-to-detect strategies at scale, even in the absence of manipulative intentions, has important philosophical and moral implications.