Responsible Design

SESSION 11 | Thursday, August 22, 13:35–15:15 | Auditorium 3 (1441-113)


Thursday, August 22, 13:35–14:05 CEST, Auditorium 3 (1441-113)

Lena Fiedler, Technische Universität Berlin, Germany

Lena Fiedler received her master's degree in philosophy with honors in 2021. After a year of interdisciplinary work at a trauma institute in Frankfurt, she is now a research assistant at TU Berlin in a project on human-robot interaction in public spaces. In her doctorate, she investigates the gendering of robots based on stereotypical design. Her research draws on feminist philosophy of science and technology as well as feminist epistemology.

Ethical Implications of Stereotyping Robots in Public Spaces

Anthropomorphic cues can be useful in the design of social robots to ensure intuitive human-robot interaction. However, anthropomorphic design is not just human-like in general; it is often a stereotypical portrayal of a specific social group. Using a fictional case study, namely a wheelchair robot representing disabled persons in public spaces, this paper examines the ethical implications of humanoid robots representing social groups. Three arguments are discussed: firstly, stereotypically designed robots can make implicit stereotypes explicit and hence raise awareness. Secondly, robots can represent underrepresented social groups and thus draw attention to minorities. Thirdly, however, the question arises whether representation by a robot is legitimate or, on the contrary, a dehumanization of the respective group. The aim of this paper is not to answer whether robots should or should not represent social groups, but to raise awareness of the problem of stereotypical design.


Thursday, August 22, 14:10–14:40 CEST, Auditorium 3 (1441-113)

Elodie Malbois, University of Geneva, Switzerland

Elodie Malbois is a philosopher specializing in bioethics. She is currently a post-doctoral fellow at the University of Geneva, where she is part of the Ethics Transversal Taskforce of a project on the evolution of language (NCCR Evolving Language). Previously, she completed a project on the use of social robots in medicine at the University of Göttingen and the University of Southern Denmark. She received her PhD from the University of Fribourg (Switzerland) in 2020.

Beyond Value-Based Design: For a More Comprehensive and Supportive System of Ethical Robot Development

Concern about the potentially harmful effects of implementing multimodal AI in robots, and of AI in general, is rising. There is growing recognition that roboticists should not be left alone with the development of ethical technologies and that they need the support of experts from other disciplines. Interdisciplinary design methods are being developed to ensure that new technologies serve, and are respectful of, their target users. This paper argues that while methods like value-based design are essential to ensure that new social robots serve their target population, they are not sufficient. Risks connected to unintended uses of the technology are likely to be overlooked, and some risk-mitigation strategies might not be accessible to robot designers. Lastly, the current system does not offer enough support or incentives to engage in a time-consuming, albeit ethical, process. We end by identifying multi-level strategies to improve the situation.


Thursday, August 22, 14:45–15:15 CEST, Auditorium 3 (1441-113)

Samantha Stedtler, Lund University, Sweden

Samantha Stedtler's research focuses on social robotics, human-AI collaboration, and 4E cognition. She is specifically interested in the effects of failures during HRI and how these are entangled with expectations, norms, decision-making, and interaction dynamics. She is also interested in applying concepts from feminist technoscience, robophilosophy, and AI ethics in her research.

Who Is Responsible? Social Identity, Robot Errors and Blame Attribution

This paper argues that conventional blame practices fall short of capturing the complexity of moral experiences, neglecting power dynamics and discriminatory social practices. It is evident that robots embodying roles linked to specific social groups risk reinforcing stereotypes of how these groups behave or should behave, thereby setting both a descriptive and a normative standard. The societal roots of these roles necessitate an examination of how individuals react to errors based on the assumed social identity of the robot. This theoretical and empirical gap becomes even more urgent to address given indications of potential carryover effects from Human-Robot Interaction (HRI) to Human-Human Interaction (HHI). We therefore urge roboticists and designers to engage in an ongoing conversation about how social traits are conceptualised and implemented in this technology. Beyond considering ethical aspects in the design phase of social robots, we see our analysis as a call for more research on the consequences of robot genderedness, blame attribution, emotional responses, and stereotyping.