SESSION 9 | Thursday, August 22, 10:25 - 12:05 | Session Room 1 (1441-110)
Glenda Hannibal works as a Research Associate (postdoc) in the “Excellence in Digital Sciences and Interdisciplinary Technologies” (EXDIGIT) project, funded by the Federal State of Salzburg and carried out in collaboration with Salzburg Research and Innovation Salzburg GmbH. She obtained a BA and an MA in Philosophy (Aarhus University) and a Doctorate in Computer Science (TU Wien). Glenda has contributed to research in AI and robotics in the areas of human-robot interaction and explainable AI. Her main research interests include AI, social robotics, philosophy of science, epistemology, philosophy of trust, and AI ethics.
While scientific and technological advancements in artificial intelligence and robotics are making autonomous vehicles feasible, social and ethical questions are also being raised to ensure their responsible deployment. Much of the discussion has revolved around potential life-threatening accidents arising from the use of autonomous vehicles in unpredictable traffic environments. In response to the question of whether the infamous trolley problem can guide ethical decision-making for autonomous vehicles, this paper provides a metaphilosophical analysis of the methodological difference between thought experiments in philosophy and problem-solving in science and engineering. We argue that such an analysis can further understanding and collaboration between philosophers and computer scientists working on artificial intelligence and robotics.
Dr. Nao Kodate is Associate Professor in Social Policy and Social Robotics, and the founding Director of the UCD Centre for Japanese Studies (UCD-JaSt). His research straddles comparative healthcare politics and policy, and science & technology studies (STS). Key themes include: care and caring, health services research, systems thinking, safety & care quality, social robotics, welfare technologies, implementation science, and organizational learning. His recent research projects have examined the impact of digitalization and eHealth (e.g. robots) on care, patient safety regulation (e.g. incident reporting systems), and gender equality in science and technology education. His books include "Japanese Women in Science and Engineering: History and Policy Change" (Routledge, 2015), "New International Handbook on Social Welfare in UK & Ireland" (旬報社, 2019, in Japanese), and "Systems Thinking for Global Health" (Oxford University Press, 2022). He co-produced the documentary film "Circuits of Care: Ageing and Japan's Robot Revolution" (2021).
Care robots are now seen as part of the solution to global aging. This paper asks: how has responsible robotics been perceived? What are the missing elements that would enable responsible robotics (in research, development, and wider use) in different jurisdictions? To answer these questions, the article explores the views of welfare technology (WT) developers concerning the current state of robotics development and use in care settings in Ireland and Japan. Semi-structured in-depth interviews were conducted with 14 technology developers in total. The findings indicate that technology developers strongly believe that the use of care robots and WTs would strengthen the long-term care systems in both countries, provided that ethical and other aspects are taken into consideration. A uniform long-term care system can facilitate the top-down introduction of care robots, but it can also widen the mismatch between users’ needs and the solutions that WTs can provide. The inclusion of WTs in professional curricula and training programs, and changing the often-skewed media representation of AI and robotics, were presented as possible ways forward.
Karolina Zawieska is an SSH researcher in the field of Human-Robot Interaction (HRI) and roboethics. Until recently, she was with the School of Culture and Society (CAS) at Aarhus University, Denmark, as a Researcher and a Marie Skłodowska-Curie Individual Fellow (postdoc). She also completed a postdoctoral project at the Centre for Computing and Social Responsibility (CCSR) at De Montfort University, UK. Her other professional assignments to date include serving as a member of Poland's delegation to the United Nations CCW meetings on Lethal Autonomous Weapon Systems (LAWS) and as a member of the Horizon 2020 Commission Expert Group advising on specific ethical issues raised by driverless mobility.
Robotics and AI technologies have often been perceived as radically innovative technologies likely to cause problematic disruptions of society. In particular, ethical concerns about the simulation of high-level human capacities in social robots have engendered calls for various forms of ‘top-down’ regulation, guided by pre-set values and standards. Working from insights of a case study with mobile robots, we suggest in this discussion paper that we might do better in pursuing responsible innovation bottom-up: moving among robots with minimal social interaction skills, citizens may practically acquire the technological knowledge that can protect them against anthropomorphizing overinterpretations and inappropriate attachments. We unfold this suggestion in three steps. First, reporting on our case study, we describe the limited role of value considerations among the design goals that currently guide developers and early adopters of mobile robots in the retail industry. Second, based on an observation from a small ethnographic study and related HRI research on mobile robots, we derive a proposal for bottom-up responsible innovation as a supervised social experiment. Third, we discuss the proposal, pro and contra, in the different voices of our project group.