Emma Julia Cherny is a doctoral student in the Information Systems Department at Åbo Akademi University, researching the conceptual modelling of agentic technology.
Inclusive realities, in which decision-making in all spheres of life is increasingly shared with algorithmic technologies and robots, whether in traditional or new social settings, require a reconceptualisation of design approaches and a better general understanding of the role and responsibility of the algorithm. Legal scholarship is the final instance for building these definitions, yet the current state of research in law remains contested. This panel brings together professors working on different aspects of legal innovation and the transformation of law as robots arrive in people's everyday lives. Telepresence, 5G, sensor technologies, new forms of subjectivity, augmented reality, and metahuman systems all call for legal consideration and analysis. Many of these questions are being addressed by theorists and practitioners of disruptive social innovation with robots.
Lyria Bennett Moses is Director of the Allens Hub for Technology, Law and Innovation and a Professor and Associate Dean (Research) in the Faculty of Law and Justice at UNSW Sydney. She is also co-lead of the Law and Policy theme in the Cyber Security Cooperative Research Centre and Faculty lead in the UNSW Institute for Cyber Security. She is on the NSW Information and Privacy Advisory Committee and the Executive Committee of the Australian Chapter of the IEEE's Society for the Social Implications of Technology, and is a Fellow of the Australian Academy of Law.
What could go wrong? There have been suggestions that robots and artificial intelligence might be given legal personality, creating an entity that can be sued when something goes wrong. Granting legal personality does not hinge on being "like" humans, as other legal constructions (such as corporations) also have legal personality. However, there are a number of challenges, including: (1) identifying the legal person within complex systems; (2) ensuring accountability in the absence of any psychological fear of punishment; and (3) defining the prerequisites for legal personality. Beyond that lies the question of what problem we are trying to solve and whether legal personality is the best solution.
David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published thirteen internationally recognized books, including Thinking Otherwise: Philosophy, Communication, Technology (Purdue University Press 2007), The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the
position of Presidential Research, Scholarship and Artistry Professor in the Department of Communication at Northern Illinois University (USA).
This paper seeks to make sense of the social significance and consequences of technologies that have been deliberately designed and deployed for social presence and interaction. The question that frames the examination, "Should social robots have standing?", is derived from an agenda-setting publication in environmental law and ethics by Christopher Stone, Should Trees Have Standing? Toward Legal Rights for Natural Objects. In extending this mode of inquiry to social robots, the paper will 1) investigate whether and to what extent robots can or should have standing, 2) evaluate the benefits and costs of recognizing legal status for technological objects and artifacts, and 3) respond to and provide guidance for developing an intelligent and informed plan for the responsible integration of social robots.
Amedeo Santosuosso is a representative of the Italian government in the technical negotiations for a UNESCO recommendation on the ethics of artificial intelligence (AI). He is Scientific Director of the European Centre for Law, Science and New Technologies (ECLT), an interdepartmental research centre at the University of Pavia (Italy), which he founded and presided over from 2004 to 2014. From September 2021 to January 2023 he was a part-time professor at the Robert Schuman Centre for Advanced Studies, European University Institute (Florence, Italy). Since April 2018 he has been a member of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST, UNESCO); since the academic year 2019-20, professor of Law and Information Technologies at the Scuola Universitaria Superiore IUSS Pavia (Italy); and since 2015, a fellow at the Center for Legal Innovation (CLI), Vermont Law School (USA).
There may be good reasons not to recognize legal personality for autonomous systems, but there is no good reason not to debate the topic. On February 16, 2017, the European Parliament asked the Commission to explore the possibility of recognizing a specific legal status for robots, so that at least the most sophisticated autonomous robots could be considered electronic persons responsible for compensating any damage they cause. The reaction has been one of outright rejection. There are, however, several good reasons to critically re-examine the grounds for this opposition.
Ben Wagner is Assistant Professor at the Faculty of Technology, Policy and Management and Director of the AI Futures Lab at TU Delft. He is also Professor of Media, Technology and Society at Inholland. Previously, Ben served as founding Director of the Center for Internet & Human Rights at European University Viadrina, Director of the Sustainable Computing Lab at the Vienna University of Economics and Business, and as a member of the Advisory Group of the European Union Agency for Network and Information Security (ENISA). He is a visiting researcher at the Human Centred Computing Group at
Oxford University, advisory board member of the data science journal Patterns and on the International Scientific Committee of the UKRI Trustworthy Autonomous Systems Hub.
Existing accountability mechanisms for AI and robotics fall badly short. While novel regulatory instruments such as the European AI Act (AIA) attempt to define red lines around AI uses that are impermissible in Europe, their deference to industry standards is so great that the relevance of the AIA as a whole as a regulatory tool has been questioned. At the same time, the IEEE and other industry standards bodies are pushing for more ethical AI and robotics, or even ethical black boxes. But will these novel measures provide actual accountability in practice? We argue that they cannot provide meaningful accountability while they remain dependent on current industry standards bodies. While the wish to collaborate with industry is understandable in terms of developing feasible solutions, the reality of current industry collaborations suggests that it makes accountability impossible. In conclusion, we suggest ways of reconfiguring the public-private relationship so that meaningful accountability can be achieved.