Cultural Impact and Techno-Politics

SESSION 1 | Tuesday, August 20, 14:10-16:25 | Auditorium 2 (1441-112)


Tuesday, August 20, 14:10-14:40 CEST, Auditorium 2 (1441-112)

Oleksandra Sushchenko, Aalto University, Finland

Oleksandra holds a master’s degree in philosophy and is currently conducting research in the field of digital culture, technology, and new media. Her work specifically explores how emotions mediate perception and knowledge acquisition, with the goal of identifying best practices for designing digital services that enhance communication in educational and community-building contexts. Additionally, her research addresses ethical considerations and aims to improve users' media literacy. 

Olena Yatsenko, Aalto University, Finland

Olena has a background in philosophy. Her research focuses on data ethics and the ethics of robotics and AI. The research projects in which Olena works study the interaction between humans and robots, the search for optimal educational strategies in the development and operation of robots, the challenges that implementing ethical requirements in management and technical design poses for scientific communication, and the features and dynamics of digital culture. 

Sarah Dégallier Rochat, Aalto University, Finland

Sarah has a mixed background in mathematics, robotics, and psychology. Her research focuses on the development of inclusive human-machine interfaces for controlling robots in industrial and lab environments. Together with her group, she develops interfaces that leverage human-machine complementarity and workers' task expertise. Her main areas of research include no-code interfaces, AR-based guidance systems, tangible programming, and agile automation. 

Losing Human Identity: Current Threats and Challenges

The authors of the article examine the reasons for the pessimistic perspective that envisions a possible catastrophe for humanity caused by the "singularity." We argue that the interaction between humans and machines is a matter of neither competition nor collaboration. Instead, a more realistic outcome of human-machine interaction is the erosion of the genuine foundations of human identity. This perspective is based on an analysis of the essential elements that constitute human identity, as well as of the most significant ways in which robotics and artificial intelligence (AI) affect individual, social, and cultural life. Through this analysis, we aim to identify threats and opportunities and to propose viable solutions for the preservation and sustainable development of humanity.  


Tuesday, August 20, 14:45-15:15 CEST, Auditorium 2 (1441-112)

Dane Leigh Gogoshin, University of Helsinki, Finland

Dane’s major research interests lie at the intersection of normative ethics, the philosophy of action, and the ethics of technology. She is particularly keen to elucidate the relationships between autonomy and responsibility, and between AI and social robotics. In her dissertation, she proposes a novel view of moral agency that combines insights from both moral-influence and capacitarian accounts of responsibility. Her most recent publication, “A way forward for responsibility in the age of AI,” appears in Inquiry: An Interdisciplinary Journal of Philosophy.

AI and Agendas

In this paper, I argue that the primary AI-related ethical issue is that of the agendas driving AI. A portion of AI-related ethical risk stems from indeterministic elements of AI and their interaction with existing social and economic systems; because these elements are indeterministic, and so not predictable, such risks will need to be addressed on a primarily reactive basis. There is room here for ethical guidelines aimed at reducing these risks; however, it is in the area where we have clear, front-end control that we should focus our efforts. I will suggest that the current concerns of AI ethics relating to the nature of AI – its non-transparent, “black-box” nature and its lack of moral agency – do not belong to this area. It is rather the black-box nature and moral agency of the agendas driving AI development and deployment that constitute the primary threats, and these are almost entirely, in principle, within our control. It is these threats that I attempt to uncover and suggest ways to manage. 


Tuesday, August 20, 15:20-15:50 CEST, Auditorium 2 (1441-112)

Jakob Stenseke, Lund University, Sweden

Jakob Stenseke is a PhD candidate in philosophy at Lund University, broadly interested in the three Ms: minds, machines, and morality. His PhD project - titled "How to build nice robots: ethics from theory to machine implementation" - explores the possibilities and challenges of creating artificial moral agents (AMAs): artificial systems capable of acting in reference to what is morally good or bad. This includes the theoretical possibility (whether and to what extent artificial agents can be moral), normative desirability (how, in what way, and why we want ethical machines), and technical engineering (how to build ethical AI) of artificial moral agents. 

Optipolitics: Is Politics an Optimization Problem?

Can AI ‘solve’ politics? In this paper, I explore optipolitics, i.e., the idea that politics – along with other complex social issues – can be framed as a mathematical optimization problem and solved as such. I begin by describing politics, optimization, and some reasons for applying the latter to the former. I then present a liberal democratic version of optipolitics and try to defend it against eight fundamental challenges. While I concede that none of the challenges can be satisfactorily overcome at present, many can be shown to be at least as difficult to overcome in conventional representative democracy. Most importantly, I argue that optipolitics, as a logical endpoint of AI-driven technocracy, can serve as a useful starting point for a critical debate about the future of democratic governance in the age of machines. 


Tuesday, August 20, 15:55-16:25 CEST, Auditorium 2 (1441-112)

Luca M. Possati, University of Twente, The Netherlands

Luca M. Possati serves as an Assistant Professor at the University of Twente in the Netherlands, specializing in human-technology interaction. He is also a senior researcher for the international research program ESDiT (Ethics of Socially Disruptive Technologies). 

He is additionally part of the global NHNAI (New Humanism in the Time of Neurosciences and Artificial Intelligence) project. 

Trained as a philosopher, he has held positions as a researcher and lecturer at the Delft University of Technology in the Netherlands, the University of Porto in Portugal, and the Institut Catholique in France. He has also been an associate researcher with the Fonds Ricoeur and the EHESS (School for Advanced Studies in the Social Sciences). 

His research focuses primarily on the philosophy of technology, postphenomenology, and the psychology of technology. He also works in software studies. 

The Historical and Geopolitical Limit of Responsible Innovation

The central argument of this paper is that the frameworks of responsible innovation (RI) and technology assessment (TA) are rooted in an antiquated political and geopolitical paradigm and thus require a conceptual overhaul. This argument rests on two primary reasons. First, RI and TA are not neutral towards technological innovation; instead, they inherently align with a specific political and geopolitical model: the liberal world order (LWO). This model currently faces significant challenges and crises, which we investigated through a literature review of RI and TA and a subsequent political and geopolitical analysis. Second, the very essence of our technologies has dramatically transformed over the past 20 years. We now live in a world dominated by intricate global engineering systems that are not only political but also geopolitical in nature. These transnational systems influence the decisions and interactions of nations, and the current LWO framework struggles to effectively grasp and manage them. In addition, this paper presents a reinterpreted version of Rodrik's trilemma. This reformulation is designed to consolidate and expand upon the insights already gained: it revisits the issues identified, emphasizing the urgency of revamping both TA and RI. As we embark on this reassessment, the invaluable insights of philosophical reflection should not be underestimated.