
Meet Johanna Seibt

When Humans and Robots Meet


By Jeppe Kiel Revsbech

Together with her team, Professor Johanna Seibt has created the philosophical concept of robophilosophy. With researchers from philosophy, psychology and engineering, among other fields, the team examines how the entry of social robots into society affects human nature and self-understanding. Are they our friends or enemies? And what should we take into account when the robots of tomorrow are being developed?

They can vacuum the floors and mow the lawn. They can deliver packages and serve food. But how far are we willing to go to bring robots into society? And can the increasingly advanced machines help us become ‘better humans’?

These are some of the questions that Johanna Seibt, Professor of Philosophy, and her colleagues at the Research Unit for Robophilosophy and Integrative Social Robotics at Aarhus University are examining within the philosophical field of robophilosophy.

The term was formulated and introduced by the research unit in 2014 and, as the name implies, stems from modern robotics, in which robots are created and designed to interact and communicate with humans – also known as ‘social robotics’. The examples of the technology are manifold: from calming robots that bring joy to people with dementia, to lifelike sex robots that satisfy intimate desires.

“Robophilosophy is defined as ‘philosophy of, for, and by social robotics’. The goal of the technology is to bring robots into society, which means creating artificial ‘social others’. It is a defining step – for the first time in the history of technology, we are creating something that is not just a tool to be used, but a ‘social other’ with whom we interact. Social robotics thus challenges human self-understanding like never before. And robophilosophy is the answer to that challenge,” Johanna Seibt explains.

Meet a Research Group

She emphasises that this article should really be called ‘Meet a research group’, because robophilosophy is largely based on a new interdisciplinary approach to research that the group has developed, drawing on expertise from philosophy, psychology, anthropology, neuroscience and engineering, among other fields. The idea is to take a more holistic and responsible view of the development of social robots – an approach that, according to Johanna Seibt, is lacking in current technology development.

“While climate change is without a doubt the most pressing problem we are facing, the next problem in line is irresponsible technological development. At present, we are developing technologies with potentially far-reaching implications for society, yet we do so without the right expertise and care,” the philosophy professor says.

“It is ironic that plant experts, for instance, are involved in developing robots that can weed, but when developing robots designed to act in the physical and symbolic space of human social relations, we do not involve researchers from the humanities and social sciences. It is ironic that at a time when engineers need the humanities more than ever, humanities programmes are facing cutbacks,” she points out.

Among other things, the robophilosophy team uses the robot Telenoid in its research. The robot was designed by the Japanese robot developer Hiroshi Ishiguro, who is pictured here with a copy. Archive photo.

The Non-Replacement Maxim

To address the problem of technological irresponsibility, the robophilosophy team has defined a new ‘paradigm’ for the development of robotics, called ‘Integrative Social Robotics’. One of its key principles is the so-called ‘Non-Replacement Maxim’: ‘Social robots may only do what humans should but cannot do.’

“We have a situation where so-called ‘robot ethicists’ discuss whether it is ethically permissible to replace humans with social robots, and whether robots should have rights. That discourse is important, and robot engineers are partly involved in it, but it takes time before it leads to legal regulation. Our strategy is to show what cooperation with the humanities can do. We bring sociocultural values directly into the technology development,” Johanna Seibt explains, and gives a specific example of the group’s work:

“Recently, we were able to show how conflict mediation facilitated by robots leads to more, and more creative, solutions to conflicts. It was a particularly complex experiment and an important result. Similarly, we have examined whether job seekers preferred having a personal conversation with a neutral-looking robot in order to minimise unjust decisions caused by preconscious ethnic and gender prejudices,” she says.

Groups vs. Social Robots

At present, the research unit is working on a project that examines the processes and experiences that arise when groups of humans interact with social robots. Exactly how the research is conducted, Johanna Seibt keeps to herself, but the robophilosophy team is well known for its Telenoid robots, whose distinctive appearance grew out of the idea of ‘the minimal human’ and which were designed by the Japanese robot developer Hiroshi Ishiguro.

“The NordForsk project, a cooperation with the Royal Institute of Technology in Stockholm, the University of Helsinki and the University of Southern Denmark, examines the behaviour, experiences and preconscious processes of members of a group interacting with a robot. We would like to explore whether robots – and if so, which types of robots – are better at facilitating group creativity and certain forms of decision-making,” Johanna Seibt says about the project.

In connection with a portrait for the Carlsberg Foundation, Johanna Seibt gave an introduction to robophilosophy and to how social robots affect us as humans. Video: The Carlsberg Foundation.

The Collingridge Dilemma

The predominant question, however, is what potential and challenges the use of social robots hold for the society of tomorrow. What abilities can robots have, and will we still be able to regard them as machines if they acquire human qualities?

“That is a question no one can answer at the moment. At best, we can examine a specific application in a specific situation over a specific period of time,” Johanna Seibt says.

“Social reality is fantastically complex and dynamic. As we have pointed out, we are currently trapped in a particularly disturbing version of the so-called ‘Collingridge Dilemma’ of technology management: We are releasing a technology into society even though we cannot at this point predict its fundamental implications. Later, once we can see its consequences, it will be too late to withdraw the technology from society,” the professor explains and elaborates:

“The main issue with social robotics is that social robots are radically new objects – they trigger our preconscious mechanisms for social cognition, so that we experience them as ‘social others’. But they do not quite fit our current concepts of social agents, and it is not clear whether we will be able to establish a new category for something that understands certain aspects of social interaction and merely simulates others. Engineers focus on the functional aspects of social interactions, but function is only a small part of what ‘actually happens’ when we interact with something in the symbolic space of human social interactions,” Johanna Seibt says.

The Good Example

For the research unit, it is about setting an example in the development of social robotics. This is done through three overarching goals that apply to each of the six research projects the team is working on.

“Firstly, we want to prevent income generation from becoming the primary concern in robot development, as it has been with social media. We share this goal with many of our international colleagues, who also pursue non-academic strategies, for instance lobbying national and European legislators,” Johanna Seibt says.

“In contrast – and this is our second goal – our strategy is to find and develop constructive examples of how this new technology can be used in culturally sustainable ways. We want to bring about change by setting a good example. In my experience, the engineers working in social robotics are creative and well-meaning people, so offering new perspectives by illustrating them is perhaps the best invitation to closer future cooperation,” the professor points out, emphasising the third – and perhaps most important – goal behind robophilosophy:

“We want to communicate to the public and to politicians that we need the expertise of the humanities to analyse human experiences and processes of individual and social sense-making, if we want to get the best out of advanced technologies and create a future worth living in.”

ROBOPHILOSOPHY

  • Robophilosophy is defined as ‘the philosophy of, for, and by social robotics’.
  • The term was formulated and introduced by the Research Unit for Robophilosophy and Integrative Social Robotics at Aarhus University in 2014 and stems from modern robotics, in which robots are created and designed to interact and communicate with humans – also known as ‘social robotics’.
  • Besides Johanna Seibt, the research unit consists of researchers within philosophy, psychology, anthropology, neuroscience and engineering, among other fields.
  • Read more about the research of the group at robophilosophy.org