Session 7: Robots in Healthcare

Thursday August 22, 10:25-10:55 CEST, Auditorium 2 (1441-112)

Máté Szondy, Pázmány Péter Catholic University, Hungary

Clinical psychologist, family therapist, mindfulness teacher, researcher. Teaches as an associate professor at Pázmány Péter Catholic University (Budapest, Hungary) and works as a psychologist at the Jewish Charity Hospital (Budapest, Hungary). His areas of interest include positive psychology (well-being, optimism, mindfulness), the effectiveness of "third wave" cognitive-behavioral therapies, and the impact of technological advancements on mental health. Currently, his main research topic is “positive technology”: how technology (e.g., AI, VR, social robots) can support human flourishing. 

Ágnes Zsila, Pázmány Péter Catholic University, Hungary

Ágnes Zsila is a psychologist working as a senior lecturer at the Institute of Psychology, Pázmány Péter Catholic University, Hungary. She is also a research fellow at the Institute of Psychology, ELTE Eötvös Loránd University. Her research interests encompass a broad range of topics in cyberpsychology and popular culture. She has published studies on celebrity worship, cyberbullying, and excessive media use, including social networking sites and video games.

Ágnes Katalin Magyary, Pázmány Péter Catholic University, Hungary

Behavior Analyst, Psychology MA 
Main research areas: 
Artificial Intelligence in Psychotherapy 

Noémi Zsuzsanna Mészáros, Pázmány Péter Catholic University, Hungary

Psychologist, Counselling Psychologist 

Main research areas:  

Artificial Intelligence in Psychotherapy;  

Mental Health Indicators in Adlerian Individual Psychology; 

The Social Psychology of Collective Victimhood and Intergroup Relations 

Artificial Intelligence in Psychotherapy: Optimal Utilization Strategies

The integration of artificial intelligence (AI) into psychotherapy holds promise for revolutionizing mental healthcare delivery, particularly in addressing the pervasive treatment gap and improving accessibility to services. This paper critically examines the landscape of AI utilization within psychotherapy, focusing on its potential benefits, ethical dilemmas, and optimal conditions for integration. AI systems offer valuable support in diagnosis, treatment planning, and intervention personalization, yet navigating the boundaries between AI-driven interventions and human-centered therapy requires careful consideration. Factors influencing AI implementation include the nature of mental health issues, patient attitudes towards technology, and environmental factors such as stigma and therapist shortages. We discuss potential pitfalls, including ethical concerns, power dynamics, and inherent biases in AI algorithms, and underscore the importance of maintaining the therapeutic alliance. Moreover, we highlight the synergies between AI-based and human therapy, emphasizing the need for a balanced approach that prioritizes patient welfare and preserves interpersonal connection.

Thursday August 22, 11:00-11:30 CEST, Auditorium 2 (1441-112)

Anna Dobrosovestnova, Technical University of Vienna, Austria

Anna Dobrosovestnova's background is in semiotics, cultural studies, and cognitive science. Her current work at the intersection of human-robot interaction (HRI) and science and technology studies (STS) explores situated interactions with robots in public spaces and in service sectors, with a focus on social and affective dimensions.

Felipe Gonzalez T. Machado, University of Vienna, Austria

Felipe Gonzalez T. Machado explores the intersection of cognitive science (4E cognition), existential psychology, and political economy. By combining these disciplines, he investigates topics such as social cognition and political polarization.

Tim Reinboth, University of Vienna, Austria

Tim Reinboth is a multi-disciplinary scholar and science journalist. His current interest is in the curious interactions of technology and society.

Digital Shadows: Exploring the Other and the I in Communication with Thanabots

The development of large language models (LLMs) has led to the proliferation of chatbot services such as ChatGPT, Replika, and Project December, further contributing to technologically mediated grief. Variously called griefbots or thanabots, these technologies are postulated to help people deal with the loss of their loved ones. Despite the hype around these technologies in the media, little is known about the actual mechanisms by which thanabots contribute to grieving. This paper contributes to answering this question by offering an interpretation of communication with thanabots through the prism of Yuri Lotman's autocommunication model. We start by examining how a thanabot can be framed as a communicational 'other'. We then draw on the phenomenology-of-grief literature to delineate the boundaries of such 'otherness'. In line with Lotman's autocommunication model, we proceed to argue that conversing with a thanabot can be viewed as an instance of 'I–I' communication. Within this model, the bot functions as a secondary code that allows new meaning to arise in the process of communication and, consequently, allows a bereaved person to renegotiate their new identity in the face of loss.

Thursday August 22, 11:35-12:05 CEST, Auditorium 2 (1441-112)

Salla Jarske, Tampere University, Finland 

Salla Jarske has been working on her doctoral dissertation on social robotics at Tampere University since 2020. The dissertation takes a critical and interdisciplinary approach to social robotics, combining technology design practices with ethnomethodological perspectives from the social sciences.

Kirsikka Kaipainen, Tampere University, Finland

Kirsikka Kaipainen (PhD, Information Technology) currently works as a project manager and a coordinator of the Digital and Sustainability Transitions in Society research platform at Tampere University, Finland. She received her doctoral degree in 2014 from Tampere University of Technology in the field of ICT for health. She has studied the applicability of social robotics in the contexts of education, healthcare and young people’s societal participation. More broadly, she is interested in how technologies can be used to promote sustainability, equality and wellbeing, and in the sustainability of technology itself. 

Kaisa Väänänen, Tampere University, Finland

Kaisa Väänänen is Full Professor of Human-Technology Interaction at Tampere University, Finland. Kaisa leads the Human-Centered Technology (IHTE) research group in the unit of Computing Sciences. She has extensive teaching and supervision experience, as well as experience leading study programmes. Kaisa has more than 25 years of experience in research related to human-computer interaction, in both university and industry settings. From 1995 to 2004, she worked at Nokia. In her research, Kaisa currently focuses on Human-Centered AI and sustainable development supported by digital solutions. She is very active in the international research community and frequently takes part in organizing conferences such as ACM MobileHCI and NordiCHI; she was general co-chair of the ACM SIGCHI 2023 conference. In 2022, Kaisa was selected as an ACM Distinguished Member for her long-standing contributions to the field of computing.

“Maybe it Knows More Than Us” – Exploration of Social Robots as Climate Communicators to Foster Climate Hope in Adolescents

Climate change represents an existential threat that places young people at an increased risk of mental distress, as their entire future is endangered by the rising global temperature. However, climate anxiety (mental distress about climate change) can also coexist with positive emotions such as climate hope. This qualitative study explores the possibilities for social robots to foster climate hope and act as climate communicators, based on their previously demonstrated potential to serve as an engaging medium for important information. We wrote three scenarios in which a robot is described as talking with youth, each emphasizing one of the central facets of climate communication: empathy, positive information, and personal action. Emotional reactions to and opinions about the scenarios were evaluated with 42 groups of ninth graders (n=115, 42 group responses) through an online questionnaire. The results show that all scenarios elicited both positive and negative reactions; reactions were most mixed for the empathy scenario. Our findings suggest that a robot as a climate communicator could attract young people's interest, and that the robot should be designed to communicate by providing objective information and decision support for personal action rather than empathy. Engaging interaction with such a robot implies a certain degree of artificial intelligence in communication, even though the factual information the robot provides should be drawn from reliable and objective sources rather than be generated with AI. However, the sustainability implications of the concept require careful consideration.