Consciousness, Autonomy, and Meaning

SESSION 10 | Thursday, August 22, 13:35-17:10 | Auditorium 2 (1441-112)


Thursday August 22, 13:35-14:05 CEST, Auditorium 2 (1441-112)

JeeLoo Liu, California State University, USA

JeeLoo Liu, PhD, is Professor of Philosophy at California State University, Fullerton. She was named a 2019 Carnegie Fellow for her research project Confucian Robotic Ethics. She has authored Neo-Confucianism: Metaphysics, Mind, and Morality (Wiley-Blackwell, 2017) and An Introduction to Chinese Philosophy: From Ancient Philosophy to Chinese Buddhism (Blackwell, 2006). She also co-edited Consciousness and the Self (Cambridge University Press, 2012) and Nothingness in Asian Philosophy (Routledge, 2014). Her primary research interests include philosophy of mind, metaphysics, Chinese metaphysics, Confucian moral psychology, Neo-Confucianism, Confucian moral sentimentalism, and, more recently, Confucian robot ethics.

Could Social Robots without Phenomenology Be Morally Competent to Handle Moral Dilemmas?

On the basis of Ned Block’s distinction between cognitive accessibility and phenomenology (previously known as access consciousness and phenomenal consciousness), this paper argues that social robots equipped with mere cognitive accessibility but no phenomenology could nonetheless be morally competent to engage in moral deliberation and decision-making in scenarios involving moral dilemmas. Inspired by Bertram F. Malle’s advocacy of moral competence, this paper aims to establish the moral competence of social robots without assuming that they have achieved the status of moral agents. Drawing on a survey conducted by the author using the human-in-the-loop methodology, the paper presents sample scenarios involving ethical dilemmas in assisted suicide, truth-telling, rescue operations, and law enforcement intervention, and argues that social robots with suitably constructed cognitive access will have the resources (the ability to cognitively access and evaluate the relevant information and context) to handle these dilemmas in alignment with human values. What is required for social robots to attain moral competence is not the ability to feel, to empathize, or to know what it is like to be them, but rather a cognitive architecture for reasoning and information processing, aided by an appropriate moral framework. The moral framework this paper employs is Confucian virtue ethics.
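As a purely illustrative sketch (not drawn from the paper), the claim that moral competence needs only cognitive access plus a moral framework can be pictured as a deliberation loop in which the robot scores candidate actions against explicitly represented virtue annotations; nothing in the loop requires felt experience. The virtues chosen, the weights, and the scenario encoding below are all hypothetical placeholders.

```python
# Illustrative sketch only: a phenomenology-free moral-competence loop.
# The virtues, weights, and scenario encoding are hypothetical placeholders,
# not taken from Liu's paper.

# Confucian virtues used as scoring dimensions (hypothetical selection).
VIRTUES = ("ren_benevolence", "yi_rightness", "li_propriety", "zhi_wisdom")

def score(action: dict) -> float:
    """Aggregate an action's cognitively accessible virtue annotations.

    The robot only needs *access* to these evaluations (Block's cognitive
    accessibility); it never needs to feel anything about them.
    """
    return sum(action[v] for v in VIRTUES)

def deliberate(actions: list[dict]) -> dict:
    """Pick the action best aligned with the virtue framework."""
    return max(actions, key=score)

# A toy truth-telling dilemma: comfort a patient with a white lie,
# or disclose a hard prognosis honestly and gently.
options = [
    {"name": "white_lie", "ren_benevolence": 0.8, "yi_rightness": 0.2,
     "li_propriety": 0.6, "zhi_wisdom": 0.3},
    {"name": "gentle_truth", "ren_benevolence": 0.6, "yi_rightness": 0.9,
     "li_propriety": 0.7, "zhi_wisdom": 0.8},
]
print(deliberate(options)["name"])  # -> "gentle_truth"
```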


Thursday August 22, 14:10-14:40 CEST, Auditorium 2 (1441-112)

Meriem Beghili, Sorbonne University, France

Meriem Beghili is a second-year PhD student. Her thesis focuses on the ethical issues raised by the use of social robots in healthcare. This research draws both on theoretical work in the philosophical literature and on the practical realities of how caregivers use robotics.

Anouk Barberousse, Sorbonne University, France

Prof. Anouk Barberousse is a philosopher of science, mainly working on modeling, scientific theorizing, and the use of data. She has been involved in multiple multidisciplinary projects and in the co-supervision of PhD students and post-docs.

"Expert reports by large multidisciplinary groups: the case of the International Panel on Climate Change", with Isabelle Drouet, Daniel Andler and Julie Jebeile, Synthese, 2021, https://doi.org/10.1007/s11229-021-03430-y

"Model spread and progress in climate modelling", with Julie Jebeile, European Journal for Philosophy of Science 11(3): 1–19, 2021

Mohamed Chetouani, Sorbonne University, France

Mohamed Chetouani is Professor of Signal Processing and Machine Learning for Human-Machine Interaction at Sorbonne University. He is the Deputy Director of the Institute for Intelligent Systems and Robotics (CNRS). He conducts research in social signal processing, social robotics, interactive machine learning, and the ethics of interactive systems.

Ethical Perspectives on Natural Autonomy and Artificial Autonomy

Autonomy is the ability to act, decide, and govern oneself independently. From the individual sphere to social dynamics, autonomy appears as a common thread that guides our choices. Traditionally confined to human beings, the concept has expanded to include artificial systems. The same terminology is therefore used to designate two different types of autonomy. How can we define and differentiate between them? Conversely, what brings them together and justifies the use of the same term? In Section 1, we delve into the essence of autonomy and its traditional philosophical definitions, as well as its applications in bioethics and in artificial systems. In Section 2, we define more precisely what an "autonomous system" is and give examples. Finally, in Section 3, we compare autonomy as applied to humans with autonomy as applied to artificial systems, in order to show the differences and similarities between the two.


Thursday August 22, 14:45-15:15 CEST, Auditorium 2 (1441-112)

Jan Henrik Wasserziehr, London School of Economics and Political Science, United Kingdom

Jan is a political theorist at the London School of Economics. His PhD thesis, supervised by Katrin Flikschuh and Lea Ypi, explores the politico-philosophical implications of contemporary technological change. He argues that both our conceptions of values and our self-conception as rational beings are in flux due to digitization and AI, destabilizing specifically modernist conceptions of freedom, rationality, and the person. Before joining LSE, Jan worked for several years in a senior role at a political tech startup which specializes in data-driven political campaigning.

The Moral Ambiguity of the AI Consciousness Debate

There is now increasing debate about whether artificial systems could soon become conscious. Computational functionalists believe that this is possible. Sceptics argue that consciousness in artificial systems is unlikely because those systems lack the right biological substrate. The ethical stakes of the AI consciousness question are often framed as follows: if an artificial system were to become conscious, could we be responsible for preventable suffering? What moral status should such a system have, and should it be endowed with rights? I suggest that this framing of the ethical question is most likely wrong. Even if AI systems became conscious, it is unclear whether their consciousness would be relevantly similar to ours. It may well be that such systems’ experience is entirely devoid of pleasure and pain. Thus, consciousness in and of itself is ethically irrelevant. Given the structural epistemic uncertainty about possible AI consciousness, AI ethicists are well advised to de-emphasize consciousness when evaluating the prospective moral status and agency of artificial systems, and instead to focus on negative valence, for which there is far less reason to assume that it could occur in non-biotic, artificial systems.


Thursday August 22, 15:30-16:00 CEST, Auditorium 2 (1441-112)

Benjamin Gaskin, The University of Sydney, Australia

Benjamin Gaskin focuses foremost on the philosophy of mind, primarily from an ontogenetic and phylogenetic perspective. This viewpoint is both fruitfully applicable to questions in artificial intelligence and informed by work on these systems. He is particularly interested in the nature of meaning and reasoning, as well as the interplay between physicality and ideality in these processes. His current major project is a comparative review of the evolution of nervous systems and of architectures in artificial intelligence: artificial neural networks, spiking neural networks, and neural organoids.

Symbol Grounding in the Age of LLMs: The Role of Morphological Computation in Multi-modal Language Learning

This paper considers symbol grounding in its practical and theoretical aspects. Taking up the theoretical perspective, we begin by considering the relative inefficiency of large language models in acquiring language. A framework is introduced, based on the concept of morphological computation and formalised with reference to conditional Kolmogorov complexity: the form of embodied experience scaffolds human language acquisition. This argument is extended to consider the symbol grounding problem, with particular reference to the origin of language in both the individual and the historical sense. It is argued that, while humans also make use of statistical learning, the process of symbol grounding via morphological computation is essential at the origins of language and during early development. It provides a minimal ontology in terms of objects, containers, processes, etc.—basic features which language models must instead brute-force through statistics. The paper closes by reconsidering the symbol grounding problem in light of recent advances, particularly the promise of multi-modal models and robotics, and ultimately concludes that the status of the symbol grounding problem depends upon our aims in the pursuit of artificial intelligence.
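The conditional Kolmogorov complexity claim admits a compact gloss. The following is a hedged reconstruction, using symbols introduced here rather than the paper's own notation: L stands for the target linguistic competence, B for the learner's embodied morphology.

```latex
% Hedged gloss, not the paper's formalism. K(x) denotes Kolmogorov
% complexity; K(x | y) denotes the complexity of x given y as input.
\[
  K(L \mid B) \;\ll\; K(L)
\]
% Reading: given the structure computed "for free" by the body (objects,
% containers, processes), the residual description length of language is
% far smaller than when it must be induced from text statistics alone.
```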


Thursday August 22, 16:05-16:35 CEST, Auditorium 2 (1441-112)

Selmer Bringsjord, Rensselaer Polytechnic Institute, USA

Selmer Bringsjord specializes in the logico-mathematical and philosophical foundations of artificial intelligence (AI) and cognitive science (CogSci), in collaboratively building AI systems/cognitive robots on the basis (primarily) of computational logic, and in the logic-based and theorem-guided modeling and simulation of rational, human-level-and-above cognition. 

John Slowik, Rensselaer Polytechnic Institute, USA

John Slowik is a fourth-year PhD student working to develop the hardware and software of the RAIR Lab’s cognitive robot PERI.2. This effort is motivated by an interest in the intersection of formal reasoning, sub-symbolic AI, cognitive architectures, and their deployment to real-world tasks through robotics.

James Oswald, Rensselaer Polytechnic Institute, USA

James Oswald is a third-year PhD student at RPI working at the intersection of logic and AI. His experience includes the integration of automated planning with large language models at IBM, and the enhancement of cognitive architectures with automated reasoners.

Michael Giancola, Rensselaer Polytechnic Institute, USA

Michael Giancola earned his PhD in Computer Science and his MS in Cognitive Science at Rensselaer Polytechnic Institute. His dissertation formalized and implemented an AI agent framework for reasoning about uncertain beliefs within a logic for Theory-of-Mind reasoning. His current work and research interests include formal representations (e.g., grammars, logics), natural language reasoning, and large language models.

Paul Bello, US Naval Research Laboratory, USA

Paul Bello is the director of the Interactive Cognitive Systems Section at the U.S. Naval Research Laboratory. He is the co-developer of the ARCADIA cognitive framework for building attention-centric integrated cognitive systems. His focus is on the computational foundations of agency grounded in the control of attention, with special emphasis on moral agency and responsibility.

Robot Cognition That is Simultaneously Social, Multi-Modal, Hypothetico-Causal, and Attention-Guided Solves the Symbol Grounding Problem

The so-called symbol-grounding problem (SGP) has long plagued cognitive robotics. If Rob, a humanoid household robot, is asked to remove and discard the faded rose from among the dozen in the vase, and accedes, does Rob grasp the formulae/data he processed to get the job done? Does he, for instance, really understand the formulae inside him that logicize “There’s exactly one faded rose in the vase”? Some (e.g., Searle, Harnad, Bringsjord) have presented and pressed a negative answer, and have held that engineering a robot for whom the answer is ‘Yes’ is, or at least may well be, an insoluble problem. This negativity increases if Rob must understand that giving a faded rose to someone as a sign of love might not be socially adept. Bringsjord has in particular argued that a recent, detailed proposal for cracking SGP (from Taddeo & Floridi) fails.

We change the landscape by bringing to bear, in a cognitive robot, an unprecedented, intertwined quartet of capacities that make all the difference: namely, (i) social planning; (ii) multi-modal perception; (iii) pre-meditated attention to guide such perception; and (iv) automated defeasible reasoning about causation. In other words, a genuinely social robot that senses in varied ways under the guidance of how it directs its attention, and that adjudicates among competing arguments for what it perceives, solves SGP, or at least a version thereof. An exemplar of such a robot is our PERI.2, which we demonstrate in an environment called ‘Logic-Forms,’ intelligent navigation of which requires social reasoning.
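As a purely illustrative gloss (not the PERI.2 codebase or its API), the quartet can be read as a perception-deliberation loop in which attention selects what is sensed, defeasible reasoning adjudicates among competing perceptual arguments, and social planning acts on the winning belief. Every function below is a hypothetical stand-in.

```python
# Illustrative control loop for the four intertwined capacities.
# All functions are hypothetical stand-ins, not PERI.2's actual API.

def attend(goal):
    """(iii) Pre-meditated attention: pick what to look at, given the goal."""
    return {"target": "vase", "modalities": ["vision", "touch"]}

def perceive(focus):
    """(ii) Multi-modal perception restricted to the attended target.

    Returns competing perceptual arguments with defeasible strengths.
    """
    return [("faded(rose_7)", 0.9), ("fresh(rose_7)", 0.2)]

def adjudicate(arguments):
    """(iv) Defeasible reasoning: the strongest undefeated argument wins."""
    return max(arguments, key=lambda a: a[1])[0]

def plan_socially(belief, goal):
    """(i) Social planning: act on the belief while respecting social norms."""
    if belief.startswith("faded"):
        return ["grasp(rose_7)", "discard(rose_7)"]  # not: give it as a gift!
    return []

goal = "remove the faded rose"
for step in plan_socially(adjudicate(perceive(attend(goal))), goal):
    print(step)
```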


Thursday August 22, 16:40-17:10 CEST, Auditorium 2 (1441-112)

Sara Incao, Italian Institute of Technology Genoa, Italy

Sara Incao is a postdoctoral researcher at the Italian Institute of Technology (IIT). She received her PhD in Cognitive Robotics and Human-Robot Interaction from the University of Genoa (Italy) and IIT, and her MA in Philosophy from the Catholic University of Milan (Italy). During her PhD, she spent two semesters as a visiting researcher at The University of Memphis, TN (USA), working on the concept of self and embodiment in humanoid robots. Her primary interests lie at the intersection of the phenomenological approach in philosophy and artificial autonomous systems. Additionally, she is dedicated to studying the French tradition of phenomenological aesthetics, with particular emphasis on the work of Mikel Dufrenne. She also has interests in exploring the connections between phenomenological aesthetics and digital art.

Carlo Mazzola, Italian Institute of Technology Genoa, Italy

Carlo Mazzola is a postdoc at the Italian Institute of Technology in the CONTACT unit and research-excellence coordinator of the TERAIS project. His research is in cognitive robotics and human-robot interaction, with a focus on architectures for multi-party interaction, human-activity recognition, explainability and transparency in humanoid robots, user-centered interaction design, and shared perception between humans and artificial systems. He obtained his PhD in Bioengineering and Robotics at the University of Genoa, coming from a previous background in philosophy.

Giulia Belgiovine, Italian Institute of Technology Genoa, Italy

Giulia Belgiovine (she/her) is a postdoctoral researcher at the COgNiTive Architecture for Collaborative Technologies (CONTACT) unit of the Italian Institute of Technology (IIT), Genoa, Italy. She obtained her PhD in Bioengineering and Robotics at IIT and spent a visiting research period at KTH in Stockholm, Sweden. She received her Master’s degree in Biomedical Engineering at Università Politecnica delle Marche (Ancona, Italy). Her research investigates how to develop cognitive architectures for social and collaborative robots in order to promote better human-robot interactions (HRI) and to foster robots’ autonomous learning and adaptive behavior, with a particular focus on multiparty interactions. Her research interests also include lifelong learning and personal and assistive robotics. She is also actively involved in outreach and educational events to bring robotics and AI topics closer to a young and broad audience.

Alessandra Sciutti, Italian Institute of Technology Genoa, Italy

Alessandra Sciutti received her PhD in Humanoid Technologies from the University of Genoa (Italy) in 2010. After a postdoc at the Italian Institute of Technology (IIT) and two research periods in the USA and Japan, she became the scientific lead of the Cognitive Robotics and Interaction Laboratory of the RBCS Department at IIT. After serving as Assistant Professor in Bioengineering at DIBRIS, University of Genoa, she is now a Tenure-Track Researcher at the Italian Institute of Technology and head of the COgNiTive Architecture for Collaborative Technologies (CONTACT) unit. In 2018 she was awarded the ERC Starting Grant wHiSPER, focused on the investigation of joint perception between humans and robots. She has published more than 60 papers and abstracts and participated in the coordination of the CODEFROR European IRSES project. She is an Associate Editor of Robotics and Autonomous Systems, Cognitive Systems Research, and the International Journal of Humanoid Robotics, and she has served as a member of the Program Committee for the International Conference on Human-Agent Interaction and the IEEE International Conference on Development and Learning and Epigenetic Robotics. The scientific aim of her research is to investigate the sensory and motor mechanisms underlying mutual understanding in human-human and human-robot interaction.

A Roadmap for Embodied and Social Grounding in LLMs

The fusion of Large Language Models (LLMs) and robotic systems has led to a transformative paradigm in robotics, offering unparalleled capabilities not only in the communication domain but also in skills such as multimodal input handling, high-level reasoning, and plan generation. Grounding LLMs’ knowledge in the empirical world has been considered a crucial pathway to exploiting the efficiency of LLMs in robotics. Nevertheless, connecting LLMs’ representations to the external world through multimodal approaches or through robots’ bodies is not enough to let them understand the meaning of the language they manipulate. Taking inspiration from humans, this work draws attention to three elements necessary for an agent to grasp and experience the world. The roadmap for LLM grounding is envisaged as an active bodily system serving as the reference point for experiencing the environment, a temporally structured experience enabling a coherent, self-related interaction with the external world, and social skills for acquiring a commonly grounded, shared experience.
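Read as an engineering checklist (a hedged paraphrase, not the authors' proposal), the three elements suggest that grounding is a property of the agent wrapping the LLM rather than of the LLM alone. The schematic below is hypothetical; none of its interfaces come from the paper.

```python
# Hypothetical schematic of the three grounding elements named in the
# abstract; the class and its fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class GroundedAgent:
    llm: object                # the language model being grounded
    body: object               # (1) active bodily system as reference point
    episodic_memory: list = field(default_factory=list)  # (2) temporal structure
    partners: list = field(default_factory=list)         # (3) social common ground

    def step(self, observation):
        # Interpret the world from the body's own perspective...
        percept = {"from_body": True, "obs": observation}
        # ...situate it in a coherent, self-related history...
        self.episodic_memory.append(percept)
        # ...and share it with social partners to build common ground.
        return {"percept": percept, "shared_with": list(self.partners)}

agent = GroundedAgent(llm=None, body=None, partners=["caregiver"])
print(agent.step("a red cup on the table"))
```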