Workshop 12: AI needs a Prefrontal Cortex (and steps towards generating one)

Organizer

Susanna Schellenberg, Rutgers University, USA

Susanna Schellenberg is Distinguished Professor of Philosophy and Cognitive Science at Rutgers University. Currently, she is working on issues at the intersection of AI, neuroscience, and philosophy. She is the recipient of numerous awards, including a Guggenheim Fellowship, a Humboldt Prize, an NEH fellowship, and a Mellon New Directions Fellowship. In a series of papers culminating in her book The Unity of Perception: Content, Consciousness, Evidence (Oxford University Press, 2018), she has developed an integrated theory of perception that is sensitive to evidence from neuroscience, cognitive psychology, and psychophysics. In addition to perception, the topics she has tackled include consciousness, evidence, cognitive capacities, representations, and imagination.

Abstract

There is currently much discussion in AI circles about AI needing a prefrontal cortex, that is, an equivalent of the executive-function and planning center of the human brain. The idea is that if AI had an analog of a prefrontal cortex, many of its existing problems could be solved. Social robots, in particular, require planning, and specifically the ability to update their plans in real time in an ever-changing environment with unpredictable inputs from their human interlocutors. In humans, such reorientation and updating of plans is achieved through engagement of the prefrontal cortex. The proposed workshop explores how such real-time updating of plans can be artificially recreated for social robots.


Speaker

Susanna Schellenberg, Rutgers University, USA

See organizer bio above.

Reflexive representation: a first step towards generating a prefrontal cortex

I argue that a first step towards generating an artificial planning center is “self”-representation or, more generally, reflexive representation. The capacity for reflexive representation is the key component of having a subjective perspective. Indeed, what philosophers call “de se content,” or reflexive representation, is one of the hallmarks of human intelligence and is required for any human social interaction. It is required for relative localization, for distinguishing between self-caused and externally caused content, for real-time updating while executing a plan, and for distinguishing between self-caused and externally caused movement. This talk argues that many of the current problems of AI can be addressed with reflexive representation. With a focus on Facebook’s Ego4D project of teaching AI to have a subjective perspective, I argue that a reflexive representation module is necessary for social robotics to reach the next level of development.
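To make the notion concrete, the following is a minimal, purely illustrative Python sketch of one function such a reflexive representation module might serve: an agent uses a forward model of its own actions (an efference copy) to tag observed changes as self-caused or externally caused, and re-plans only when a change is externally caused. All names here (ReflexiveAgent, replan, the toy one-dimensional state) are hypothetical and are not drawn from the talk itself.

```python
# Hypothetical sketch: an agent tags state changes as self-caused or
# externally caused and re-plans on external change. Illustrative only;
# not the module proposed in the talk.

from dataclasses import dataclass, field


@dataclass
class ReflexiveAgent:
    position: int = 0
    plan: list = field(default_factory=list)

    def predict(self, action: int) -> int:
        """Forward model: the state the agent expects its own action to cause."""
        return self.position + action

    def step(self, action: int, observed_position: int) -> None:
        expected = self.predict(action)
        if observed_position == expected:
            # Observation matches the efference copy: attribute it to the self.
            self.position = observed_position
        else:
            # Mismatch: attribute the change to the environment and re-plan.
            self.position = observed_position
            self.plan = self.replan(goal=5)

    def replan(self, goal: int) -> list:
        """Rebuild a unit-step plan from the current (self-located) position."""
        step = 1 if goal >= self.position else -1
        return [step] * abs(goal - self.position)


agent = ReflexiveAgent()
agent.plan = agent.replan(goal=5)
agent.step(action=1, observed_position=1)   # self-caused: plan unchanged
agent.step(action=1, observed_position=4)   # externally caused: re-plans
print(agent.position, agent.plan)           # 4 [1]
```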



Speaker

Jacob Russin, Brown University, USA

Jacob Russin is a postdoc at Brown University working with Michael Frank and Ellie Pavlick. Before moving to Brown, he worked at Meta AI and at Microsoft Research (Summer 2020). His work lies at the intersection of computational neuroscience and machine learning, with a particular focus on: 1) compositionality, systematicity, and reasoning in neural networks; 2) neural network models of learning and inference with cognitive maps; 3) models of human reinforcement learning, with a focus on temporal abstraction; and 4) biologically plausible models of predictive learning and human vision.

Emergent cognitive flexibility in large language models

Deep neural networks have revolutionized artificial intelligence but seem to fail in domains requiring specific capacities such as reasoning, planning, or inferring compositional rules, all functions that have been associated with the prefrontal cortex (PFC) in humans. However, recent large language models (LLMs) have in some cases demonstrated impressive advances in these domains. In particular, these models exhibit the ability to learn from examples given in context, mirroring an important aspect of human cognitive flexibility. Do these emergent “in-context learning” capabilities replicate the PFC-like functions that seemed to be lacking in previous deep neural networks? In this talk, I will argue that although there are important differences between the kinds of cognitive flexibility manifested by humans and LLMs, there are also unexpected similarities, which can provide unique insights into each.
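As a concrete illustration of the in-context learning capability at issue, here is a minimal Python sketch: a few-shot prompt asks a model to infer a compositional rule (“twice” means repeat the action) from demonstrations alone, with no weight updates, in the style of compositional generalization benchmarks such as SCAN. The query_llm function is a hypothetical stand-in for any LLM completion API.

```python
# Minimal sketch of in-context learning: the model must infer the rule
# "twice = repeat the action" from examples supplied in the prompt alone,
# with no gradient updates. Illustrative; not code from the talk.

examples = [
    ("jump", "JUMP"),
    ("walk", "WALK"),
    ("jump twice", "JUMP JUMP"),
]
query = "walk twice"

# Few-shot prompt: demonstrations followed by the unsolved query.
prompt = "\n".join(f"{x} -> {y}" for x, y in examples) + f"\n{query} ->"
print(prompt)

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; a flexible model should complete 'WALK WALK'."""
    raise NotImplementedError  # replace with a real API call

# answer = query_llm(prompt)  # expected: "WALK WALK"
```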