Limitations

 SESSION 18 | Friday, August 23, 10:55 – 12:00 | Session Room 1 (1441-110)


Friday, August 23, 10:55-11:25 CEST, Session Room 1 (1441-110)

Nathaniel Gan, National University of Singapore, Singapore

Nathaniel Gan is a postdoctoral research fellow at the National University of Singapore. He completed his B.Sc. (Mathematics) and B.A. (Philosophy) at the National University of Singapore in 2017 and his Ph.D. (Philosophy) at the University of Sydney in 2020. Nathaniel's main areas of research include the philosophy of AI, philosophical logic, and the philosophy of mathematics.

Learning the logic of simulation

Some social robots use simulations to make sense of their environment, and it has been claimed that the use of simulations allows these robots to adapt better to novel scenarios. This paper formulates a logic to model simulations run by artificial intelligence (AI) systems, with the goal of assessing the prospective capabilities of simulation-based robots. The semantics for sentences about simulations are modelled by a possible-worlds framework with a variably-strict accessibility relation. It is argued that worlds in simulation logic are best represented by paracomplete sets of sentences with limited closure under classical entailment. The accessibility relation is intended to delineate worlds in which the explicit stipulations of a simulation hold, as well as the relevant features of our world—possible constraints are discussed to align the accessibility relation with its intended interpretation. The notion of relevance is observed to be crucial for simulation logic, and represents a present limitation for the generalisation of simulation-based robots. Possible ways of overcoming this limitation are suggested, as well as possible avenues for philosophical investigation.
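
As a rough illustration of the kind of clause at issue (a minimal sketch using a Lewis-style selection function; the operator [S], the selection function f, and the sentence φ below are expository assumptions, not notation taken from the paper), a variably-strict truth condition can be written as:

\[
  w \Vdash [S]\,\varphi \quad \text{iff} \quad w' \Vdash \varphi \ \text{ for every } w' \in f(S, w)
\]

where f(S, w) selects the worlds in which the explicit stipulations of simulation S hold together with the features of w deemed relevant. Because f varies with S, the induced accessibility relation is variably strict rather than fixed, which is why a workable account of relevance carries so much weight in the proposal.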


Friday, August 23, 11:30-12:00 CEST, Session Room 1 (1441-110)

Anders Lisdorf, University of Copenhagen, Denmark

Dr. Lisdorf's main research areas are in emerging technologies, particularly Artificial Intelligence: the limitations of AI, how to make practical use of AI in the real world, and the nature and possibility of Artificial General Intelligence. He has written three books (on Smart Cities Technology, Cloud Computing, and Cryptocurrencies) as well as articles about the cognitive basis of intentionality and religion.

The Hard Problem of AI and the Possibility of Social Robots

Some philosophers believe that creating an Artificial General Intelligence (AGI) is inevitable, while others argue that it is impossible. Instead of accepting or dismissing the possibility of AGI, it would be more helpful to focus on understanding what it would take to create an intelligence that resembles a human, including our social intelligence. While Artificial Narrow Intelligence (ANI) addresses a fixed problem-solving domain of externally defined problems, an AGI would have to be able to identify problems by itself. This presents a challenge similar to the one identified by David Chalmers in the philosophy of mind, known as "The Hard Problem of Consciousness." In AI, the hard problem is: where do problems come from? To answer this question, we need to understand the nature of a problem. There are two types of problems: First-order problems arise ipso facto, dynamically, out of an entity's interaction with the environment, while second-order problems arise alio facto from another entity's problems. Current artificial intelligence systems solve second-order problems given to them by their human designers. A social robot with AGI would have to be able to find and solve its own first-order problems while engaging with humans as part of its environment. A thought experiment illustrates how that might occur.