CFP: Conference Topics

Call for Short Papers, Workshops, Panels, Posters, and Artworks: Topics of RP2024

We invite submissions for research contributions with a clear topical focus on one or more of the conference topics listed below. Submissions should be well informed by empirical results of HRI and social robotics research, and written for an interdisciplinary research discussion on the listed conference topics, from the perspectives of the following disciplines: Philosophy (ontology, philosophy of mind and cognitive science, epistemology and knowledge representation, political philosophy, and philosophy of technology, also in culturally comparative perspective), Anthropology, Psychology, Political Science, Law, Economics, Sociology, Cognitive Science, Communication Studies, Linguistics, Interaction Studies, HRI, Robotics, Computer Science, Engineering, and Art.

Please note the following restrictions:

  • Robophilosophy is defined as "philosophy of, for, and by social robotics", so submissions should focus on social robotics or social robots, or on multimodal AI developed for social robots. A social robot is, in our characterization, a robot designed to move in the physical and symbolic space of human social interaction; in the context of this conference, however, research on two-dimensional screen avatars may also be accepted.
  • We can only accept submissions that present new research results and have not been published or submitted elsewhere.
  • All and only papers presented at the conference are published in the Conference Proceedings. However, papers that have at most 30% overlap with the (rather short) papers published in the RP Proceedings may be submitted elsewhere.

Conference Topics 

We invite submissions on the following topics (papers may address topics at any numerical level, i.e., more general or more specific topics):

  1. Prospects:
    1. Prospects for advancing global development goals: AI has been hailed as a new way to achieve the UN development goals. What is the specific role of AI-driven social robots in advancing these goals?
    2. Prospects for advancing specific application areas: Given the expected dramatic increase in the practical competence of AI-driven social robots, in which domains of application will such robots be particularly useful?
    3. Prospects for advancing research in theoretical philosophy: 
      1. Embodied multimodal AIs are said to solve the symbol-grounding problem and the frame problem: Are these claims correct? If so, what are the implications for philosophical accounts of mind and language, and for philosophy of science and technology?
      2. With the advent of AI-driven social robots emulating human behavior and abilities, how does this impact philosophical discussions in social ontology and philosophical anthropology, concerning sociality and human nature, identity and being?
    4. Prospects for advancing research in practical philosophy:  
      1. The simulatory capacities of AI-driven social robots will enhance our inclinations to attribute to them intentions, emotions, consciousness, and other capacities relevant for moral status. In which way does this technical advance affect robo-ethics, machine ethics, and the debate about “robot rights”?
      2. AI-driven social robots will engender shifts in economic systems and political control structures. How will this influence current assumptions in political philosophy and philosophy of culture?
    5. Prospects for the integration of Humanities and Social Science research into research and development of technology:
      1. Why will progress in AI and robotics make Humanities expertise indispensable?
      2. Precisely where and how do the theoretical implications of AI-driven social robots create new relevance for philosophical (Humanities) expertise?
  2. Risks:
    1. Sociocultural risks:
      1. Increase of social inequality or cultural homogenization: Could AI-driven social robots, manufactured on the background of selective cultural imaginaries, potentially exacerbate existing societal inequalities, such as those involving race, class, and gender? Is there a risk of cultural homogenization?
      2. Loss of existential orientation: As AI-driven robots increasingly can emulate and even surpass human capacities, how might this impact our cultural definitions of humanity and self-image?
      3. Devaluation of human labor: How might the integration of AI-driven social robots in the workplace change our conception of work, productivity, and the value of human labor?
    2. Socio-political risks:
      1. Further loss of political security in democracies: Could AI-driven social robots, if manipulated, pose threats to democratic processes, for instance, through dissemination of misinformation?
      2. Further loss of individual privacy and national security:  How might the use of AI-driven social robots impact individual privacy and national security, especially in contexts of data collection and surveillance?
      3. Loss of political control: How will AI-driven economies affect current political control structures? Will they shift societal power dynamics towards a techno-oligarchy?
    3. Psychological risks:
      1. Emotional dependency: To what extent could AI-driven social robots engender an over-reliance or emotional dependency within human users? What could be the psychological ramifications of such a dependency?
      2. Negative effects of simulated sociality: As AI-driven social robots become more adept at simulating human interactions, how might this change our psychological responses to humans?  How will it affect social cognition? Will it amount to de-skilling in human-human social interactions?  If humans increasingly interact with social agents that have no accompanying phenomenal experiences, how will this loss of authenticity affect human mental health?
      3. Increased risks of subliminal manipulation: By combining the simulatory capacities of LLMs with the strong effects on social cognition exerted by physical social agents, will AI-driven social robots increase the risk for pre-conscious manipulation?
    4. Risk and technological literacy: Can we already assess the prospects of increased technological literacy? Technological literacy is often recommended as a method for reducing the sociocultural impacts of robotics and AI; based on current results, can we project the effects of technological literacy on risk assessment?
  3. Responsible methods:
    1. Evidence-based evaluation of promise: Which of the commonly listed methods for responsible technology development (“co-design”, “value-centered design”, “design for values”, “integrative social robotics”, “responsible innovation”, standards, regulations, ethics codes, quality marks for companies, auditing, education for the technological literacy of individuals) hold the greatest promise?
      1. Is there evidence, e.g. from other areas with urgent needs for policy, for the success of top-down methods by national regulation and auditing?
      2. Is there evidence, e.g. from other areas with urgent needs for policy, for the success of mid-level methods by means of ethics codes and quality marks for companies?
      3. Is there evidence, e.g. from other areas with urgent needs for policy, for the success of bottom-up methods by means of direct involvement of Humanities researchers in the R&D processes of technology development?
    2. Evidence-based evaluation of obstacles: What are the greatest obstacles for implementing methods for responsible robotics and AI?
    3. Evidence-based and principled arguments for the role of Humanities expertise in creating and implementing methods for responsible robotics and AI:
      1. Can, for this purpose, Humanities expertise be replaced by social science research and legislation?
      2. What are the specific elements of Humanities expertise and how do they relate to responsible technology development?