Large Language Models Are Neurosymbolic Reasoners



Fang, Meng ORCID: 0000-0001-6745-286X, Deng, Shilong, Zhang, Yudi, Shi, Zijing, Chen, Ling, Pechenizkiy, Mykola and Wang, Jun
(2024) Large Language Models Are Neurosymbolic Reasoners. In: The 38th Annual AAAI Conference on Artificial Intelligence, 2024-02-20 - 2024-02-27, Vancouver, Canada.

AAAI24_accepted version-no-branding.pdf - Author Accepted Manuscript


Abstract

A wide range of real-world applications are characterized by their symbolic nature, necessitating a strong capability for symbolic reasoning. This paper investigates the potential of Large Language Models (LLMs) as symbolic reasoners. We focus on text-based games, which are significant benchmarks for agents with natural language capabilities, particularly in symbolic tasks such as math, map reading, sorting, and applying common sense in text-based worlds. To facilitate these agents, we propose an LLM agent designed to tackle symbolic challenges and achieve in-game objectives. We begin by initializing the LLM agent and informing it of its role. The agent then receives observations and a set of valid actions from the text-based game, along with a specific symbolic module. With these inputs, the LLM agent chooses an action and interacts with the game environment. Our experimental results demonstrate that our method significantly enhances the capability of LLMs as automated agents for symbolic reasoning: our LLM agent is effective in text-based games involving symbolic tasks, achieving an average performance of 88% across all tasks.
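The interaction loop described in the abstract (observation + valid actions + symbolic module in, chosen action out) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: `MockLLM`, `calculator`, and `agent_step` are hypothetical names, and the mock model stands in for a real LLM call.

```python
def calculator(expression: str) -> str:
    """Hypothetical symbolic module: evaluates an arithmetic expression string."""
    try:
        # Restrict builtins so only plain arithmetic is evaluated.
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception:
        return "invalid expression"


class MockLLM:
    """Stand-in for a real LLM. It reads the symbolic module's answer from the
    prompt and prefers the valid action that contains it, mimicking an LLM
    that consults the module before acting."""

    def choose_action(self, prompt: str, valid_actions: list[str]) -> str:
        answer = ""
        for line in prompt.splitlines():
            if line.startswith("Symbolic module answer:"):
                answer = line.split(":", 1)[1].strip()
        for action in valid_actions:
            if answer and answer in action:
                return action
        return valid_actions[0]  # fall back to some valid action


def agent_step(llm, observation: str, valid_actions: list[str], symbolic_module) -> str:
    """One interaction step: query the symbolic module, assemble the prompt
    (role, observation, module answer, valid actions), and let the LLM pick
    one of the game's valid actions."""
    module_answer = symbolic_module(observation)
    prompt = (
        "You are an agent playing a text-based game.\n"
        f"Observation: {observation}\n"
        f"Symbolic module answer: {module_answer}\n"
        "Choose one of the valid actions."
    )
    return llm.choose_action(prompt, valid_actions)


# Example: a math task where the game presents the expression 7 * 6.
llm = MockLLM()
action = agent_step(llm, "7 * 6", ["answer 42", "answer 13"], calculator)
print(action)  # -> answer 42
```

In the paper's setting the symbolic module would differ per task (math, map reading, sorting), while the surrounding loop of observation, module consultation, and action selection stays the same.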

Item Type: Conference or Workshop Item (Unspecified)
Uncontrolled Keywords: Clinical Research
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 16 Jan 2024 09:39
Last Modified: 14 Apr 2024 22:22
DOI: 10.1609/aaai.v38i16.29754
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3177855