Abstract
In this paper, we show how relative location prompting can improve how Large Language Models (LLMs) such as ChatGPT perform on Spatial Reasoning (SR) tasks. LLMs likely struggle with SR tasks because they are designed for language-based tasks, whereas SR tasks are more visual (Lee, 2023). Drawing on the demonstrated success of Self-Ask (Press et al., 2023) and Chain-of-Thought (Wei et al., 2022), we hypothesized that similar prompting techniques would increase the success rate of an LLM agent on our SR task. Based on these two observations, our solution was to simplify the multi-step, interaction-based SR task by prompting the AI agent with its location relative to the target after each step taken. We built a 2D 5x5 grid-world environment and compared three conditions: a baseline LLM agent, an LLM agent given relative location prompting, and a random agent. We collected and analyzed data from 300 trials in total (100 trials per condition) and concluded that relative location prompting does improve the success rate of LLMs on SR tasks. This suggests that by converting SR tasks into text, and by breaking large tasks into smaller ones, AI can solve SR problems more effectively. Future studies should investigate other types of SR tasks, such as folding scenarios, and compare different prompting methods to determine which is most effective.
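The relative location prompt described above can be illustrated with a minimal sketch. The coordinate convention, direction names, and prompt wording below are illustrative assumptions, not the paper's exact implementation; they show how a grid-world state can be converted into a textual hint for the agent after each step.

```python
# Minimal sketch of relative location prompting in a 5x5 grid world.
# Assumptions (not from the paper): (x, y) coordinates with x increasing
# east and y increasing north, and this particular prompt wording.

def relative_location_prompt(agent, target):
    """Describe the target's position relative to the agent as text."""
    dx = target[0] - agent[0]
    dy = target[1] - agent[1]
    parts = []
    if dy > 0:
        parts.append(f"{dy} step(s) north")
    elif dy < 0:
        parts.append(f"{-dy} step(s) south")
    if dx > 0:
        parts.append(f"{dx} step(s) east")
    elif dx < 0:
        parts.append(f"{-dx} step(s) west")
    if not parts:
        return "You have reached the target."
    # This hint would be appended to the LLM's context after every step.
    return "The target is " + " and ".join(parts) + " of you."
```

For example, with the agent at (0, 0) and the target at (2, 3), the hint reads "The target is 3 step(s) north and 2 step(s) east of you.", reducing the multi-step navigation task to a sequence of locally guided moves.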