While traditional planners can quickly find plans in simple environments, task planning becomes slow in worlds with many objects. However, for a specific task, many of those objects are irrelevant and only distract the planner during search. By removing irrelevant objects from the environment description, planning can be made more efficient. We investigate using pre-trained large language models (LLMs) to reason over the complete description of a planning problem and generate a simplified problem that excludes the irrelevant objects. We evaluate several prompting techniques across multiple LLMs on problems containing hundreds of objects, drawn from four task planning domains.
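The pruning step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the problem representation (objects plus ground facts as tuples) and the `relevant` set are hypothetical stand-ins, and in the actual approach the relevant-object set would come from prompting an LLM rather than being supplied directly.

```python
def prune_problem(objects, init_facts, goal_facts, relevant):
    """Return a simplified problem keeping only objects judged relevant.

    objects:    list of object names in the full problem
    init_facts: ground facts as tuples, e.g. ("on", "blockA", "blockB")
    goal_facts: ground facts describing the goal
    relevant:   set of object names to keep (in the paper's setting,
                this would be produced by an LLM reading the problem)
    """
    kept_objects = [o for o in objects if o in relevant]

    def mentions_only_kept(fact):
        # fact[0] is the predicate name; the rest are object arguments
        return all(arg in relevant for arg in fact[1:])

    # Drop initial-state facts that mention any removed object;
    # goal facts are kept as-is (the LLM should never prune goal objects).
    kept_init = [f for f in init_facts if mentions_only_kept(f)]
    return kept_objects, kept_init, goal_facts


# Toy example: object "c" is irrelevant to the goal, so it and the
# facts mentioning it are removed from the planner's input.
objects = ["a", "b", "c"]
init = [("on", "a", "b"), ("clear", "c")]
goal = [("on", "a", "b")]
print(prune_problem(objects, init, goal, relevant={"a", "b"}))
```

The planner then runs on the reduced problem, which shrinks the search space whenever the LLM correctly identifies the task-relevant objects.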
This work is part of a broader research thread on LLM-based robot task and motion planning. Other work from our lab in this area includes:
@article{chen2023autotamp,
title={AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers},
author={Chen, Yongchao and Arkin, Jacob and Zhang, Yang and Roy, Nicholas and Fan, Chuchu},
journal={arXiv preprint arXiv:2306.06531},
year={2023}
}