Simplifying Robot Task Planning with Large Language Models

Massachusetts Institute of Technology


We investigate using pre-trained LLMs to reason over the complete description of a planning problem and generate a simplified problem that excludes irrelevant objects, substantially reducing the search burden and runtime of formal planners.

Video

Abstract

While traditional planners can quickly find plans in simple environments, task planning is slow in worlds with many objects. However, for a specific task, many of those objects are irrelevant and distract the planner during search. By removing irrelevant objects from the environment description, planning can be made more efficient. We investigate using pre-trained large language models (LLMs) to reason over the complete description of a planning problem and generate a simplified problem that excludes irrelevant objects. We test several different prompting techniques on multiple LLMs applied to problems consisting of hundreds of objects for four task planning domains.
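The core idea, pruning a planning problem down to the objects an LLM judges relevant, can be sketched in a few lines. This is only an illustration of the simplification step, not the paper's implementation: the function names and the fact/goal representation (predicates as tuples) are hypothetical, and in practice the `relevant` set would come from the LLM's response. Goal objects are always retained so the simplified problem stays solvable.

```python
def simplify_problem(objects, init_facts, goal_facts, relevant):
    """Build a simplified planning problem (illustrative sketch).

    objects:    list of object names in the full problem
    init_facts: predicates as tuples, e.g. ("on", "b1", "table1")
    goal_facts: goal predicates in the same tuple form
    relevant:   object names judged relevant (e.g., by an LLM)
    """
    # Always keep objects mentioned in the goal, plus the relevant set.
    keep = set(relevant)
    for fact in goal_facts:
        keep.update(fact[1:])  # fact[0] is the predicate name

    kept_objects = [o for o in objects if o in keep]
    # Drop any initial-state fact that refers to a removed object.
    kept_init = [f for f in init_facts if all(arg in keep for arg in f[1:])]
    return kept_objects, kept_init, goal_facts


# Example: a blocks-world-style problem with one distractor block, b3.
objects = ["b1", "b2", "b3", "table1"]
init = [("on", "b1", "table1"), ("on", "b2", "table1"), ("on", "b3", "table1")]
goal = [("on", "b1", "b2")]

objs, facts, _ = simplify_problem(objects, init, goal, relevant={"table1"})
# b3 and every fact mentioning it are pruned before the planner runs.
```

The simplified problem is then handed to a standard planner; with hundreds of objects, shrinking the object set this way is what yields the runtime reductions shown in the plots below.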


The average planning runtime over all problems in each domain. The black dashed line indicates the planning runtime when planning with all objects.


The number of objects in each simplified problem for each domain. The baseline reports the number of objects in the full problems to help illustrate how many fewer objects are included in the simplified problems.

Related Links

This work is part of a broader research thread on language-instructed task and motion planning, which transforms natural language instructions into robot trajectories.

Other work on LLM-based robot task and motion planning from our lab includes:

BibTeX

@article{chen2023autotamp,
  title={AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers},
  author={Chen, Yongchao and Arkin, Jacob and Zhang, Yang and Roy, Nicholas and Fan, Chuchu},
  journal={arXiv preprint arXiv:2306.06531},
  year={2023}
}