Pre-trained and frozen LLMs can effectively map simple scene re-arrangement instructions to programs over a robot's visuomotor functions through appropriate few-shot example prompting. However, fixed prompts fall short when parsing open-domain natural language and adapting to a user's idiosyncratic procedures that were not known at prompt-engineering time. In this paper, we introduce HELPER, an embodied agent equipped with an external memory of language-program pairs that parses free-form human-robot dialogue into action programs through retrieval-augmented LLM prompting: relevant memories are retrieved based on the current dialogue, instruction, correction, or VLM description, and used as in-context prompt examples for LLM querying. The memory is expanded during deployment with pairs of the user's language and action plans, to assist future inferences and personalize them to the user's language and routines. HELPER sets a new state-of-the-art in the TEACh benchmark in both Execution from Dialog History (EDH) and Trajectory from Dialogue (TfD), with a 1.7x improvement over the previous SOTA for TfD.
A key component of HELPER is its memory of language-program pairs, used to generate tailored prompts for pretrained LLMs based on the current dialogue context. The retrieved examples are added to the LLM prompt, which aids in parsing diverse, user-specific linguistic inputs for planning, re-planning during failures, and interpreting human feedback.
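For concreteness, below is a minimal Python sketch of this retrieval-augmented prompting loop. The hash-based embedding, the `Memory` class, and the prompt format are illustrative assumptions for exposition, not HELPER's actual implementation; in practice a learned text encoder and the paper's prompt templates would take their place.

```python
from dataclasses import dataclass, field

import numpy as np


def embed(text: str) -> np.ndarray:
    """Toy hash-based stand-in for a sentence encoder; swap in a real
    text-embedding model for meaningful similarities."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)


@dataclass
class Memory:
    keys: list = field(default_factory=list)    # embeddings of the language side
    values: list = field(default_factory=list)  # (language, program) pairs

    def add(self, language: str, program: str) -> None:
        self.keys.append(embed(language))
        self.values.append((language, program))

    def retrieve(self, query: str, k: int = 3) -> list:
        """Return the k stored pairs whose language is most similar to the query."""
        q = embed(query)
        sims = np.array([key @ q for key in self.keys])
        top = np.argsort(sims)[::-1][:k]
        return [self.values[i] for i in top]


def build_prompt(memory: Memory, dialogue: str) -> str:
    """Assemble a few-shot prompt from retrieved language-program examples."""
    shots = "\n\n".join(
        f"Input: {lang}\nProgram: {prog}" for lang, prog in memory.retrieve(dialogue)
    )
    return f"{shots}\n\nInput: {dialogue}\nProgram:"
```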
Household Task Execution from Messy Dialogue
We set a new state-of-the-art in the TEACh Trajectory from Dialogue (TfD) and Execution from Dialog History (EDH) benchmarks, where the agent is given a messy dialogue segment and is tasked to infer and execute the corresponding sequence of actions from RGB input. HELPER improves TfD task success by 1.7x and goal-condition success by 2.1x over existing work.
Error correction demo
Gathering user feedback can improve a home robot's performance, but frequently requesting feedback during a task can diminish the overall user experience. We therefore enable HELPER to elicit sparse user feedback, asking only after it has completed execution of the program generated from the initial user input. HELPER improves task success by an additional 1.3x when incorporating just two rounds of user feedback.
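A rough sketch of this sparse-feedback loop, reusing `build_prompt` and `Memory` from the sketch above; `plan_with_llm` and `execute_program` are hypothetical stand-ins for the LLM call and the robot's low-level execution, not HELPER's actual API.

```python
def plan_with_llm(prompt: str) -> str:
    """Stand-in for the LLM call mapping a prompt to an action program."""
    return "program_stub()"


def execute_program(program: str) -> None:
    """Stand-in for executing the program with the robot's visuomotor skills."""
    print(f"executing: {program}")


def run_task(memory: Memory, dialogue: str, max_feedback_rounds: int = 2) -> bool:
    # Plan and execute the full program before asking anything of the user.
    program = plan_with_llm(build_prompt(memory, dialogue))
    execute_program(program)
    for _ in range(max_feedback_rounds):
        feedback = input("Done. Any corrections? (empty if satisfied) ").strip()
        if not feedback:
            return True  # user satisfied; no further feedback requested
        # Fold the correction (e.g., "you missed cleaning the pot")
        # into the context and re-plan.
        dialogue = f"{dialogue}\nUser correction: {feedback}"
        program = plan_with_llm(build_prompt(memory, dialogue))
        execute_program(program)
    return False
```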
Demo: Clean all cookware with user feedback (skip to 0:41 for user feedback saying HELPER
missed cleaning the pot)
Demo: Make breakfast with user feedback (skip to 2:54 for user feedback saying HELPER did not
put the tomato and lettuce slices on the plate)
HELPER expands its memory of programs with successful executions of user-specific procedures; it then recalls and adapts them in future interactions with the user, allowing it to resolve user-personalized references.
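Under the same assumed `Memory` interface from the first sketch, this personalization amounts to writing successful pairs back into memory; the routine names and program strings below are made up for illustration.

```python
def on_task_success(memory: Memory, user_language: str, executed_program: str) -> None:
    """After a successful execution, store the user's phrasing with the
    executed program so future requests can retrieve it as an example."""
    memory.add(user_language, executed_program)


memory = Memory()
memory.add("put the mug in the sink", "pick_up('Mug'); place('Sink')")

# Once "make my usual breakfast" succeeds, the stored pair is retrieved as an
# in-context example the next time the user makes the same request.
on_task_success(
    memory,
    "make my usual breakfast",
    "toast('Bread'); brew('Coffee'); place_all('Plate')",
)
print(build_prompt(memory, "make my usual breakfast"))
```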
See our paper for more!
title = "Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models",
author = "Sarch, Gabriel and
Wu, Yue and
Tarr, Michael and
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
year = "2023"}