Retrospective Learning from Interactions
October 17, 2024
Authors: Zizhao Chen, Mustafa Omer Gul, Yiwei Chen, Gloria Geng, Anne Wu, Yoav Artzi
cs.AI
Abstract
Multi-turn interactions between large language models (LLMs) and users
naturally include implicit feedback signals. If an LLM responds in an
unexpected way to an instruction, the user is likely to signal it by rephrasing
the request, expressing frustration, or pivoting to an alternative task. Such
signals are task-independent and occupy a relatively constrained subspace of
language, allowing the LLM to identify them even if it fails on the actual
task. This creates an avenue for continually learning from interactions without
additional annotations. We introduce ReSpect, a method to learn from such
signals in past interactions via retrospection. We deploy ReSpect in a new
multimodal interaction scenario, where humans instruct an LLM to solve an
abstract reasoning task with a combinatorial solution space. Through thousands
of interactions with humans, we show how ReSpect gradually improves task
completion rate from 31% to 82%, all without any external annotation.
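
The abstract describes the mechanism only at a high level: the model retrospects over logged interactions, decodes implicit feedback from users' follow-up turns, and learns from the decoded signals. Below is a minimal sketch of one round of such a loop, assuming (a) the deployed LLM itself serves as the feedback decoder and (b) positively-decoded turns feed a standard fine-tuning step. The names (LLM, Turn, decode_feedback, retrospection_round) and the prompt are illustrative placeholders, not the authors' actual interface.

```python
# Illustrative sketch of one retrospection round; not the paper's actual code.
from dataclasses import dataclass
from typing import Protocol


class LLM(Protocol):
    """Any text-in, text-out model; stands in for the deployed LLM."""
    def generate(self, prompt: str) -> str: ...


@dataclass
class Turn:
    instruction: str  # the user's instruction at this turn
    response: str     # the model's response/action
    followup: str     # the user's next utterance, carrying implicit feedback


def decode_feedback(llm: LLM, turn: Turn) -> str:
    """Classify the implicit signal in the follow-up turn: rephrasing the
    request or expressing frustration suggests 'negative'; moving on to a
    new request suggests 'positive'. Because these signals are
    task-independent, the model can decode them even when it fails the
    underlying task."""
    prompt = (
        "Did the user's follow-up indicate the response was satisfactory? "
        "Answer 'positive' or 'negative'.\n"
        f"Instruction: {turn.instruction}\n"
        f"Response: {turn.response}\n"
        f"Follow-up: {turn.followup}\n"
    )
    return llm.generate(prompt).strip().lower()


def retrospection_round(
    llm: LLM, logged_interactions: list[list[Turn]]
) -> list[tuple[str, str]]:
    """Retrospect over past interactions and collect (instruction, response)
    pairs decoded as positive; these become training data for the next
    round, with no external annotation."""
    data = []
    for interaction in logged_interactions:
        for turn in interaction:
            if decode_feedback(llm, turn) == "positive":
                data.append((turn.instruction, turn.response))
    return data  # feed to a fine-tuning step of your choice
```

Decoding feedback rather than judging task success is the crux of this design: the follow-up signals occupy a relatively constrained subspace of language, so even a model that fails the abstract reasoning task can classify them reliably enough to bootstrap its own training data across deployment rounds.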