

World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning

March 13, 2025
Authors: Siyin Wang, Zhaoye Fei, Qinyuan Cheng, Shiduo Zhang, Panpan Cai, Jinlan Fu, Xipeng Qiu
cs.AI

Abstract

Recent advances in large vision-language models (LVLMs) have shown promise for embodied task planning, yet they struggle with fundamental challenges like dependency constraints and efficiency. Existing approaches either solely optimize action selection or leverage world models during inference, overlooking the benefits of learning to model the world as a way to enhance planning capabilities. We propose Dual Preference Optimization (D^2PO), a new learning framework that jointly optimizes state prediction and action selection through preference learning, enabling LVLMs to understand environment dynamics for better planning. To automatically collect trajectories and stepwise preference data without human annotation, we introduce a tree search mechanism for extensive exploration via trial-and-error. Extensive experiments on VoTa-Bench demonstrate that our D^2PO-based method significantly outperforms existing methods and GPT-4o when applied to Qwen2-VL (7B), LLaVA-1.6 (7B), and LLaMA-3.2 (11B), achieving superior task success rates with more efficient execution paths.
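
As a rough illustration of the joint objective described in the abstract, the sketch below combines two DPO-style preference terms, one over action selection and one over next-state prediction. The function names, the weighting factor `alpha`, and the use of a standard DPO loss for each branch are assumptions for illustration only; the paper's exact D^2PO formulation may differ.

```python
# Minimal sketch (not the authors' code): a DPO-style preference loss
# applied to two branches -- action selection and next-state prediction.
import torch.nn.functional as F

def dpo_term(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_* are sequence log-likelihoods of the chosen (w) and
    rejected (l) responses under the policy; ref_logp_* are the
    same quantities under a frozen reference model.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

def d2po_loss(action, state, alpha=1.0, beta=0.1):
    """Hypothetical combination of the two preference terms.

    `action` and `state` are dicts with keys
    'logp_w', 'logp_l', 'ref_logp_w', 'ref_logp_l';
    the weight `alpha` and this exact form are assumptions.
    """
    l_action = dpo_term(action['logp_w'], action['logp_l'],
                        action['ref_logp_w'], action['ref_logp_l'], beta)
    l_state = dpo_term(state['logp_w'], state['logp_l'],
                       state['ref_logp_w'], state['ref_logp_l'], beta)
    return l_action + alpha * l_state
```

In this reading, the state-prediction term is what pushes the LVLM toward modeling environment dynamics, while the action term handles plan selection; the stepwise chosen/rejected pairs would come from the paper's tree-search exploration rather than human annotation.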
