WorldSimBench: Towards Video Generation Models as World Simulators
October 23, 2024
Authors: Yiran Qin, Zhelun Shi, Jiwen Yu, Xijun Wang, Enshen Zhou, Lijun Li, Zhenfei Yin, Xihui Liu, Lu Sheng, Jing Shao, Lei Bai, Wanli Ouyang, Ruimao Zhang
cs.AI
Abstract
Recent advancements in predictive models have demonstrated exceptional
capabilities in predicting the future state of objects and scenes. However, the
lack of categorization based on inherent characteristics continues to hinder
the progress of predictive model development. Additionally, existing benchmarks
are unable to effectively evaluate higher-capability, highly embodied
predictive models from an embodied perspective. In this work, we classify the
functionalities of predictive models into a hierarchy and take the first step
in evaluating World Simulators by proposing a dual evaluation framework called
WorldSimBench. WorldSimBench includes Explicit Perceptual Evaluation and
Implicit Manipulative Evaluation, encompassing human preference assessments
from the visual perspective and action-level evaluations in embodied tasks,
covering three representative embodied scenarios: Open-Ended Embodied
Environment, Autonomous Driving, and Robot Manipulation. In the Explicit
Perceptual Evaluation, we introduce the HF-Embodied Dataset, a video assessment
dataset based on fine-grained human feedback, which we use to train a Human
Preference Evaluator that aligns with human perception and explicitly assesses
the visual fidelity of World Simulators. In the Implicit Manipulative
Evaluation, we assess the video-action consistency of World Simulators by
evaluating whether the generated situation-aware video can be accurately
translated into the correct control signals in dynamic environments. Our
comprehensive evaluation offers key insights that can drive further innovation
in video generation models, positioning World Simulators as a pivotal
advancement toward embodied artificial intelligence.
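To make the dual evaluation concrete, below is a minimal, hypothetical sketch of how the two branches could fit together in code. It assumes placeholder names (GeneratedVideo, HumanPreferenceEvaluator, video_to_actions, rollout_success) that are not from the authors' released code; the real Human Preference Evaluator is a learned model trained on the HF-Embodied Dataset, and the real video-to-action translation and environment rollouts are far more involved than the stubs shown here.

```python
# Hypothetical sketch of WorldSimBench's dual evaluation loop.
# All class/function names are illustrative placeholders, not the paper's API.
import random
from dataclasses import dataclass
from typing import List


@dataclass
class GeneratedVideo:
    frames: List[list]   # placeholder for decoded frames from a World Simulator
    instruction: str     # the task/text prompt the video was conditioned on


class HumanPreferenceEvaluator:
    """Stand-in for the evaluator trained on the HF-Embodied Dataset.

    In the paper this is a model aligned with fine-grained human feedback;
    here a random score is returned purely to illustrate the interface.
    """

    def score(self, video: GeneratedVideo) -> float:
        return random.uniform(0.0, 1.0)  # hypothetical visual-fidelity score


def video_to_actions(video: GeneratedVideo) -> List[str]:
    """Hypothetical video-to-action translation (e.g., an inverse-dynamics or
    goal-conditioned policy). Returns a placeholder action sequence."""
    return ["noop"] * len(video.frames)


def rollout_success(actions: List[str], env_name: str) -> bool:
    """Placeholder for executing actions in one of the three embodied
    scenarios (Open-Ended Embodied Environment, Autonomous Driving,
    Robot Manipulation) and checking task completion."""
    return random.random() > 0.5


def evaluate(videos: List[GeneratedVideo], env_name: str):
    evaluator = HumanPreferenceEvaluator()
    # Explicit Perceptual Evaluation: human-aligned visual-fidelity scores.
    perceptual = [evaluator.score(v) for v in videos]
    # Implicit Manipulative Evaluation: video-action consistency in the env.
    successes = [rollout_success(video_to_actions(v), env_name) for v in videos]
    return sum(perceptual) / len(perceptual), sum(successes) / len(successes)


if __name__ == "__main__":
    videos = [GeneratedVideo(frames=[[]] * 16, instruction="pick up the cube")]
    print(evaluate(videos, env_name="Robot Manipulation"))
```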