
ObjectMover: Generative Object Movement with Video Prior

March 11, 2025
作者: Xin Yu, Tianyu Wang, Soo Ye Kim, Paul Guerrero, Xi Chen, Qing Liu, Zhe Lin, Xiaojuan Qi
cs.AI

Abstract

Simple as it seems, moving an object to another location within an image is, in fact, a challenging image-editing task that requires re-harmonizing the lighting, adjusting the pose based on perspective, accurately filling occluded regions, and ensuring coherent synchronization of shadows and reflections while maintaining the object identity. In this paper, we present ObjectMover, a generative model that can perform object movement in highly challenging scenes. Our key insight is to model this task as a sequence-to-sequence problem and fine-tune a video generation model to leverage its knowledge of consistent object generation across video frames. We show that with this approach, our model is able to adjust to complex real-world scenarios, handling extreme lighting harmonization and object effect movement. As large-scale data for object movement are unavailable, we construct a data generation pipeline using a modern game engine to synthesize high-quality data pairs. We further propose a multi-task learning strategy that enables training on real-world video data to improve the model's generalization. Through extensive experiments, we demonstrate that ObjectMover achieves outstanding results and adapts well to real-world scenarios.

