MOVIS: Enhancing Multi-Object Novel View Synthesis for Indoor Scenes

December 16, 2024
Authors: Ruijie Lu, Yixin Chen, Junfeng Ni, Baoxiong Jia, Yu Liu, Diwen Wan, Gang Zeng, Siyuan Huang
cs.AI

Abstract

Repurposing pre-trained diffusion models has proven effective for novel view synthesis (NVS). However, these methods are mostly limited to a single object; directly applying them to compositional multi-object scenarios yields inferior results, especially incorrect object placement and inconsistent shape and appearance under novel views. How to enhance and systematically evaluate the cross-view consistency of such models remains under-explored. To address this, we propose MOVIS, which enhances the structural awareness of the view-conditioned diffusion model for multi-object NVS in terms of model inputs, auxiliary tasks, and training strategy. First, we inject structure-aware features, including depth and object masks, into the denoising U-Net to enhance the model's comprehension of object instances and their spatial relationships. Second, we introduce an auxiliary task that requires the model to simultaneously predict novel-view object masks, further improving its ability to differentiate and place objects. Finally, we conduct an in-depth analysis of the diffusion sampling process and carefully devise a structure-guided timestep sampling scheduler during training, which balances the learning of global object placement and fine-grained detail recovery. To systematically evaluate the plausibility of synthesized images, we propose assessing cross-view consistency and novel-view object placement alongside existing image-level NVS metrics. Extensive experiments on challenging synthetic and realistic datasets demonstrate that our method exhibits strong generalization and produces consistent novel view synthesis, highlighting its potential to guide future 3D-aware multi-object NVS tasks.
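The abstract describes two structural ideas without implementation details: feeding depth and instance-mask features to the denoising U-Net alongside the noisy latent, and biasing timestep sampling from coarse (global object placement) to fine (detail recovery) over the course of training. The snippet below is a minimal, hypothetical PyTorch-style sketch of how such components might look; the function names, tensor shapes, and linear interpolation schedule are assumptions, not the paper's actual implementation.

```python
import torch


def sample_structured_timesteps(step, total_steps, num_train_timesteps=1000,
                                batch_size=8, device="cpu"):
    """Hypothetical structure-guided timestep sampler (assumption, not the paper's method).

    Early in training, bias sampling toward large (noisier) timesteps so the model
    focuses on global object placement; later, shift probability mass toward small
    timesteps to emphasize fine-grained detail recovery.
    """
    progress = min(max(step / max(total_steps, 1), 0.0), 1.0)  # 0 -> 1 over training
    t = torch.arange(num_train_timesteps, device=device, dtype=torch.float32)
    # Interpolate between a "prefer large t" ramp and a "prefer small t" ramp.
    weights = (1.0 - progress) * (t / num_train_timesteps) + \
              progress * (1.0 - t / num_train_timesteps)
    probs = weights / weights.sum()
    return torch.multinomial(probs, batch_size, replacement=True)


def build_unet_input(noisy_latent, depth, instance_mask):
    """Concatenate structure-aware features with the noisy latent along channels.

    noisy_latent:  (B, C, H, W) latent being denoised
    depth:         (B, 1, H, W) input-view depth map, resized to latent resolution
    instance_mask: (B, 1, H, W) input-view object instance mask
    """
    return torch.cat([noisy_latent, depth, instance_mask], dim=1)


# Example usage with dummy tensors (shapes are illustrative).
timesteps = sample_structured_timesteps(step=100, total_steps=10000, batch_size=4)
latent = torch.randn(4, 4, 32, 32)
depth = torch.rand(4, 1, 32, 32)
mask = torch.randint(0, 5, (4, 1, 32, 32)).float()
unet_input = build_unet_input(latent, depth, mask)  # shape (4, 6, 32, 32)
```

In practice the depth and mask inputs would likely be encoded or resampled to the latent resolution before concatenation, and the coarse-to-fine bias could follow a schedule other than the linear ramp used here.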

