MIDI: Multi-Instance Diffusion for Single Image to 3D Scene Generation
December 4, 2024
Authors: Zehuan Huang, Yuan-Chen Guo, Xingqiao An, Yunhan Yang, Yangguang Li, Zi-Xin Zou, Ding Liang, Xihui Liu, Yan-Pei Cao, Lu Sheng
cs.AI
Abstract
This paper introduces MIDI, a novel paradigm for compositional 3D scene
generation from a single image. Unlike existing methods that rely on
reconstruction or retrieval techniques or recent approaches that employ
multi-stage object-by-object generation, MIDI extends pre-trained image-to-3D
object generation models to multi-instance diffusion models, enabling the
simultaneous generation of multiple 3D instances with accurate spatial
relationships and high generalizability. At its core, MIDI incorporates a novel
multi-instance attention mechanism that effectively captures inter-object
interactions and spatial coherence directly within the generation process,
without the need for complex multi-step processes. The method utilizes partial
object images and global scene context as inputs, directly modeling object
completion during 3D generation. During training, we effectively supervise the
interactions between 3D instances using a limited amount of scene-level data,
while incorporating single-object data for regularization, thereby maintaining
the pre-trained generalization ability. MIDI demonstrates state-of-the-art
performance in image-to-scene generation, validated through evaluations on
synthetic data, real-world scene data, and stylized scene images generated by
text-to-image diffusion models.
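The abstract does not include an implementation, but the core multi-instance attention idea can be illustrated with a short sketch. The snippet below is a hypothetical PyTorch layer, not MIDI's released code: the class name, tensor shapes, and the use of nn.MultiheadAttention are assumptions. It shows one plausible reading of the mechanism, where the latent tokens of all object instances in a scene are concatenated so that each object's denoising tokens attend to every other object's tokens in a single pass.

```python
import torch
import torch.nn as nn


class MultiInstanceAttention(nn.Module):
    """Hypothetical sketch of a multi-instance attention layer.

    Rather than restricting attention to each object's own token
    sequence, queries from every instance attend over the concatenated
    tokens of all instances in the scene, so cross-object layout cues
    can flow during denoising. Names, shapes, and the reliance on
    nn.MultiheadAttention are assumptions, not MIDI's actual design.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_instances, tokens_per_instance, dim)
        b, n, t, d = x.shape
        # Flatten the instance axis so each token can attend across all objects.
        scene_tokens = x.reshape(b, n * t, d)
        out, _ = self.attn(scene_tokens, scene_tokens, scene_tokens)
        # Restore the per-instance layout for the rest of the denoiser.
        return out.reshape(b, n, t, d)


if __name__ == "__main__":
    # Toy latent tokens for a scene with 3 object instances.
    tokens = torch.randn(2, 3, 64, 256)  # (batch, instances, tokens, dim)
    layer = MultiInstanceAttention(dim=256)
    print(layer(tokens).shape)  # torch.Size([2, 3, 64, 256])
```

Under this reading, joint attention over all instances is what lets spatial relationships emerge directly during generation, rather than being resolved in a separate layout-optimization or object-by-object composition stage.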