Affordance-Aware Object Insertion via Mask-Aware Dual Diffusion
December 19, 2024
Authors: Jixuan He, Wanhua Li, Ye Liu, Junsik Kim, Donglai Wei, Hanspeter Pfister
cs.AI
Abstract
As a common image editing operation, image composition involves integrating
foreground objects into background scenes. In this paper, we expand the
application of the concept of Affordance from human-centered image composition
tasks to a more general object-scene composition framework, addressing the
complex interplay between foreground objects and background scenes. Following
the principle of Affordance, we define the affordance-aware object insertion
task, which aims to seamlessly insert any object into any scene with various
position prompts. To address the limited data issue and support this task,
we constructed the SAM-FB dataset, which contains over 3 million examples
across more than 3,000 object categories. Furthermore, we propose the
Mask-Aware Dual Diffusion (MADD) model, which utilizes a dual-stream
architecture to simultaneously denoise the RGB image and the insertion mask. By
explicitly modeling the insertion mask in the diffusion process, MADD
effectively facilitates the notion of affordance. Extensive experimental
results show that our method outperforms the state-of-the-art methods and
exhibits strong generalization performance on in-the-wild images. Please refer
to our code at https://github.com/KaKituken/affordance-aware-any.
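To make the dual-stream idea more concrete, below is a minimal sketch, assuming a simplified setting: two convolutional input branches take the noisy RGB latent and the noisy insertion mask, a shared trunk mixes them, and each branch predicts the noise of its own stream. The module names, shapes, and fusion scheme here are illustrative assumptions rather than the authors' implementation, and timestep, object, and position conditioning are omitted for brevity.

```python
# Illustrative sketch of a dual-stream denoiser in the spirit of MADD.
# NOT the authors' code: names, shapes, and the fusion scheme are assumptions
# chosen only to show how an RGB latent and an insertion mask could be
# denoised jointly in a single diffusion step.
import torch
import torch.nn as nn

class DualStreamDenoiser(nn.Module):
    def __init__(self, rgb_ch=4, mask_ch=1, hidden=64):
        super().__init__()
        # Separate input branches for the noisy RGB latent and the noisy mask.
        self.rgb_in = nn.Conv2d(rgb_ch, hidden, 3, padding=1)
        self.mask_in = nn.Conv2d(mask_ch, hidden, 3, padding=1)
        # Shared trunk lets the two streams exchange information.
        self.trunk = nn.Sequential(
            nn.Conv2d(2 * hidden, 2 * hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(2 * hidden, 2 * hidden, 3, padding=1),
        )
        # Each stream predicts the noise that was added to its own signal.
        self.rgb_out = nn.Conv2d(2 * hidden, rgb_ch, 3, padding=1)
        self.mask_out = nn.Conv2d(2 * hidden, mask_ch, 3, padding=1)

    def forward(self, noisy_rgb, noisy_mask):
        h = torch.cat([self.rgb_in(noisy_rgb), self.mask_in(noisy_mask)], dim=1)
        h = self.trunk(h)
        return self.rgb_out(h), self.mask_out(h)

# One hypothetical training step: add noise to both targets, predict it back.
model = DualStreamDenoiser()
rgb_latent = torch.randn(2, 4, 32, 32)  # clean RGB latent of the composite image
ins_mask = torch.randn(2, 1, 32, 32)    # clean insertion mask (where the object goes)
noise_rgb = torch.randn_like(rgb_latent)
noise_mask = torch.randn_like(ins_mask)
pred_rgb, pred_mask = model(rgb_latent + noise_rgb, ins_mask + noise_mask)
loss = (nn.functional.mse_loss(pred_rgb, noise_rgb)
        + nn.functional.mse_loss(pred_mask, noise_mask))
loss.backward()
```

The shared trunk is the point of the sketch: the mask-denoising objective backpropagates through the same features that produce the RGB prediction, which is one way that explicitly modeling the insertion mask during diffusion can inform where and how the object is composited.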