Material Anything: Generating Materials for Any 3D Object via Diffusion
November 22, 2024
Authors: Xin Huang, Tengfei Wang, Ziwei Liu, Qing Wang
cs.AI
Abstract
We present Material Anything, a fully-automated, unified diffusion framework
designed to generate physically-based materials for 3D objects. Unlike existing
methods that rely on complex pipelines or case-specific optimizations, Material
Anything offers a robust, end-to-end solution adaptable to objects under
diverse lighting conditions. Our approach leverages a pre-trained image
diffusion model, enhanced with a triple-head architecture and rendering loss to
improve stability and material quality. Additionally, we introduce confidence
masks as a dynamic switcher within the diffusion model, enabling it to
effectively handle both textured and texture-less objects across varying
lighting conditions. By employing a progressive material generation strategy
guided by these confidence masks, along with a UV-space material refiner, our
method ensures consistent, UV-ready material outputs. Extensive experiments
demonstrate our approach outperforms existing methods across a wide range of
object categories and lighting conditions.
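The progressive, confidence-mask-guided generation described in the abstract can be pictured as a per-pixel blend: where a material value projected from already-processed views is trusted, it is kept; elsewhere, the diffusion model's new output fills in. The sketch below is a minimal illustration of that switching idea; the function and array names are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def blend_with_confidence(generated, projected, confidence, threshold=0.5):
    """Merge a freshly generated material map with material projected from
    prior views, keeping projected pixels where confidence is high.
    All names here are illustrative, not the paper's API."""
    # binary switch per pixel, broadcast over the material channels
    mask = (confidence > threshold).astype(generated.dtype)[..., None]
    return mask * projected + (1.0 - mask) * generated

# toy example: 2x2 maps with 3 material channels (e.g. albedo RGB)
generated = np.zeros((2, 2, 3))      # newly generated material (all 0s)
projected = np.ones((2, 2, 3))       # material projected from prior views (all 1s)
confidence = np.array([[0.9, 0.1],
                       [0.8, 0.2]])  # left column is high-confidence
blended = blend_with_confidence(generated, projected, confidence)
# left column keeps the projected values; right column takes the new generation
```

In the full method the mask would act inside the diffusion process rather than as a post-hoc blend, but the same high-confidence/low-confidence switching governs which regions are preserved versus regenerated.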