

Material Anything: Generating Materials for Any 3D Object via Diffusion

November 22, 2024
作者: Xin Huang, Tengfei Wang, Ziwei Liu, Qing Wang
cs.AI

Abstract

We present Material Anything, a fully-automated, unified diffusion framework designed to generate physically-based materials for 3D objects. Unlike existing methods that rely on complex pipelines or case-specific optimizations, Material Anything offers a robust, end-to-end solution adaptable to objects under diverse lighting conditions. Our approach leverages a pre-trained image diffusion model, enhanced with a triple-head architecture and rendering loss to improve stability and material quality. Additionally, we introduce confidence masks as a dynamic switcher within the diffusion model, enabling it to effectively handle both textured and texture-less objects across varying lighting conditions. By employing a progressive material generation strategy guided by these confidence masks, along with a UV-space material refiner, our method ensures consistent, UV-ready material outputs. Extensive experiments demonstrate our approach outperforms existing methods across a wide range of object categories and lighting conditions.
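The confidence-mask idea described above can be illustrated with a minimal sketch: a per-pixel confidence map decides whether to keep material values projected from previously processed views or fall back to freshly generated ones. All names, the hard threshold, and the blend rule here are illustrative assumptions for intuition, not the paper's actual formulation.

```python
import numpy as np

def blend_with_confidence(generated, projected, confidence, threshold=0.5):
    """Hypothetical per-pixel switch: where confidence is high, trust the
    material projected from earlier views; elsewhere, use the newly
    generated material. Shapes: (H, W, C) maps, (H, W) confidence."""
    mask = (confidence >= threshold)[..., None]  # broadcast over channels
    return np.where(mask, projected, generated)

# Toy 2x2 "albedo" maps with 3 channels each.
generated = np.zeros((2, 2, 3))   # freshly generated material
projected = np.ones((2, 2, 3))    # material projected from prior views
confidence = np.array([[0.9, 0.1],
                       [0.6, 0.4]])
out = blend_with_confidence(generated, projected, confidence)
```

High-confidence pixels (0.9, 0.6) keep the projected values; low-confidence pixels (0.1, 0.4) take the generated ones, which is the "dynamic switcher" role the abstract ascribes to confidence masks during progressive generation.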
