RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models

September 30, 2024
Authors: Jangyeong Kim, Donggoo Kang, Junyoung Choi, Jeonga Wi, Junho Gwon, Jiun Bae, Dumim Yoon, Junghyun Han
cs.AI

Abstract

Text-to-texture generation has recently attracted increasing attention, but existing methods often suffer from the problems of view inconsistencies, apparent seams, and misalignment between textures and the underlying mesh. In this paper, we propose a robust text-to-texture method for generating consistent and seamless textures that are well aligned with the mesh. Our method leverages state-of-the-art 2D diffusion models, including SDXL and multiple ControlNets, to capture structural features and intricate details in the generated textures. The method also employs a symmetrical view synthesis strategy combined with regional prompts for enhancing view consistency. Additionally, it introduces novel texture blending and soft-inpainting techniques, which significantly reduce the seam regions. Extensive experiments demonstrate that our method outperforms existing state-of-the-art methods.
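The diffusion backbone named in the abstract maps onto off-the-shelf tooling. The sketch below shows one plausible way to condition SDXL with multiple ControlNets using Hugging Face's diffusers library; it is a minimal illustration, not the authors' released code. The model IDs, the choice of depth and canny-edge conditioning rendered from the mesh, the prompt, and the conditioning scales are all assumptions for demonstration.

```python
# Minimal sketch (assumptions, not the paper's configuration): SDXL guided by
# two ControlNets, one for depth and one for canny edges, to keep the generated
# view aligned with the underlying mesh geometry.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Conditioning images rendered from the mesh for one viewpoint (file paths are
# placeholders for whatever renderer produces these maps).
depth_image = Image.open("renders/view0_depth.png")
canny_image = Image.open("renders/view0_canny.png")

# Two ControlNets; the pipeline accepts a list and applies them jointly.
controlnets = [
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
    ),
]

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

# One conditioning image and one strength per ControlNet; prompt text is
# illustrative and could carry view- or region-specific wording.
image = pipe(
    prompt="a weathered leather backpack, photorealistic texture",
    image=[depth_image, canny_image],
    controlnet_conditioning_scale=[0.7, 0.5],
    num_inference_steps=30,
).images[0]
image.save("view0_texture.png")
```

In a text-to-texture setting, such a per-view generation step would be repeated over the chosen viewpoints, with the outputs projected back onto the mesh and merged; the paper's symmetrical view synthesis, regional prompting, texture blending, and soft-inpainting operate around this core generation loop.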
