AnyStory: Towards Unified Single and Multiple Subject Personalization in Text-to-Image Generation

January 16, 2025
Authors: Junjie He, Yuxiang Tuo, Binghui Chen, Chongyang Zhong, Yifeng Geng, Liefeng Bo
cs.AI

Abstract

Recently, large-scale generative models have demonstrated outstanding text-to-image generation capabilities. However, generating high-fidelity personalized images with specific subjects still presents challenges, especially in cases involving multiple subjects. In this paper, we propose AnyStory, a unified approach for personalized subject generation. AnyStory not only achieves high-fidelity personalization for single subjects, but also for multiple subjects, without sacrificing subject fidelity. Specifically, AnyStory models the subject personalization problem in an "encode-then-route" manner. In the encoding step, AnyStory utilizes a universal and powerful image encoder, i.e., ReferenceNet, in conjunction with a CLIP vision encoder to achieve high-fidelity encoding of subject features. In the routing step, AnyStory utilizes a decoupled instance-aware subject router to accurately perceive and predict the potential location of the corresponding subject in the latent space, and to guide the injection of subject conditions. Detailed experimental results demonstrate the excellent performance of our method in retaining subject details, aligning with text descriptions, and personalizing for multiple subjects. The project page is at https://aigcdesigngroup.github.io/AnyStory/ .
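The "encode-then-route" flow described above can be illustrated with a heavily simplified sketch: each reference image is encoded into a subject feature, a router assigns each subject a soft spatial mask over the latent, and subject features are injected only into their routed regions. All function names, shapes, and the similarity-based router below are illustrative assumptions for exposition; the paper's actual implementation uses ReferenceNet and a CLIP vision encoder inside a diffusion U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_subject(ref_image: np.ndarray, dim: int = 64) -> np.ndarray:
    """Stand-in for the ReferenceNet + CLIP encoding step: map a
    reference image to a single subject feature vector (assumption)."""
    flat = ref_image.reshape(-1)
    proj = rng.standard_normal((flat.size, dim)) / np.sqrt(flat.size)
    return flat @ proj  # (dim,)

def route_subjects(latent: np.ndarray, subject_feats: list) -> np.ndarray:
    """Stand-in for the instance-aware subject router: score each latent
    location against each subject feature, then softmax across subjects
    so every location is dominated by (routed to) one subject."""
    h, w, dim = latent.shape
    logits = np.stack(
        [latent.reshape(-1, dim) @ f for f in subject_feats], axis=0
    )  # (num_subjects, h*w)
    masks = np.exp(logits - logits.max(axis=0, keepdims=True))
    masks /= masks.sum(axis=0, keepdims=True)  # normalize over subjects
    return masks.reshape(len(subject_feats), h, w)

def inject_subjects(latent, subject_feats, masks, scale=0.1):
    """Masked condition injection: each subject's features only modulate
    the latent regions that its routing mask assigns to it."""
    out = latent.copy()
    for feat, mask in zip(subject_feats, masks):
        out = out + scale * mask[..., None] * feat[None, None, :]
    return out

# Toy usage: two reference subjects, one 8x8 latent with 64 channels.
refs = [rng.standard_normal((16, 16, 3)) for _ in range(2)]
feats = [encode_subject(r) for r in refs]
latent = rng.standard_normal((8, 8, 64))
masks = route_subjects(latent, feats)
new_latent = inject_subjects(latent, feats, masks)
```

Because the routing masks are normalized across subjects, the multi-subject case degrades gracefully to the single-subject case (one all-ones mask), which mirrors the unified single/multi-subject claim in the abstract.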
