
DoraCycle: Domain-Oriented Adaptation of Unified Generative Model in Multimodal Cycles

March 5, 2025
Authors: Rui Zhao, Weijia Mao, Mike Zheng Shou
cs.AI

Abstract

Adapting generative models to specific domains presents an effective solution for satisfying specialized requirements. However, adapting to some complex domains remains challenging, especially when these domains require substantial paired data to capture the targeted distributions. Since unpaired data from a single modality, such as vision or language, is more readily available, we utilize the bidirectional mappings between vision and language learned by the unified generative model to enable training on unpaired data for domain adaptation. Specifically, we propose DoraCycle, which integrates two multimodal cycles: text-to-image-to-text and image-to-text-to-image. The model is optimized through cross-entropy loss computed at the cycle endpoints, where both endpoints share the same modality. This facilitates self-evolution of the model without reliance on annotated text-image pairs. Experimental results demonstrate that for tasks independent of paired knowledge, such as stylization, DoraCycle can effectively adapt the unified model using only unpaired data. For tasks involving new paired knowledge, such as specific identities, a combination of a small set of paired image-text examples and larger-scale unpaired data is sufficient for effective domain-oriented adaptation. The code will be released at https://github.com/showlab/DoraCycle.
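To make the cycle objective concrete, here is a minimal sketch of the two cycle losses described in the abstract. It assumes a token-based unified model in which both images and text are represented as discrete tokens over a shared vocabulary; the `UnifiedModel` stub, its `generate` method, and all shapes and hyperparameters below are illustrative assumptions, not the authors' released implementation. In particular, the sketch detaches the intermediate generation so gradients flow only through the second mapping, which is a simplification of whatever gradient handling the actual method uses.

```python
# Hypothetical sketch of DoraCycle-style cycle training on unpaired data.
# UnifiedModel is a toy stand-in, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1024  # shared vocabulary over text and image tokens (assumption)

class UnifiedModel(nn.Module):
    """Toy stand-in for a unified generative model with bidirectional
    vision-language mappings over a shared token vocabulary."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 64)
        self.head = nn.Linear(64, VOCAB)

    def forward(self, tokens):
        # (B, L) token ids -> (B, L, VOCAB) logits for the other modality
        return self.head(self.embed(tokens))

    @torch.no_grad()
    def generate(self, tokens):
        # Greedy "translation" into the other modality; detached so the
        # cycle gradient only flows through the second mapping (a
        # simplification of the paper's actual procedure).
        return self.forward(tokens).argmax(dim=-1)

def cycle_loss(model, src_tokens):
    """One cycle, e.g. image -> text -> image: translate into the other
    modality, map back, and score the endpoint (which shares the source's
    modality) with cross-entropy against the original tokens."""
    mid = model.generate(src_tokens)   # first hop, no paired labels used
    logits = model(mid)                # second hop, back to src modality
    return F.cross_entropy(logits.reshape(-1, VOCAB), src_tokens.reshape(-1))

model = UnifiedModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Unpaired batches: image tokens and text tokens drawn independently.
image_tokens = torch.randint(0, VOCAB, (2, 16))
text_tokens = torch.randint(0, VOCAB, (2, 8))

# Sum of both cycles: image->text->image and text->image->text.
loss = cycle_loss(model, image_tokens) + cycle_loss(model, text_tokens)
opt.zero_grad()
loss.backward()
opt.step()
print(f"combined cycle loss: {loss.item():.3f}")
```

For tasks that introduce new paired knowledge (e.g., specific identities), the abstract indicates that a small set of paired image-text examples is mixed in; in a sketch like the one above, that would amount to adding a standard supervised cross-entropy term on the paired batch alongside the two cycle losses.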
