

KV-Edit: Training-Free Image Editing for Precise Background Preservation

February 24, 2025
作者: Tianrui Zhu, Shiyi Zhang, Jiawei Shao, Yansong Tang
cs.AI

Abstract

Background consistency remains a significant challenge in image editing tasks. Despite extensive developments, existing works still face a trade-off between maintaining similarity to the original image and generating content that aligns with the target. Here, we propose KV-Edit, a training-free approach that uses the KV cache in DiTs to maintain background consistency: background tokens are preserved rather than regenerated, eliminating the need for complex mechanisms or expensive training, and new content is generated within user-provided regions so that it integrates seamlessly with the background. We further explore the memory consumption of the KV cache during editing and optimize the space complexity to O(1) using an inversion-free method. Our approach is compatible with any DiT-based generative model without additional training. Experiments demonstrate that KV-Edit significantly outperforms existing approaches in terms of both background and image quality, even surpassing training-based methods. The project webpage is available at https://xilluill.github.io/projectpages/KV-Edit.
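The core idea of the abstract can be illustrated with a small sketch: keys and values for background tokens are cached once and reused at every denoising step, while only the user-specified editable tokens produce fresh queries, keys, and values. The editable tokens attend over the concatenation of cached background K/V and their own fresh K/V, so generated content blends with a background that is never regenerated. All names below (`kv_edit_attention`, `bg_cache`, the single-head NumPy attention) are hypothetical simplifications for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kv_edit_attention(x, mask, bg_cache, Wq, Wk, Wv):
    """One single-head attention step in the spirit of KV-Edit (sketch).

    x        : (N, d) token features at the current denoising step
    mask     : (N,) bool, True = editable (foreground) tokens
    bg_cache : tuple (K_bg, V_bg) of keys/values cached for background tokens
    """
    fg = x[mask]
    q = fg @ Wq                        # queries only for editable tokens
    k_fg, v_fg = fg @ Wk, fg @ Wv      # fresh K/V for the edited region
    K_bg, V_bg = bg_cache
    # Foreground queries attend over preserved background K/V plus
    # newly computed foreground K/V, so edits stay consistent with
    # a background that is never regenerated.
    K = np.concatenate([K_bg, k_fg])
    V = np.concatenate([V_bg, v_fg])
    attn = softmax(q @ K.T / np.sqrt(q.shape[-1]))
    out = x.copy()
    out[mask] = attn @ V               # update only the editable tokens
    return out                         # background tokens pass through unchanged
```

Because background tokens are copied through untouched, background consistency is exact by construction rather than approximated by reconstruction, which is the trade-off the abstract highlights.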

