Nearly Zero-Cost Protection Against Mimicry by Personalized Diffusion Models

December 16, 2024
Authors: Namhyuk Ahn, KiYoon Yoo, Wonhyuk Ahn, Daesik Kim, Seung-Hun Nam
cs.AI

Abstract

Recent advancements in diffusion models revolutionize image generation but pose risks of misuse, such as replicating artworks or generating deepfakes. Existing image protection methods, though effective, struggle to balance protection efficacy, invisibility, and latency, thus limiting practical use. We introduce perturbation pre-training to reduce latency and propose a mixture-of-perturbations approach that dynamically adapts to input images to minimize performance degradation. Our novel training strategy computes protection loss across multiple VAE feature spaces, while adaptive targeted protection at inference enhances robustness and invisibility. Experiments show comparable protection performance with improved invisibility and drastically reduced inference time. The code and demo are available at https://webtoon.github.io/impasto
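
The abstract describes the method only at a high level. Below is a minimal, hypothetical sketch (not the authors' released code) of the two ideas it names: a protection loss averaged over multiple VAE feature spaces, and a short inference-time refinement of a pre-trained perturbation toward a decoy target. The function names, the PGD-style update, the MSE feature loss, and all hyperparameters are illustrative assumptions.

import torch
import torch.nn.functional as F

def protection_loss(protected_img, target_img, vae_encoders):
    # Average a feature-matching loss over several VAE encoders so the
    # perturbation does not overfit to any single latent space (assumption:
    # each encoder is a callable returning a feature tensor).
    loss = protected_img.new_zeros(())
    for encoder in vae_encoders:
        with torch.no_grad():
            target_feat = encoder(target_img)    # fixed decoy-target features
        protected_feat = encoder(protected_img)  # gradients flow through here
        loss = loss + F.mse_loss(protected_feat, target_feat)
    return loss / len(vae_encoders)

def refine_perturbation(image, target, delta_init, vae_encoders,
                        steps=10, step_size=2 / 255, budget=8 / 255):
    # Lightly refine a pre-computed perturbation with a few PGD-style steps,
    # projecting back into an L_inf budget after each update.
    delta = delta_init.clone().requires_grad_(True)
    for _ in range(steps):
        loss = protection_loss((image + delta).clamp(0, 1), target, vae_encoders)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - step_size * grad.sign()).clamp(-budget, budget)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

Since the abstract attributes the latency savings to perturbation pre-training, any inference-time refinement like the loop above would have to stay very short to preserve the near-zero-cost claim.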

