Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing
November 25, 2024
Authors: Hanhui Wang, Yihua Zhang, Ruizheng Bai, Yue Zhao, Sijia Liu, Zhengzhong Tu
cs.AI
Abstract
Recent advancements in diffusion models have made generative image editing
more accessible, enabling creative edits but raising ethical concerns,
particularly regarding malicious edits to human portraits that threaten privacy
and identity security. Existing protection methods primarily rely on
adversarial perturbations to nullify edits but often fail against diverse
editing requests. We propose FaceLock, a novel approach to portrait protection
that optimizes adversarial perturbations to destroy or significantly alter
biometric information, rendering edited outputs biometrically unrecognizable.
FaceLock integrates facial recognition and visual perception into perturbation
optimization to provide robust protection against various editing attempts. We
also highlight flaws in commonly used evaluation metrics and reveal how they
can be manipulated, emphasizing the need for reliable assessments of
protection. Experiments show FaceLock outperforms baselines in defending
against malicious edits and is robust against purification techniques. Ablation
studies confirm its stability and broad applicability across diffusion-based
editing algorithms. Our work advances biometric defense and sets the foundation
for privacy-preserving practices in image editing. The code is available at:
https://github.com/taco-group/FaceLock.
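The core idea the abstract describes — optimizing a bounded adversarial perturbation so that a face-recognition model no longer matches the protected portrait to its original biometric embedding — can be sketched as a PGD-style loop. This is an illustrative sketch, not the authors' implementation: `face_encoder` stands in for any face-embedding model, the hyperparameters are placeholders, and the paper's full objective additionally integrates visual-perception terms omitted here.

```python
import torch
import torch.nn.functional as F

def facelock_style_perturb(image, face_encoder, steps=50, eps=8/255, alpha=2/255):
    """Sketch: find a small perturbation delta (||delta||_inf <= eps) that
    pushes the image's face embedding away from the clean embedding."""
    with torch.no_grad():
        clean_emb = face_encoder(image)  # biometric signature to destroy

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = face_encoder(image + delta)
        # Minimize cosine similarity to the clean embedding so that
        # edits derived from the protected image lose biometric identity.
        loss = F.cosine_similarity(emb, clean_emb).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # gradient-sign descent step
            delta.clamp_(-eps, eps)              # keep perturbation imperceptible
            delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```

A real deployment would use a pretrained face-recognition encoder (e.g. an ArcFace-style network) and back-propagate through the diffusion editing pipeline as the paper describes; the loop above only conveys the optimization structure.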