Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing
November 25, 2024
Authors: Hanhui Wang, Yihua Zhang, Ruizheng Bai, Yue Zhao, Sijia Liu, Zhengzhong Tu
cs.AI
Abstract
Recent advancements in diffusion models have made generative image editing
more accessible, enabling creative edits but raising ethical concerns,
particularly regarding malicious edits to human portraits that threaten privacy
and identity security. Existing protection methods primarily rely on
adversarial perturbations to nullify edits but often fail against diverse
editing requests. We propose FaceLock, a novel approach to portrait protection
that optimizes adversarial perturbations to destroy or significantly alter
biometric information, rendering edited outputs biometrically unrecognizable.
FaceLock integrates facial recognition and visual perception into perturbation
optimization to provide robust protection against various editing attempts. We
also highlight flaws in commonly used evaluation metrics and reveal how they
can be manipulated, emphasizing the need for reliable assessments of
protection. Experiments show FaceLock outperforms baselines in defending
against malicious edits and is robust against purification techniques. Ablation
studies confirm its stability and broad applicability across diffusion-based
editing algorithms. Our work advances biometric defense and sets the foundation
for privacy-preserving practices in image editing. The code is available at:
https://github.com/taco-group/FaceLock.
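The core idea described in the abstract — optimizing an adversarial perturbation so that the protected portrait's biometric embedding is pushed away from the original identity — can be illustrated with a minimal PGD-style sketch. This is not FaceLock's implementation: a fixed random linear map stands in for a real face-recognition network, the visual-perception term mentioned in the abstract is omitted, and all names (`embed`, `facelock_sketch`, `eps`, `alpha`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-embedding network (hypothetical; the real method
# would use an actual facial-recognition model's embedding function).
W = rng.standard_normal((16, 64))

def embed(x):
    """Map an 'image' vector to a unit-norm biometric embedding."""
    v = W @ x
    return v / np.linalg.norm(v)

def cosine(a, b):
    """Cosine similarity between two unit-norm embeddings."""
    return float(a @ b)

def facelock_sketch(x, eps=0.3, alpha=0.02, steps=200):
    """PGD-style loop: find delta with ||delta||_inf <= eps that minimizes
    the cosine similarity between embed(x + delta) and embed(x), i.e.
    pushes the protected image away from the original identity."""
    target = embed(x)                 # original biometric embedding
    delta = np.zeros_like(x)
    for _ in range(steps):
        v = W @ (x + delta)
        n = np.linalg.norm(v)
        u = v / n
        s = target @ u                # current identity similarity
        # Analytic gradient of s w.r.t. delta for the linear toy embedding.
        grad = W.T @ ((target - s * u) / n)
        # Signed descent step on similarity, projected back into the box.
        delta = np.clip(delta - alpha * np.sign(grad), -eps, eps)
    return delta
```

Under these toy assumptions, a bounded perturbation substantially lowers the identity similarity of the protected vector while leaving it numerically close to the original, which mirrors the stated goal of rendering edited outputs biometrically unrecognizable rather than merely nullifying the edit.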