Prompt2Perturb (P2P): Text-Guided Diffusion-Based Adversarial Attacks on Breast Ultrasound Images

December 13, 2024
Authors: Yasamin Medghalchi, Moein Heidari, Clayton Allard, Leonid Sigal, Ilker Hacihaliloglu
cs.AI

Abstract

Deep neural networks (DNNs) offer significant promise for improving breast cancer diagnosis in medical imaging. However, these models are highly susceptible to adversarial attacks (small, imperceptible input changes that can mislead classifiers), raising critical concerns about their reliability and security. Traditional attacks rely on fixed-norm perturbations, which align poorly with human perception. Diffusion-based attacks, in contrast, require pre-trained diffusion models; when such models are unavailable, training them demands substantial data, which is often infeasible in medical imaging due to the limited availability of datasets. Building on recent advances in learnable prompts, we propose Prompt2Perturb (P2P), a novel language-guided attack method that generates meaningful adversarial examples driven by text instructions. During the prompt-learning phase, our approach leverages learnable prompts within the text encoder to create subtle yet impactful perturbations that remain imperceptible while guiding the model toward targeted outcomes. Unlike current prompt-learning-based approaches, P2P directly updates text embeddings, avoiding the need to retrain diffusion models. Further, we leverage the finding that optimizing only the early reverse diffusion steps boosts efficiency while ensuring that the generated adversarial examples incorporate subtle noise, preserving ultrasound image quality without introducing noticeable artifacts. We show that our method outperforms state-of-the-art attack techniques across three breast ultrasound datasets in FID and LPIPS. Moreover, the generated images are both more natural in appearance and more effective than existing adversarial attacks. Our code will be publicly available at https://github.com/yasamin-med/P2P.
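To make the mechanism described in the abstract concrete, below is a minimal, self-contained sketch of the core idea: freeze the diffusion denoiser and the target classifier, treat only the prompt embeddings as learnable, and backpropagate a targeted classification loss through just the first few reverse diffusion steps. Everything here (ToyDenoiser, the linear classifier, K_EARLY, the simplified reverse update, the fidelity penalty) is a toy stand-in chosen for illustration, not the authors' implementation, which builds on a pretrained text-to-image diffusion model.

```python
# Hypothetical sketch of the P2P recipe with toy stand-in modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, N_TOKENS, K_EARLY = 64, 8, 3  # K_EARLY: optimize early reverse steps only

class ToyDenoiser(nn.Module):
    """Stand-in for a frozen text-conditioned diffusion denoiser."""
    def __init__(self):
        super().__init__()
        self.img = nn.Conv2d(1, 1, 3, padding=1)
        self.txt = nn.Linear(EMB_DIM, 1)
    def forward(self, x, t, text_embeds):
        # Predict noise conditioned on pooled prompt embeddings.
        cond = self.txt(text_embeds.mean(dim=1)).view(-1, 1, 1, 1)
        return self.img(x) + cond

denoiser = ToyDenoiser().eval()                                  # frozen
classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2)).eval()  # frozen
for p in list(denoiser.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)

x = torch.rand(1, 1, 32, 32)   # clean ultrasound image (toy data)
target = torch.tensor([1])     # attacker-chosen target class

# Only the prompt embeddings are learnable; no model retraining.
prompt_embeds = nn.Parameter(torch.randn(1, N_TOKENS, EMB_DIM) * 0.01)
opt = torch.optim.Adam([prompt_embeds], lr=1e-2)

for step in range(100):
    x_adv = x.clone()
    # Run only the first K_EARLY reverse diffusion steps, so the
    # injected perturbation stays subtle and optimization stays cheap.
    for t in range(K_EARLY):
        eps = denoiser(x_adv, t, prompt_embeds)
        x_adv = x_adv - 0.1 * eps  # simplified reverse update
    # Targeted attack loss plus a fidelity term to keep x_adv close to x.
    loss = (F.cross_entropy(classifier(x_adv), target)
            + 0.1 * (x_adv - x).pow(2).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design choice worth noting is that the gradient reaches the image only through the text-embedding pathway of the (frozen) denoiser, which is what makes the attack language-guided rather than a raw pixel-space perturbation.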
