Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning

December 16, 2024
Authors: Andrei Semenov, Philip Zmushko, Alexander Pichugin, Aleksandr Beznosikov
cs.AI

Abstract

Vertical Federated Learning (VFL) aims to enable collaborative training of deep learning models while maintaining privacy protection. However, the VFL procedure still has components that are vulnerable to attacks by malicious parties. In our work, we consider feature reconstruction attacks, a common risk targeting input data compromise. We theoretically claim that feature reconstruction attacks cannot succeed without knowledge of the prior distribution on data. Consequently, we demonstrate that even simple model architecture transformations can significantly impact the protection of input data during VFL. Confirming these findings with experimental results, we show that MLP-based models are resistant to state-of-the-art feature reconstruction attacks.
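The abstract's central claim — that feature reconstruction cannot succeed without a prior over the data — can be illustrated with a minimal sketch. In a split/VFL setup the server only observes the client's intermediate activations, and for a linear first layer any invertible transformation yields a different "input" producing identical activations. The variable names and the single-layer setup below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical client-side first layer in a VFL/split setup:
# the server only ever sees the activations h = x @ W.
d, k = 4, 3
x = rng.normal(size=(1, d))   # private client features
W = rng.normal(size=(d, k))   # client-side weights

h = x @ W                     # what the server observes

# Any invertible T gives an alternative pair (x', W') with the same
# activations: x' = x @ T, W' = inv(T) @ W. Without a prior on the
# data distribution, the attacker cannot tell which preimage is real.
T = rng.normal(size=(d, d))   # almost surely invertible
x_alt = x @ T
W_alt = np.linalg.inv(T) @ W
h_alt = x_alt @ W_alt

print(np.allclose(h, h_alt))  # True: same activations, different inputs
```

The ambiguity shown here is why a simple architectural change on the client side can matter: it controls how large this set of indistinguishable preimages is.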
