A New Federated Learning Framework Against Gradient Inversion Attacks
December 10, 2024
作者: Pengxin Guo, Shuang Zeng, Wenhao Chen, Xiaodan Zhang, Weihong Ren, Yuyin Zhou, Liangqiong Qu
cs.AI
Abstract
Federated Learning (FL) aims to protect data privacy by enabling clients to
collectively train machine learning models without sharing their raw data.
However, recent studies demonstrate that information exchanged during FL is
subject to Gradient Inversion Attacks (GIA) and, consequently, a variety of
privacy-preserving methods have been integrated into FL to thwart such attacks,
such as Secure Multi-Party Computation (SMC), Homomorphic Encryption (HE), and
Differential Privacy (DP). Despite their ability to protect data privacy, these
approaches inherently involve substantial privacy-utility trade-offs. By
revisiting the key to privacy exposure in FL under GIA, which lies in the
frequent sharing of model gradients that contain private data, we take a new
perspective by designing a novel privacy-preserving FL framework that effectively
"breaks the direct connection" between the shared parameters and the local
private data to defend against GIA. Specifically, we propose a Hypernetwork
Federated Learning (HyperFL) framework that utilizes hypernetworks to generate
the parameters of the local model and only the hypernetwork parameters are
uploaded to the server for aggregation. Theoretical analyses demonstrate the
convergence rate of the proposed HyperFL, while extensive experimental results
show the privacy-preserving capability and comparable performance of HyperFL.
Code is available at https://github.com/Pengxin-Guo/HyperFL.
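The core idea above can be illustrated with a minimal NumPy sketch: a tiny hypernetwork maps a client-specific embedding to the weights of a local linear classifier, and the server aggregates only the hypernetwork parameters. All dimensions, the FedAvg-style mean, and names such as `generate_local_params` are illustrative assumptions, not taken from the HyperFL codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
EMB_DIM, IN_DIM, N_CLASSES = 4, 8, 3

def init_hypernet():
    # Hypernetwork: a single linear map producing the flattened
    # weight matrix (IN_DIM x N_CLASSES) of the local model.
    return {"W": rng.normal(0.0, 0.1, (EMB_DIM, IN_DIM * N_CLASSES))}

def generate_local_params(hypernet, client_emb):
    # The local model's weights are *generated* from the hypernetwork;
    # they are never uploaded, breaking the direct link between shared
    # parameters and local private data.
    flat = client_emb @ hypernet["W"]
    return flat.reshape(IN_DIM, N_CLASSES)

def aggregate(hypernets):
    # Server sees and averages hypernetwork parameters only
    # (a FedAvg-style mean, assumed here for illustration).
    return {"W": np.mean([h["W"] for h in hypernets], axis=0)}

# Two clients with private embeddings; local weights stay on-device.
client_embs = [rng.normal(size=EMB_DIM) for _ in range(2)]
hypernets = [init_hypernet() for _ in client_embs]
global_hn = aggregate(hypernets)
local_W = generate_local_params(global_hn, client_embs[0])
print(local_W.shape)  # (8, 3)
```

This mirrors the separation the abstract describes: gradient inversion attacks target shared gradients of the task model, but here the server only ever receives hypernetwork parameters, which are one step removed from the data-facing local model.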