A New Federated Learning Framework Against Gradient Inversion Attacks
December 10, 2024
Authors: Pengxin Guo, Shuang Zeng, Wenhao Chen, Xiaodan Zhang, Weihong Ren, Yuyin Zhou, Liangqiong Qu
cs.AI
Abstract
Federated Learning (FL) aims to protect data privacy by enabling clients to
collectively train machine learning models without sharing their raw data.
However, recent studies demonstrate that information exchanged during FL is
subject to Gradient Inversion Attacks (GIA) and, consequently, a variety of
privacy-preserving methods have been integrated into FL to thwart such attacks,
such as Secure Multi-party Computing (SMC), Homomorphic Encryption (HE), and
Differential Privacy (DP). Despite their ability to protect data privacy, these
approaches inherently involve substantial privacy-utility trade-offs. By
revisiting the key to privacy exposure in FL under GIA, which lies in the
frequent sharing of model gradients that contain private data, we take a new
perspective by designing a novel privacy-preserving FL framework that effectively
"breaks the direct connection" between the shared parameters and the local
private data to defend against GIA. Specifically, we propose a Hypernetwork
Federated Learning (HyperFL) framework that utilizes hypernetworks to generate
the parameters of the local model, and only the hypernetwork parameters are
uploaded to the server for aggregation. Theoretical analyses demonstrate the
convergence rate of the proposed HyperFL, while extensive experimental results
show the privacy-preserving capability and comparable performance of HyperFL.
Code is available at https://github.com/Pengxin-Guo/HyperFL.
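To make the mechanism concrete, below is a minimal PyTorch sketch of the core idea: each client trains a hypernetwork whose output is the weights of its local task model, and only the hypernetwork's parameters would be uploaded for aggregation. All names and shapes here (HyperNet, local_step, the learnable client embedding, a single linear classifier head) are illustrative assumptions for exposition, not the authors' implementation from the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    """Maps a client embedding to the (weight, bias) of a linear classifier."""
    def __init__(self, embed_dim=64, feat_dim=128, num_classes=10):
        super().__init__()
        self.feat_dim, self.num_classes = feat_dim, num_classes
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim * num_classes + num_classes),
        )

    def forward(self, z):
        out = self.mlp(z)
        n = self.feat_dim * self.num_classes
        w = out[:n].view(self.num_classes, self.feat_dim)
        b = out[n:]
        return w, b

# Per-client state: a learnable private embedding plus the hypernetwork.
embed = nn.Parameter(torch.randn(64))
hnet = HyperNet()
opt = torch.optim.SGD([embed, *hnet.parameters()], lr=0.01)

def local_step(features, labels):
    """One local update: gradients flow through the generated weights back
    into the hypernetwork, so the task model itself is never shared."""
    w, b = hnet(embed)
    logits = F.linear(features, w, b)
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random data (batch of 32 feature vectors):
loss = local_step(torch.randn(32, 128), torch.randint(0, 10, (32,)))

# After local training, the client uploads only hnet.state_dict();
# the server averages these hypernetwork parameters (e.g., FedAvg-style),
# while embed stays private on the client.
```

Because the server only ever aggregates hypernetwork parameters, the updates it observes are not gradients of a model that directly consumed private inputs, which is the sense in which HyperFL "breaks the direct connection" exploited by GIA.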