Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning
December 16, 2024
Authors: Andrei Semenov, Philip Zmushko, Alexander Pichugin, Aleksandr Beznosikov
cs.AI
Abstract
Vertical Federated Learning (VFL) aims to enable collaborative training of
deep learning models while maintaining privacy protection. However, the VFL
procedure still has components that are vulnerable to attacks by malicious
parties. In our work, we consider feature reconstruction attacks, a common risk
targeting input data compromise. We theoretically claim that feature
reconstruction attacks cannot succeed without knowledge of the prior
distribution on data. Consequently, we demonstrate that even simple model
architecture transformations can significantly impact the protection of input
data during VFL. Confirming these findings with experimental results, we show
that MLP-based models are resistant to state-of-the-art feature reconstruction
attacks.
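To make the setting concrete, here is a minimal, illustrative sketch (not code from the paper) of the vertical split the abstract refers to: each party holds a disjoint subset of the features and passes them through a local MLP "bottom" model, sending only the resulting activations to the server. All dimensions, weights, and the two-layer MLP structure below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, layers):
    # A simple MLP bottom model: each layer is affine followed by ReLU.
    h = x
    for W, b in layers:
        h = np.maximum(h @ W + b, 0.0)
    return h

def init_mlp(d_in, d_hidden, d_out):
    # Small two-layer MLP with random weights (illustrative only).
    return [
        (rng.normal(size=(d_in, d_hidden)) * 0.5, np.zeros(d_hidden)),
        (rng.normal(size=(d_hidden, d_out)) * 0.5, np.zeros(d_out)),
    ]

# Two parties, each holding a disjoint feature subset of the same samples
# (the "vertical" split that gives VFL its name).
n_samples, d_a, d_b, d_hidden, d_emb = 4, 5, 3, 8, 2
x_a = rng.normal(size=(n_samples, d_a))  # party A's private features
x_b = rng.normal(size=(n_samples, d_b))  # party B's private features

weights_a = init_mlp(d_a, d_hidden, d_emb)
weights_b = init_mlp(d_b, d_hidden, d_emb)

# Each party sends only its embedding (activations) to the server;
# the raw features never leave the party. A feature reconstruction
# attacker observes these embeddings, not x_a or x_b, and the ReLU MLP
# mapping 5 inputs down to 2 embedding dimensions is many-to-one, so
# inverting it requires extra (prior) knowledge of the data distribution.
emb_a = mlp_forward(x_a, weights_a)
emb_b = mlp_forward(x_b, weights_b)
server_input = np.concatenate([emb_a, emb_b], axis=1)

print(server_input.shape)  # (4, 4)
```

The shape check confirms the server only receives a 4-dimensional joint embedding per sample rather than the original 8 raw features held across the two parties.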