Why does federated learning need differential privacy? Intuitively, one might think that federated learning alone already protects privacy, since it transmits gradient information rather than the raw data. In fact, gradients themselves have been shown to leak private information, as demonstrated by these two papers: [2] Melis et al. Exploiting Unintended Feature Leakage in Collaborative Learning. In IEEE Symposium on Security & Privacy, 2019. [3] Hitaj et al. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning. In ACM SIGSAC Conference on Computer and Communications Security.

There are many ways to prevent gradient leakage. Currently they fall into two main families:

1. Methods based on secure multi-party computation. This family covers many techniques, including aggregating gradients with a secure aggregation protocol, performing computations under homomorphic encryption, and so on; there is a large body of papers and methods here.

2. Methods based on differential privacy. The core idea is to add noise to the gradient information. The type of noise can vary, but the two mainstream choices are Laplace noise and Gaussian noise.
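The differential-privacy approach above can be sketched in a few lines: clip each gradient to bound its sensitivity, then add Laplace or Gaussian noise before sharing it. This is a minimal illustration, not the exact mechanism of any cited paper; `privatize_gradient` and its parameter names are hypothetical.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_scale=1.0,
                       mechanism="gaussian", rng=None):
    """Bound a gradient's L2 norm (its sensitivity), then add noise.

    Illustrative sketch: names and parameters are assumptions,
    not part of any particular FL framework.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Clip: after this step the L2 norm is at most clip_norm,
    # which bounds the sensitivity the noise has to mask.
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    # Add calibrated noise; Gaussian and Laplace are the two common choices.
    if mechanism == "gaussian":
        noise = rng.normal(0.0, noise_scale * clip_norm, size=grad.shape)
    elif mechanism == "laplace":
        noise = rng.laplace(0.0, noise_scale * clip_norm, size=grad.shape)
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")
    return clipped + noise
```

In a real FL client, each participant would apply this to its local gradient before uploading it, so the server only ever sees noisy, norm-bounded updates; the choice of `noise_scale` then determines the privacy budget.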
1. Federated Learning with Differential Privacy: Algorithms and Performance Analysis
2. The Value of Collaboration in Convex Machine Learning with Differential Privacy
1. https://www.zhihu.com/column/c_1293586488769040384
2. Melis et al. Exploiting Unintended Feature Leakage in Collaborative Learning. In IEEE Symposium on Security & Privacy, 2019.
3. Hitaj et al. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning. In ACM SIGSAC Conference on Computer and Communications Security.
4. http://www.huaxiaozhuan.com/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/chapters/3_regularization.html
5. https://www.bookstack.cn/read/huaxiaozhuan-ai/spilt.4.d07cc9a8a1364f3d.md
6. http://yongxintong.group/static/talks/2019/federated-DP.pdf