A Quantitative Study of Privacy-Preserving Techniques in Federated Learning for Distributed Systems

Authors

  • Prasoon T Kumar, India

Keywords

Federated learning, privacy-preserving techniques, secure multi-party computation, homomorphic encryption, trusted execution environments

Abstract

Federated Learning (FL) has emerged as a transformative approach for collaborative learning in distributed systems, allowing data to remain decentralized while enabling joint model training. However, privacy concerns present significant challenges in ensuring secure and trustworthy implementations. This study conducts a quantitative analysis of privacy-preserving techniques in FL, categorizing and evaluating mechanisms such as differential privacy, secure multi-party computation, homomorphic encryption, and trusted execution environments. A systematic examination of their trade-offs in terms of performance, scalability, and resilience to adversarial attacks is presented. Through a critical synthesis of prior research, this paper provides a comprehensive framework for assessing privacy techniques, offering insights into their application across diverse distributed systems. The findings aim to inform researchers and practitioners in selecting optimal approaches for privacy-preserving federated learning.
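To make the differential-privacy mechanism discussed above concrete, the following is a minimal sketch of centrally applied differential privacy in federated averaging: each client update is clipped to bound its L2 sensitivity, then Gaussian noise is added to the average (the Gaussian mechanism, in the spirit of Abadi et al. and McMahan et al., cited below). All function names and parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Clip a client's model update so its L2 norm is at most clip_norm.
    # This bounds the sensitivity of the average to any single client.
    norm = np.linalg.norm(update)
    if norm == 0.0:
        return update
    return update * min(1.0, clip_norm / norm)

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Average the clipped updates, then add Gaussian noise whose standard
    # deviation scales with the per-client sensitivity (clip_norm / n).
    # noise_multiplier trades privacy for accuracy; its mapping to an
    # (epsilon, delta) guarantee requires a separate privacy accountant.
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

In practice this server-side noise addition is often combined with the secure-aggregation protocols also surveyed here, so that the server only ever sees the masked sum of updates rather than any individual contribution.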

References

McMahan, Brendan, et al. "Communication-Efficient Learning of Deep Networks from Decentralized Data." Artificial Intelligence and Statistics, PMLR, 2017, pp. 1273–1282.

Bonawitz, Keith, et al. "Practical Secure Aggregation for Privacy-Preserving Machine Learning." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2017, pp. 1175–1191.

Dwork, Cynthia, and Aaron Roth. "The Algorithmic Foundations of Differential Privacy." Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3–4, 2014, pp. 211–407.

Shokri, Reza, and Vitaly Shmatikov. "Privacy-Preserving Deep Learning." Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, ACM, 2015, pp. 1310–1321.

Abadi, Martín, et al. "Deep Learning with Differential Privacy." Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016, pp. 308–318.

Gentry, Craig. "Fully Homomorphic Encryption Using Ideal Lattices." Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, ACM, 2009, pp. 169–178.

Hardy, Stephen, et al. "Private Federated Learning on Vertically Partitioned Data via Entity Resolution and Additively Homomorphic Encryption." arXiv preprint, 2017. arXiv:1711.10677.

Phong, Le Trieu, et al. "Privacy-Preserving Deep Learning via Additively Homomorphic Encryption." IEEE Transactions on Information Forensics and Security, vol. 13, no. 5, 2018, pp. 1333–1345.

Melis, Luca, et al. "Exploiting Unintended Feature Leakage in Collaborative Learning." 2019 IEEE Symposium on Security and Privacy (SP), IEEE, 2019, pp. 691–706.

Trask, Andrew, et al. "Beyond Federated Learning: Collaborative AI Learning Using Decentralized and Privacy-Preserving Protocols." arXiv preprint, 2019.

Published

2023-03-18

How to Cite

Prasoon T Kumar. (2023). A Quantitative Study of Privacy-Preserving Techniques in Federated Learning for Distributed Systems. International Journal of Artificial Intelligence, 4(1), 1–4. https://ijai.in/index.php/home/article/view/IJAI.04.01.001