Balancing Privacy and Performance in Federated Learning: An Empirical Study of Hybrid Privacy Mechanisms

Authors

  • Arvind Kumar, Dr. Umesh Prasad

Keywords

Federated Learning; Privacy Preservation; Hybrid Cryptography; Differential Privacy; Secure Aggregation; Performance Trade-offs

Abstract

Federated learning has emerged as a promising paradigm for collaborative machine learning that enables multiple clients to jointly train models without centralizing sensitive data. While this decentralized approach significantly reduces direct data exposure, it does not inherently guarantee privacy. Gradients, model updates, and trained parameters have been shown to leak sensitive information through inference and reconstruction attacks. To address these risks, a range of privacy-preserving techniques—such as cryptographic protection and statistical noise injection—have been proposed. However, these methods often introduce substantial trade-offs in terms of model accuracy, communication efficiency, and computational overhead.

This paper presents an empirical study that systematically examines the balance between privacy and performance in federated learning systems employing hybrid privacy mechanisms. By combining secure aggregation, partial homomorphic encryption, and differential privacy, the study evaluates how layered privacy defenses influence learning accuracy, communication cost, computation overhead, and resistance to privacy leakage. Experimental results across multiple configurations demonstrate that hybrid mechanisms significantly enhance privacy while maintaining acceptable learning performance. The findings highlight that privacy and utility need not be mutually exclusive, provided that privacy mechanisms are carefully integrated and empirically optimized.
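To make the layered design concrete, the following is a minimal sketch (not the paper's actual implementation) of how two of the named layers can compose: each client clips and noises its update with the Gaussian mechanism of differential privacy, then applies pairwise secure-aggregation masks in the style of Bonawitz et al. (2017) so that individual updates are hidden while the masks cancel in the server's sum. All function names, the noise scale, and the toy dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_clip_and_noise(update, clip_norm=1.0, noise_std=0.1):
    # Clip the update to bound its L2 sensitivity, then add Gaussian
    # noise -- the Gaussian mechanism of differential privacy.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def pairwise_masks(n_clients, dim, seed=42):
    # Each pair (i, j) derives a shared pseudorandom mask; client i adds
    # it and client j subtracts it, so all masks cancel in the sum.
    masks = [np.zeros(dim) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = np.random.default_rng(seed + i * n_clients + j).normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

# Simulate 3 clients, each holding a local model update.
dim = 4
updates = [rng.normal(size=dim) for _ in range(3)]
noised = [dp_clip_and_noise(u) for u in updates]
masks = pairwise_masks(3, dim)
masked = [u + m for u, m in zip(noised, masks)]  # what each client transmits

# The server sums the masked updates; the pairwise masks cancel exactly,
# recovering only the DP-noised aggregate, never any single update.
aggregate = np.sum(masked, axis=0)
assert np.allclose(aggregate, np.sum(noised, axis=0))
```

In a real deployment the pairwise masks would be derived via key agreement rather than shared seeds, and the partially homomorphic encryption layer discussed in the paper would sit on top of this; the sketch only illustrates why the server learns the sum but not the summands.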


How to Cite

Arvind Kumar, Dr. Umesh Prasad. (2025). Balancing Privacy and Performance in Federated Learning: An Empirical Study of Hybrid Privacy Mechanisms. International Journal of Research & Technology, 13(3), 676–682. Retrieved from https://ijrt.org/j/article/view/897
