An efficient and robust privacy-preserving framework for cross-device federated learning

Bibliographic Details
Title: An efficient and robust privacy-preserving framework for cross-device federated learning
Authors: Weidong Du, Min Li, Liqiang Wu, Yiliang Han, Tanping Zhou, Xiaoyuan Yang
Source: Complex & Intelligent Systems, Vol 9, Iss 5, Pp 4923-4937 (2023)
Publisher Information: Springer, 2023.
Publication Year: 2023
Collection: LCC: Electronic computers. Computer science; LCC: Information technology
Subject Terms: Cross-device federated learning, Privacy-preserving, Robust, Efficient, Collusion-attack resistant, Post-quantum security, Electronic computers. Computer science, QA75.5-76.95, Information technology, T58.5-58.64
Abstract: To ensure that no private information leaks during the aggregation phase of federated learning (FL), many frameworks use homomorphic encryption (HE) to mask local model updates. However, the heavy overheads of these frameworks make them unsuitable for cross-device FL, where the clients are a huge number of mobile and edge devices with limited computing resources. Worse still, some of them also fail to handle clients joining and leaving dynamically. To overcome these shortcomings, we propose a threshold multi-key HE scheme, tMK-CKKS, and design an efficient and robust privacy-preserving FL framework. Robustness means that our framework allows clients to join or drop out during the training process. Moreover, because tMK-CKKS can pack multiple messages into a single ciphertext, our framework significantly reduces computation and communication overhead. The threshold mechanism in tMK-CKKS ensures that our framework resists collusion attacks between the server and no more than t (the threshold value) curious internal clients. Finally, we implement our framework in FedML and conduct extensive experiments to evaluate it. Utility evaluations on 6 benchmark datasets show that our framework protects privacy without sacrificing model accuracy. Efficiency evaluations on 4 typical deep learning models demonstrate that our framework speeds up computation by at least 1.21× over the xMK-CKKS-based framework, 15.84× over the BatchCrypt-based framework, and 20.30× over the CRT-Paillier-based framework, and that it reduces the communication burden by at least 8.61 MB over the BatchCrypt-based framework, 35.36 MB over the xMK-CKKS-based framework, and 42.58 MB over the CRT-Paillier-based framework. The advantages in both computation and communication grow with the size of the deep learning model.
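
The HE-masked aggregation the abstract describes can be sketched with an off-the-shelf CKKS implementation. The snippet below is a minimal illustration in Python using the TenSEAL library with ordinary single-key CKKS; it is not the paper's tMK-CKKS scheme, and the encryption parameters and toy update vectors are assumptions made for the demo. It shows the two properties the framework relies on: packing many model weights into one ciphertext, and letting the server sum ciphertexts without decrypting any individual update.

    # Minimal sketch of HE-masked aggregation with single-key CKKS (TenSEAL).
    # NOT the paper's tMK-CKKS; parameters and vectors are illustrative only.
    import tenseal as ts

    # CKKS context; these parameter choices are a common demo setting,
    # not the ones used in the paper.
    ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
    ctx.global_scale = 2 ** 40

    # Each client packs a (toy) flattened weight-update vector into a single
    # ciphertext; in practice thousands of weights fit in one CKKS ciphertext.
    updates = [[0.1, -0.2, 0.3], [0.0, 0.5, -0.1], [0.2, 0.1, 0.4]]
    cts = [ts.ckks_vector(ctx, u) for u in updates]

    # The server aggregates ciphertexts homomorphically and never decrypts.
    agg = cts[0] + cts[1] + cts[2]

    # Decrypting the sum (here with the single secret key) yields the
    # averaged global update.
    avg = [x / len(updates) for x in agg.decrypt()]
    print(avg)  # approximately [0.1, 0.1333, 0.2]

In the paper's setting, decryption of the aggregate would instead be performed jointly by the holders of the threshold key shares, so no single party, including the server, ever sees an individual client's update.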
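The collusion-resistance claim rests on a threshold mechanism: decryption requires more than t cooperating parties, so the server plus at most t curious clients learn nothing. A minimal sketch of that idea via Shamir secret sharing over a prime field follows; the prime, helper names, and toy parameters are hypothetical illustrations, not the paper's construction.

    # Minimal (t, n) Shamir secret sharing over GF(P): any t+1 shares
    # reconstruct the secret, while t or fewer reveal nothing. Hypothetical
    # demo; tMK-CKKS applies the threshold idea to the HE decryption key.
    import random

    P = 2**127 - 1  # a Mersenne prime; the field choice is an assumption

    def share(secret, t, n):
        """Split secret into n shares using a random degree-t polynomial."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 (needs Python 3.8+ for pow)."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = share(secret=123456789, t=2, n=5)
    assert reconstruct(shares[:3]) == 123456789  # any t+1 = 3 shares suffice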
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2199-4536; 2198-6053
Relation: https://doaj.org/toc/2199-4536; https://doaj.org/toc/2198-6053
DOI: 10.1007/s40747-023-00978-9
Access URL: https://doaj.org/article/9a03faa22eca4411aad3450f616da97f
Accession Number: edsdoj.9a03faa22eca4411aad3450f616da97f
Database: Directory of Open Access Journals