Abstract
This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy without compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels compared to FL.
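To make the two-way mixup idea concrete, below is a minimal NumPy sketch of the device-side linear mixup and a server-side inverse mixup applied across uploads from two different devices. This is only an illustration under assumed details (two samples per pair, a single shared mixing ratio `lam`, and inversion of the 2x2 mixing matrix); it is not the paper's exact Mix2FLD procedure, whose mixing and de-mixing rules are specified in the letter itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x_a, x_b, lam):
    """Device side: linearly mix two raw samples with ratio lam (assumed form)."""
    return lam * x_a + (1.0 - lam) * x_b

def inverse_mixup(m_1, m_2, lam):
    """Server side (illustrative): apply the inverse of the 2x2 mixing matrix
    [[lam, 1-lam], [1-lam, lam]] to a pair of mixed uploads from two
    different devices, assuming lam != 0.5 so the matrix is invertible."""
    A = np.array([[lam, 1.0 - lam],
                  [1.0 - lam, lam]])
    M = np.stack([m_1, m_2])        # shape (2, d)
    return np.linalg.solve(A, M)    # shape (2, d): inversely mixed samples

# Toy example: each device mixes a pair of 4-dim samples with lam = 0.7,
# then the server inversely mixes one upload from each device.
lam = 0.7
x1_a, x1_b = rng.normal(size=4), rng.normal(size=4)   # device 1 raw samples
x2_a, x2_b = rng.normal(size=4), rng.normal(size=4)   # device 2 raw samples

m_dev1 = mixup(x1_a, x1_b, lam)   # uploaded by device 1
m_dev2 = mixup(x2_a, x2_b, lam)   # uploaded by device 2

demixed = inverse_mixup(m_dev1, m_dev2, lam)
print(demixed.shape)  # (2, 4)
```

The point of the sketch is that no raw sample ever leaves a device (only mixtures are uploaded), while the server can still synthesize de-mixed samples spanning multiple devices to drive the output-to-parameter conversion.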
| Original language | English |
|---|---|
| Article number | 9121290 |
| Pages (from-to) | 2211-2215 |
| Number of pages | 5 |
| Journal | IEEE Communications Letters |
| Volume | 24 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 2020 Oct |
Bibliographical note
Publisher Copyright: © 1997-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Modelling and Simulation
- Computer Science Applications
- Electrical and Electronic Engineering