Mix2FLD: Downlink Federated Learning after Uplink Federated Distillation with Two-Way Mixup

Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong Lyun Kim

Research output: Contribution to journal › Article › peer-review


Abstract

This letter proposes Mix2FLD, a novel communication-efficient and privacy-preserving distributed machine learning framework. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink, as in federated distillation (FD), whereas global model parameters are downloaded in the downlink, as in federated learning (FL). This requires converting model outputs into model parameters at the server, after collecting additional data samples from the devices. To preserve privacy without compromising accuracy, devices upload linearly mixed-up local samples, which the server inversely mixes up across different devices. Numerical evaluations show that, under asymmetric uplink-downlink channels, Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% compared with FL.
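For intuition about the two-way Mixup step described above, the sketch below illustrates the general idea under simplifying assumptions: each device uploads linearly mixed-up local samples (standard Mixup), and the server linearly combines mixed samples uploaded by two different devices to synthesize seed samples for the output-to-parameter conversion. The function names, the data, and the server-side de-mixing coefficient are illustrative assumptions, not the exact inverse-mixup formulation of the letter.

```python
import numpy as np

def mixup(x1, y1, x2, y2, lam):
    """Device-side linear Mixup: blend two local samples and their one-hot labels."""
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix

def inverse_mixup(x_mix_a, x_mix_b, lam):
    """Server-side inverse Mixup (illustrative only).

    Linearly combines mixed samples uploaded by two *different* devices with
    weights chosen to undo the shared uplink mixing ratio. This coefficient
    choice is a plausible sketch, not necessarily the one used in Mix2FLD.
    Labels can be inversely mixed with the same weights.
    """
    assert lam != 0.5, "lam = 0.5 makes the linear de-mixing singular"
    return (lam * x_mix_a - (1.0 - lam) * x_mix_b) / (2.0 * lam - 1.0)

# Toy usage with random 4-dimensional samples and 10-class one-hot labels.
rng = np.random.default_rng(0)
lam = 0.8  # mixing ratio assumed to be shared by both devices

# Device A and device B each mix two of their own local samples before upload.
xa, _ = mixup(rng.normal(size=4), np.eye(10)[3], rng.normal(size=4), np.eye(10)[7], lam)
xb, _ = mixup(rng.normal(size=4), np.eye(10)[1], rng.normal(size=4), np.eye(10)[5], lam)

# The server combines the two uploads without ever seeing a raw sample.
x_seed = inverse_mixup(xa, xb, lam)
print(x_seed)
```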

Original language: English
Article number: 9121290
Pages (from-to): 2211-2215
Number of pages: 5
Journal: IEEE Communications Letters
Volume: 24
Issue number: 10
DOIs
Publication status: Published - Oct 2020

Bibliographical note

Publisher Copyright:
© 1997-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Modelling and Simulation
  • Computer Science Applications
  • Electrical and Electronic Engineering
