LocFedMix-SL: Localize, Federate, and Mix for Improved Scalability, Convergence, and Latency in Split Learning

Seungeun Oh, Jihong Park, Praneeth Vepakomma, Sihun Baek, Ramesh Raskar, Mehdi Bennis, Seong Lyun Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)


Split learning (SL) is a promising distributed learning framework that makes it possible to utilize the huge data and parallel computing resources of mobile devices. SL is built upon a model-split architecture, wherein a server stores an upper model segment that is shared by multiple mobile clients, each storing its own lower model segment. Without exchanging raw data, SL achieves high accuracy and fast convergence by only uploading smashed data from clients and downloading global gradients from the server. Nonetheless, the original implementation of SL serves multiple clients sequentially, incurring high latency when there are many clients. A parallel implementation of SL has great potential to reduce latency, yet existing parallel SL algorithms compromise scalability and/or convergence speed. Motivated by this, the goal of this article is to develop a scalable parallel SL algorithm with fast convergence and low latency. As a first step, we identify that the fundamental bottleneck of existing parallel SL comes from the model-split and parallel computing architectures, under which the server-client model updates are often imbalanced, and the client models are prone to detaching from the server's model. To fix this problem, by carefully integrating local parallelism, federated learning, and mixup augmentation techniques, we propose a novel parallel SL framework, coined LocFedMix-SL. Simulation results corroborate that LocFedMix-SL achieves improved scalability, convergence speed, and latency compared to sequential SL as well as state-of-the-art parallel SL algorithms such as SplitFed and LocSplitFed.
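The abstract's three ingredients can be illustrated concretely: the client computes up to the cut layer and uploads smashed data, the server finishes the forward pass and returns gradients, client segments are averaged FedAvg-style, and mixup is applied at the cut layer. Below is a minimal numpy sketch under assumed tiny dimensions and a one-linear-layer segment on each side; the mixup placement and all hyperparameters here are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
n, d_in, d_hid, d_out = 8, 4, 5, 3

W_c = rng.normal(size=(d_in, d_hid)) * 0.1   # client-side lower segment
W_s = rng.normal(size=(d_hid, d_out)) * 0.1  # server-side upper segment

x = rng.normal(size=(n, d_in))                    # raw data stays on the client
y = np.eye(d_out)[rng.integers(d_out, size=n)]    # one-hot labels

# --- Client: forward pass up to the cut layer ---
h = np.maximum(x @ W_c, 0.0)    # "smashed data" uploaded to the server

# Mixup augmentation at the cut layer (one possible placement):
lam = rng.beta(2.0, 2.0)
perm = rng.permutation(n)
h_mix = lam * h + (1 - lam) * h[perm]
y_mix = lam * y + (1 - lam) * y[perm]

# --- Server: forward through the upper segment, softmax cross-entropy ---
logits = h_mix @ W_s
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
loss = -np.mean(np.sum(y_mix * np.log(p + 1e-12), axis=1))

# --- Server: backward pass; gradient at the cut layer goes back down ---
g_logits = (p - y_mix) / n
g_Ws = h_mix.T @ g_logits        # server updates its own segment
g_hmix = g_logits @ W_s.T        # "global gradient" downloaded by the client

# --- Client: backprop through the mixup and its lower segment ---
g_h = lam * g_hmix
np.add.at(g_h, perm, (1 - lam) * g_hmix)  # route gradient to the mixed partner
g_Wc = x.T @ (g_h * (h > 0.0))            # ReLU mask from the pre-mix forward

lr = 0.1
W_s -= lr * g_Ws
W_c -= lr * g_Wc

# --- Federate: average several clients' lower segments (FedAvg step) ---
W_c_clients = [W_c + rng.normal(size=W_c.shape) * 0.01 for _ in range(3)]
W_c_avg = np.mean(W_c_clients, axis=0)
```

Note that only the smashed data `h_mix` and the cut-layer gradient `g_hmix` cross the network, never `x` or the full model, which is the privacy and communication argument the abstract makes.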

Original language: English
Title of host publication: WWW 2022 - Proceedings of the ACM Web Conference 2022
Publisher: Association for Computing Machinery, Inc
Number of pages: 11
ISBN (Electronic): 9781450390965
Publication status: Published - 2022 Apr 25
Event: 31st ACM World Wide Web Conference, WWW 2022 - Virtual, Online, France
Duration: 2022 Apr 25 - 2022 Apr 29

Publication series

Name: WWW 2022 - Proceedings of the ACM Web Conference 2022


Conference: 31st ACM World Wide Web Conference, WWW 2022
City: Virtual, Online

Bibliographical note

Funding Information:
This work was supported in part by The National Key Research and Development Program of China under grant 2020AAA0106000, the National Natural Science Foundation of China under U1936217, 61971267, 61972223, 61941117, 62171260, 61861136003, and the China Postdoctoral Science Foundation under 2021M691830.

Publisher Copyright:
© 2022 ACM.

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Software
