Abstract
Recent studies on semantic communication commonly rely on neural network (NN) based transceivers such as deep joint source and channel coding (DeepJSCC). Unlike traditional transceivers, these neural transceivers are trainable on actual source data and channels, enabling them to extract and communicate semantics. On the flip side, each neural transceiver is inherently biased towards specific source data and channels, making it difficult for different transceivers to understand each other's intended semantics, particularly upon their initial encounter. To align semantics across multiple neural transceivers, we propose a distributed learning based solution that leverages split learning (SL) and partial NN fine-tuning techniques. In this method, referred to as SL with layer freezing (SLF), each encoder downloads a misaligned decoder and locally fine-tunes a fraction of these encoder-decoder NN layers. By adjusting this fraction, SLF controls computing and communication costs. Simulation results confirm the effectiveness of SLF in aligning semantics under different source data and channel dissimilarities, in terms of classification accuracy, reconstruction errors, and the recovery time needed to comprehend intended semantics after misalignment.
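Below is a minimal PyTorch sketch, not taken from the paper, illustrating the layer-freezing idea behind SLF: a local encoder is paired with a downloaded (misaligned) decoder, only a chosen fraction of layers is left trainable, and the pair is fine-tuned end-to-end on local data. The module shapes, the fraction value, the loss, and the helper names (`build_mlp`, `freeze_all_but_fraction`) are illustrative assumptions; the channel is omitted for brevity.

```python
# Minimal SLF-style sketch: fine-tune only a fraction of encoder-decoder layers.
# All sizes, names, and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

def build_mlp(dims):
    layers = []
    for i in range(len(dims) - 1):
        layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # drop trailing ReLU

encoder = build_mlp([784, 256, 64, 16])   # local semantic encoder
decoder = build_mlp([16, 64, 256, 784])   # decoder downloaded from another transceiver

def freeze_all_but_fraction(model, fraction):
    """Freeze layers, leaving only the last `fraction` of them trainable."""
    linear_layers = [m for m in model if isinstance(m, nn.Linear)]
    n_trainable = max(1, int(round(fraction * len(linear_layers))))
    for layer in linear_layers[:-n_trainable]:
        for p in layer.parameters():
            p.requires_grad = False

# A larger fraction means more trainable layers: better alignment,
# but higher computing and communication costs.
fraction = 0.5
freeze_all_but_fraction(encoder, fraction)
freeze_all_but_fraction(decoder, fraction)

params = [p for p in list(encoder.parameters()) + list(decoder.parameters())
          if p.requires_grad]
optimizer = torch.optim.Adam(params, lr=1e-3)

x = torch.rand(32, 784)                  # local source data (placeholder)
z = encoder(x)                           # semantic representation (channel omitted)
x_hat = decoder(z)                       # reconstruction at the downloaded decoder
loss = nn.functional.mse_loss(x_hat, x)  # fine-tune the unfrozen layers only
loss.backward()
optimizer.step()
```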
| Original language | English |
| --- | --- |
| Pages (from-to) | 15815-15819 |
| Number of pages | 5 |
| Journal | IEEE Transactions on Vehicular Technology |
| Volume | 73 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 1967-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Automotive Engineering
- Aerospace Engineering
- Computer Networks and Communications
- Electrical and Electronic Engineering