Deep reinforcement learning-based distributed congestion control in cellular V2X networks

Joo Young Choi, Han Shin Jo, Cheol Mun, Jong Gwan Yook

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)


Distributed congestion control (DCC) improves system performance by lowering channel congestion in vehicular environments with high vehicle density. The 3rd Generation Partnership Project standard defines the related metric of channel busy ratio (CBR) and introduces possible rate and power control mechanisms to mitigate channel congestion on the cellular vehicle-to-everything (C-V2X) sidelink. However, the standard does not specify DCC for C-V2X in sufficient detail to implement these controls. In this letter, we propose a novel DCC algorithm based on deep reinforcement learning (DRL) to improve congestion control performance on the C-V2X sidelink. In the proposed algorithm, the DRL agent observes a CBR state and selects the packet transmission rate that maximizes a packet delivery rate (PDR) reward while maintaining high channel utilization. Simulation results show that the proposed algorithm provides performance gains in terms of PDR and sidelink throughput compared with the existing DCC method.
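The control loop the abstract describes (observe CBR state, select a transmission rate, receive a PDR-based reward) can be illustrated with a toy sketch. The paper uses deep RL; the stand-in below uses tabular Q-learning instead, and the environment model, state discretization, candidate rates, and reward shape are all hypothetical illustrations, not the authors' design.

```python
import random

random.seed(0)

CBR_STATES = 10                       # hypothetical: CBR in [0, 1] split into 10 bins
RATES_HZ = [1, 2, 5, 10, 20]          # hypothetical candidate packet transmission rates

def step(cbr_bin, rate_idx):
    """Toy environment: higher rates raise CBR, and congestion degrades PDR."""
    rate = RATES_HZ[rate_idx]
    cbr = min(1.0, max(0.0, 0.04 * rate + 0.05 * cbr_bin
                       + random.uniform(-0.05, 0.05)))
    pdr = max(0.0, 1.0 - cbr ** 2)    # toy PDR that falls as the channel congests
    reward = pdr * rate               # reward delivery while keeping channel use high
    next_bin = min(CBR_STATES - 1, int(cbr * CBR_STATES))
    return next_bin, reward

def train(steps=5000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over (CBR bin, rate) pairs with eps-greedy exploration."""
    q = [[0.0] * len(RATES_HZ) for _ in range(CBR_STATES)]
    s = 0
    for _ in range(steps):
        if random.random() < eps:
            a = random.randrange(len(RATES_HZ))
        else:
            a = max(range(len(RATES_HZ)), key=lambda i: q[s][i])
        s2, r = step(s, a)
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
    return q

q = train()
# Greedy policy: one chosen rate index per CBR bin.
policy = [max(range(len(RATES_HZ)), key=lambda i: q[s][i]) for s in range(CBR_STATES)]
print(policy)
```

In the paper, the tabular Q-function would be replaced by a deep network and the environment by a C-V2X sidelink simulation; the sketch only shows the state-action-reward structure the abstract names.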

Original language: English
Pages (from-to): 2582-2586
Number of pages: 5
Journal: IEEE Wireless Communications Letters
Issue number: 11
Publication status: Published - 2021 Nov 1

Bibliographical note

Publisher Copyright:
© 2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Electrical and Electronic Engineering

