Abstract

Accurate link adaptation is a major challenge in 5G, which supports a wide range of services, including ultra-reliable low-latency communication (URLLC). URLLC has very strict latency and reliability constraints. Diverse and fast fading channel conditions cause the channel quality indicator (CQI) feedback from user equipment (UEs) to be outdated by the time it reaches the base station (BS). The BS uses the reported CQI values to perform link adaptation, assigning the modulation and coding scheme (MCS) that matches each reported CQI value. Outdated CQI therefore leads to the allocation of either a higher MCS than required, which degrades reliability, or a lower MCS than required, which degrades spectral efficiency and latency. Thus, novel methods are needed to perform link adaptation for URLLC. In this paper, we propose a reinforcement learning (RL) based intelligent link adaptation scheme for a time-correlated, fast fading channel. The RL-based method intelligently predicts future CQI values and allocates the MCS for data transmission accordingly. Here we use a contextual multi-armed bandit (MAB) algorithm for link adaptation. The proposed method is compared with the baseline outer loop link adaptation (OLLA) method. Simulation results show that the RL-based method outperforms the OLLA-based scheme in terms of both reliability and spectral efficiency.
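To make the contextual MAB idea concrete, the sketch below treats the reported CQI as the context and the candidate MCS indices as the bandit's arms, with an epsilon-greedy policy and an incremental mean-reward update. This is an illustrative sketch only: the class name, the epsilon-greedy policy, and the reward convention (e.g. spectral efficiency on ACK, zero on NACK) are assumptions for exposition, not the paper's exact formulation.

```python
import random


class ContextualMCSBandit:
    """Illustrative epsilon-greedy contextual bandit for MCS selection.

    Context = reported CQI level; arms = MCS indices. The table sizes,
    epsilon value, and reward definition are assumptions, not taken
    from the paper.
    """

    def __init__(self, num_cqi_levels=16, num_mcs=28, epsilon=0.1):
        self.epsilon = epsilon
        self.num_mcs = num_mcs
        # Per-(CQI, MCS) running statistics: pull counts and mean reward.
        self.counts = [[0] * num_mcs for _ in range(num_cqi_levels)]
        self.values = [[0.0] * num_mcs for _ in range(num_cqi_levels)]

    def select_mcs(self, cqi):
        """Pick an MCS for the reported CQI: explore with probability
        epsilon, otherwise exploit the arm with the best estimated reward."""
        if random.random() < self.epsilon:
            return random.randrange(self.num_mcs)
        vals = self.values[cqi]
        return max(range(self.num_mcs), key=vals.__getitem__)

    def update(self, cqi, mcs, reward):
        """Incremental mean update after observing the transmission
        outcome (e.g. reward = spectral efficiency on ACK, 0 on NACK)."""
        self.counts[cqi][mcs] += 1
        n = self.counts[cqi][mcs]
        self.values[cqi][mcs] += (reward - self.values[cqi][mcs]) / n
```

In this framing, the bandit learns per-context reward estimates online from ACK/NACK feedback, so it can compensate for outdated CQI without an explicit channel prediction model; the paper's actual contextual MAB may use a different policy and feature set.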
