Abstract

The overheads associated with feedback-based channel acquisition can greatly compromise the achievable rates of FDD-based massive MIMO systems. Indeed, downlink (DL) training and uplink (UL) feedback overheads scale linearly with the number of base station (BS) antennas, in sharp contrast to TDD-based massive MIMO, where a single UL pilot trains the whole BS array. In this work, we propose a graph-theoretic approach to reducing DL training and UL feedback overheads in FDD massive MIMO systems. In particular, we consider a single-cell scenario involving a single BS with a massive antenna array serving single-antenna mobile stations (MSs) in the DL. We assume the BS employs two-stage beamforming in the DL, comprising DFT pre-beamforming followed by MU-MIMO precoding. The proposed graph-theoretic approach exploits knowledge of the angular spectra of the BS-MS channels to construct DL training protocols with reduced overheads. Simulation results reveal that the proposed training-resource allocation method can provide a sum-rate gain of approximately 35% compared to conventional orthogonal training. Our analysis also sheds light on the impact of overhead reduction on channel estimation quality and, in turn, on the achievable rates.
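
To make the two-stage DL beamforming structure mentioned above concrete, the following is a minimal sketch of DFT pre-beamforming followed by a MU-MIMO precoder. The dimensions (64 BS antennas, 8 beams, 4 users) and the zero-forcing choice for the second stage are illustrative assumptions, not the paper's specific design.

```python
import numpy as np

M = 64   # BS antennas (massive array) -- assumed for illustration
B = 8    # angular beams kept by the DFT pre-beamformer -- assumed
K = 4    # single-antenna MSs served in the DL -- assumed

rng = np.random.default_rng(0)

# Stage 1: DFT pre-beamformer, i.e. B columns of the unitary M-point DFT
# matrix, forming a fixed grid of angular beams.
dft = np.fft.fft(np.eye(M)) / np.sqrt(M)
F = dft[:, :B]                                   # M x B pre-beamformer

# Effective reduced-dimension channel seen after pre-beamforming.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
H_eff = H @ F                                    # K x B effective channel

# Stage 2: MU-MIMO precoder on the effective channel
# (zero-forcing here, purely as a placeholder for the second stage).
P = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)   # B x K
P /= np.linalg.norm(P, 'fro')                    # normalize transmit power

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)     # user symbols
x = F @ (P @ s)                                  # transmitted M x 1 signal
```

Because the MU-MIMO stage only needs the K x B effective channel rather than the full K x M channel, the DL training and UL feedback burden is tied to the number of active beams, which is the dimension the proposed training-resource allocation operates on.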
