Abstract

We use a partially observable Markov decision process (POMDP) framework to design an optimal antenna selection policy for downlink transmit beamforming at a multi-antenna base station (BS) equipped with only a limited number of RF chains. Assuming that the channel state evolves according to a finite-state Markov process and that only the channel coefficients corresponding to previously selected antennas are available at the BS, we use the POMDP framework for antenna selection with the aim of maximizing the long-term expected downlink data rate. To avoid the high computational complexity of the value iteration algorithm, we focus on the myopic policy and prove that, for a positively correlated two-state Markov model of the channel over each antenna, the myopic policy is optimal for antenna selection for any number of RF chains. Based on this finding, for general fading channels, we propose to quantize each channel into two levels and apply the myopic policy for antenna selection. Our simulation results show that using this two-state coarse channel quantization for antenna selection results in only a small loss in performance, compared with the antenna selection technique that uses full channel state information without quantization.
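
To illustrate the idea, the following is a minimal simulation sketch of a myopic antenna-selection rule for two-state (good/bad) Markov channels. The antenna count, number of RF chains, transition probabilities, and the use of the number of selected "good" antennas as a stand-in for the beamforming rate are illustrative assumptions, not values or models taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                 # transmit antennas at the BS (assumed value)
K = 2                 # available RF chains (assumed value)
p11, p01 = 0.8, 0.3   # P(good -> good), P(bad -> good); p11 > p01 => positive correlation
T = 10_000            # simulated time slots

# True channel states (1 = good, 0 = bad), initialized at the stationary distribution.
pi_good = p01 / (1.0 - p11 + p01)
state = (rng.random(N) < pi_good).astype(int)

# Belief vector: P(antenna i is in the good state), given past observations.
belief = np.full(N, pi_good)

total_reward = 0.0
for _ in range(T):
    # Myopic policy: pick the K antennas with the largest beliefs.
    selected = np.argsort(belief)[-K:]

    # Only the selected antennas' channels are observed; the immediate reward
    # here is simply the number of selected antennas in the good state
    # (a crude proxy for the achievable beamforming rate).
    obs = state[selected]
    total_reward += obs.sum()

    # Belief update: unselected antennas propagate through the Markov chain;
    # selected antennas jump to p11 (observed good) or p01 (observed bad).
    belief = belief * p11 + (1.0 - belief) * p01
    belief[selected] = np.where(obs == 1, p11, p01)

    # True channels evolve according to the two-state Markov chain.
    go_good = np.where(state == 1, p11, p01)
    state = (rng.random(N) < go_good).astype(int)

print(f"average per-slot reward: {total_reward / T:.3f}")
```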
