Abstract

In this paper, we propose a deep learning (DL) model to estimate the channel matrix for a massive multiple-input multiple-output (MIMO) system with a one-bit analog-to-digital converter (ADC). Various DL methods have been proposed in the literature to estimate the channel matrix; however, they are limited to a small range of noise levels. Since a DL model is expected to perform better at higher signal-to-noise ratio (SNR), we are motivated to partition the data appropriately so that the estimation performance improves with SNR. We propose splitting the received signal into sequences and treating the entire system as multiple sub-systems, where each antenna serving the same number of users is considered a sub-system. The model is then trained on the data received by each sub-system sequentially. However, conventional back-propagation over such extensive training, owing to the large number of base station (BS) antennas, suffers from vanishing gradients, leading to training failure. Hence, we combine long short-term memory (LSTM) with gated recurrent units (GRU) to accelerate the training of the network, under the hypothesis that the LSTM effectively processes the MIMO data and overcomes the vanishing gradient problem, whereas the GRU accelerates the convergence rate. The results show that our model outperforms other methods at different SNRs and for various numbers of BS antennas.
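
To make the described architecture concrete, the following is a minimal sketch of an LSTM-GRU hybrid that maps one-bit quantized received-signal sequences (one sequence element per sub-system) to channel estimates. The framework (PyTorch), class and layer names, hidden sizes, and input/output dimensions are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class LstmGruChannelEstimator(nn.Module):
    """Hybrid LSTM-GRU network: LSTM front end to process the per-sub-system
    sequence and mitigate vanishing gradients, GRU back end assumed here to
    speed up convergence, linear read-out for the channel estimate."""

    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, y_seq):
        # y_seq: (batch, num_subsystems, input_dim) one-bit received samples
        h, _ = self.lstm(y_seq)
        h, _ = self.gru(h)
        return self.fc(h)  # (batch, num_subsystems, output_dim)

# Example usage with made-up sizes: 8 sub-systems, 16 real-valued features
# per sub-system, 32 channel coefficients to estimate per sub-system.
model = LstmGruChannelEstimator(input_dim=16, hidden_dim=64, output_dim=32)
y = torch.sign(torch.randn(4, 8, 16))  # toy sign-quantized (one-bit) input
h_hat = model(y)                       # estimated channel, shape (4, 8, 32)
```

Feeding each sub-system as one step of the sequence lets the recurrent layers share parameters across sub-systems, which is one way the per-sub-system sequential training described above could be realized.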
