Abstract

In this paper, we investigate the design of linear precoders for vector Gaussian channels via stochastic optimization and deep neural networks (DNNs). We assume that the channel inputs are drawn from practical finite alphabets, and we search for precoders maximizing the mutual information between channel inputs and outputs. Though the problem is generally non-convex, we prove that when the right singular matrix of the precoder is fixed, any local optimum of this problem is also a global optimum. Based on this fact, an efficient projected stochastic gradient descent (PSGD) algorithm is designed to search for the optimal precoders. Moreover, to reduce the complexity of computing the a posteriori means involved in the gradient calculation, the K-best algorithm is adopted to approximate the a posteriori means with negligible loss of accuracy. Furthermore, to avoid explicit calculation of the mutual information and its gradients, DNN-based autoencoders (AEs) are constructed for this precoding task, and an efficient training algorithm is proposed. We also prove that the AEs, with 'softmax' activation and 'categorical cross entropy' loss, maximize the mutual information under reasonable assumptions. Then, in order to extend the AE methods to large-scale systems, 'sigmoid' activation and 'binary cross entropy' loss are used so that the size of the AEs does not grow prohibitively large, and we prove that this maximizes a lower bound of the mutual information under reasonable assumptions. Finally, to make the precoders practical for high-speed wireless scenarios, we propose an offline training paradigm that trains DNNs to infer optimal precoders from channel state information, instead of training online for each channel realization. Simulation results show that all the proposed methods work well in maximizing the mutual information and improving bit error rate (BER) performance.
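
To make the AE-based precoding idea concrete, the following is a minimal sketch, not the paper's implementation: one-hot symbols pass through a learnable linear precoder, a fixed vector Gaussian (AWGN) channel, and a DNN decoder whose softmax output is trained with categorical cross entropy. The alphabet size, antenna numbers, network width, power normalization, and hyperparameters are all illustrative assumptions.

```python
# Hedged sketch of AE precoding over a vector Gaussian channel (assumed setup, not the paper's code).
import torch
import torch.nn as nn

M = 16          # assumed finite-alphabet size (number of candidate input symbols)
NT, NR = 2, 2   # assumed numbers of transmit / receive antennas
SNR_DB = 10.0

H = torch.randn(NR, NT)              # one fixed (real-valued) channel realization, for illustration
noise_std = 10 ** (-SNR_DB / 20)

precoder = nn.Linear(M, NT, bias=False)   # linear precoder acting on one-hot symbol indicators
decoder = nn.Sequential(                   # DNN decoder; softmax is applied inside the CE loss
    nn.Linear(NR, 64), nn.ReLU(),
    nn.Linear(64, M),
)
opt = torch.optim.Adam(list(precoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()            # categorical cross entropy over the M symbols

for step in range(2000):
    labels = torch.randint(0, M, (256,))                    # random input symbols
    x = nn.functional.one_hot(labels, M).float()
    s = precoder(x)                                         # precoded transmit vectors
    s = s / s.norm(dim=1, keepdim=True).mean()              # crude average-power normalization (assumption)
    y = s @ H.T + noise_std * torch.randn(256, NR)          # vector Gaussian channel
    logits = decoder(y)
    loss = loss_fn(logits, labels)                          # minimizing CE relates to maximizing MI (paper's claim)
    opt.zero_grad(); loss.backward(); opt.step()
```

Replacing the one-hot representation and softmax/categorical cross entropy with bit labels, sigmoid outputs, and binary cross entropy, as the abstract describes for large-scale systems, keeps the output layer size logarithmic in the alphabet size.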
