Abstract

In this paper, we develop a gradient-free optimization methodology for efficient resource allocation in Gaussian MIMO multiple access channels. Our approach combines two main ingredients: (i) an entropic semidefinite optimization scheme based on matrix exponential learning (MXL); and (ii) a one-shot gradient estimator that achieves low variance through the reuse of past information. This novel algorithm, which we call the gradient-free MXL algorithm with callbacks (MXL0$^{+}$), retains the convergence speed of gradient-based methods while requiring only minimal feedback per iteration, namely a single scalar. In more detail, in a MIMO multiple access channel with $K$ users and $M$ transmit antennas per user, the MXL0$^{+}$ algorithm achieves $\epsilon$-optimality within $\text{poly}(K,M)/\epsilon^2$ iterations (on average and with high probability), even when implemented in a fully distributed, asynchronous manner. For cross-validation, we also perform a series of numerical experiments in medium- to large-scale MIMO networks under realistic channel conditions. Throughout our experiments, the performance of MXL0$^{+}$ matches, and sometimes exceeds, that of gradient-based MXL methods, all the while operating with a vastly reduced communication overhead. In view of these findings, the MXL0$^{+}$ algorithm appears to be uniquely suited for distributed massive MIMO systems where gradient calculations can become prohibitively expensive.
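
To make the two ingredients of the abstract concrete, the following Python sketch illustrates the general structure of such a scheme: each user keeps a Hermitian score matrix, maps it to a trace-constrained transmit covariance via a matrix exponential (the MXL step), and updates the score with a one-point zeroth-order gradient estimate that reuses the previously observed sum-rate scalar in place of a second function evaluation. This is an illustration under our own assumptions, not the authors' MXL0$^{+}$ implementation; the channel model, perturbation scheme (applied in the score space for simplicity), step sizes, and estimator scaling are all placeholders.

```python
# Illustrative sketch only: MXL-style exponential mapping + one-point zeroth-order
# gradient estimate with reuse of the previous scalar observation. Not the authors'
# exact MXL0+ algorithm; all parameters and the channel model are assumptions.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
K, M, N = 3, 4, 8          # users, transmit antennas per user, receive antennas (assumed)
P_max = 1.0                # per-user power budget (assumed)
H = [rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)) for _ in range(K)]

def covariance(Y):
    """MXL step: map a Hermitian score matrix Y to a covariance with trace P_max."""
    E = expm(Y)
    return P_max * E / np.trace(E).real

def sum_rate(Q_list):
    """System sum rate log det(I + sum_k H_k Q_k H_k^H); only this scalar is fed back."""
    S = np.eye(N, dtype=complex)
    for k in range(K):
        S += H[k] @ Q_list[k] @ H[k].conj().T
    return np.linalg.slogdet(S)[1]

def random_hermitian(m):
    """Unit Frobenius-norm Hermitian perturbation direction."""
    A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    U = (A + A.conj().T) / 2.0
    return U / np.linalg.norm(U)

delta, step, T = 0.1, 0.05, 2000               # perturbation radius, step size, iterations (assumed)
Y = [np.zeros((M, M), dtype=complex) for _ in range(K)]

# Initial perturbed observation, stored for reuse at the next iteration.
U = [random_hermitian(M) for _ in range(K)]
f_prev = sum_rate([covariance(Y[k] + delta * U[k]) for k in range(K)])

for _ in range(T):
    U = [random_hermitian(M) for _ in range(K)]
    f_curr = sum_rate([covariance(Y[k] + delta * U[k]) for k in range(K)])
    for k in range(K):
        # One-point estimate with reuse ("callback"): each user combines the single
        # broadcast scalar f_curr with its own perturbation, using the previous
        # observation f_prev instead of a second function evaluation per iteration.
        G_hat = ((f_curr - f_prev) / delta) * U[k]
        Y[k] = Y[k] + step * G_hat
    f_prev = f_curr

print("uniform-power sum rate:", sum_rate([covariance(np.zeros((M, M), dtype=complex)) for _ in range(K)]))
print("learned sum rate:      ", sum_rate([covariance(Y[k]) for k in range(K)]))
```

The key point mirrored from the abstract is the feedback requirement: per iteration, each user only needs the single scalar `f_curr`, and the variance reduction comes from differencing against the reused past observation rather than drawing a second sample.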
