Abstract

Learning-based resource allocation can be implemented in real time, but deep neural networks (DNNs) developed in other fields such as computer vision suffer from high training complexity and weak generalizability. Leveraging domain knowledge in communications is promising for learning wireless policies efficiently. In this paper, we propose a framework for integrating the Shannon formula with DNNs and derive a data rate-based DNN (DRNN) for learning resource allocation, taking power allocation as an example. The DRNN has an iterative structure with multiple update layers, each consisting of a pre-determined model function, an update network, and a dimension reduction network. To justify the iterative structure, we prove the existence of an iteration function that converges to the optimal policy for the update layer to learn. To justify the structure of each update layer, we provide the conditions under which the iteration function is a composite function of the model function. We further incorporate permutation equivariance (PE) properties into the DRNN. Simulation results show that the numbers of training samples and free parameters, as well as the training time needed to achieve a desired system performance, can be reduced remarkably by harnessing the data rate model and the PE prior.
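To make the described architecture concrete, the following is a minimal, hypothetical sketch of one DRNN-style update layer for power allocation: a pre-determined model function (per-user data rates from the Shannon formula under interference), followed by a toy update network and a dimension reduction network that map the current allocation to a refined one. All shapes, parameter names (`W_update`, `W_reduce`), and the specific network forms are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def shannon_rates(p, h, noise=1.0):
    # Pre-determined model function (assumed form): per-user data rates
    # via the Shannon formula, with cross-channel interference.
    signal = np.diag(h) * p              # desired received power per user
    interference = h @ p - signal + noise  # interference from other users + noise
    return np.log2(1.0 + signal / interference)

def update_layer(p, h, W_update, W_reduce):
    # One update layer: model function -> update network -> dimension reduction.
    r = shannon_rates(p, h)                            # model function output
    feat = np.tanh(W_update @ np.concatenate([p, r]))  # update network (toy MLP)
    dp = W_reduce @ feat                               # dimension reduction network
    return np.clip(p + dp, 0.0, 1.0)                   # keep powers feasible

K = 4                                    # hypothetical number of users
h = rng.uniform(0.1, 1.0, (K, K))        # random channel gain matrix
p = np.full(K, 0.5)                      # initial power allocation
W_update = rng.normal(0.0, 0.1, (8, 2 * K))
W_reduce = rng.normal(0.0, 0.1, (K, 8))
for _ in range(3):                       # iterative structure: stacked update layers
    p = update_layer(p, h, W_update, W_reduce)
print(p.shape)  # -> (4,)
```

In the paper's framework the weights would be trained rather than random, and the layer structure is justified by the existence of a convergent iteration function; this sketch only illustrates the layer composition.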
