Abstract

In this paper, we formulate a long-term resource allocation problem for the non-orthogonal multiple access (NOMA) downlink of a satellite-based Internet of Things (S-IoT) system, with the goal of obtaining the optimal decoding order and power allocation. This long-term resource allocation problem can be decomposed into two subproblems, i.e., a rate control subproblem and a power allocation subproblem. The latter is non-convex, and its solution depends on both the queue state and the channel state. However, both states continually change from one time slot to the next, which makes it extremely difficult to characterize the optimal decoding order of successive interference cancellation (SIC). Therefore, we exploit the weighted relationship between the queue state and the channel state to derive an optimal decoding order by leveraging deep learning. The proposed deep learning-based long-term power allocation (DL-PA) scheme derives a more accurate decoding order than the conventional solution. Simulation results show that DL-PA improves the performance of the S-IoT NOMA downlink system in terms of long-term network utility, average arrival rate, and queuing delay.
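To make the decoding-order question concrete, the following minimal sketch (not the authors' code) contrasts a purely channel-gain-based SIC ordering with one that also weighs queue backlog. The mixing weight `alpha` is a hypothetical stand-in for the queue/channel relationship that the paper learns with a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
num_ues = 4
channel_gain = rng.rayleigh(scale=1.0, size=num_ues)   # per-UE channel state
queue_backlog = rng.integers(1, 100, size=num_ues)     # per-UE queue state (packets)

# Conventional criterion: rank UEs by channel gain alone.
conventional_order = np.argsort(-channel_gain)

# Hypothetical weighted score mixing normalized queue and channel states;
# alpha is an assumed trade-off weight, whereas the paper learns this mapping.
alpha = 0.5
score = (alpha * (queue_backlog / queue_backlog.max())
         + (1 - alpha) * (channel_gain / channel_gain.max()))
queue_aware_order = np.argsort(-score)

print("channel-gain SIC order:", conventional_order)
print("queue-aware SIC order: ", queue_aware_order)
```

Because the two orderings generally differ whenever a weak-channel UE has a long backlog, a fixed channel-gain rule can be suboptimal for long-term, queue-dependent utility.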

Highlights

  • With the acceleration of the Internet of Everything (IoE), the demand for anywhere, anytime broadband access is becoming increasingly urgent [1]

  • Because our proposed long-term resource allocation problem is non-convex and lacks a model of the successive interference cancellation (SIC) decoding order, we adopt the universal function approximation property of deep neural networks and develop a deep learning-based approach to train a model of the SIC decoding order

  • Note that the focus of this paper is the optimal resource allocation scheme within a single non-orthogonal multiple access (NOMA) group, and we assume that all UEs are covered by the same spot beam


Summary

INTRODUCTION

With the acceleration of the Internet of Everything (IoE), the demand for anywhere, anytime broadband access is becoming increasingly urgent [1]. In [24] and [25], DL is introduced to learn optimal resource allocation policies in wireless communication systems. Similar to these works, our long-term resource allocation problem is non-convex and lacks a model of the SIC decoding order, so we adopt the universal function approximation property of deep neural networks and develop a deep learning-based approach to train the SIC decoding order model. It is worth noting that our long-term power allocation subproblem is non-convex, and its solution depends on the SIC decoding order determined by both the queue state and the channel state.
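As a rough illustration of the universal-function-approximation idea, the sketch below uses a small multilayer perceptron to map per-UE queue and channel states to priority scores whose sort yields a SIC decoding order. This is not the paper's DL-PA implementation: the layer sizes are illustrative, and the random weights stand in for parameters that would be trained on labeled decoding orders.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp_priority(features, weights):
    """Forward pass: (num_ues, 2) state features -> (num_ues,) priority scores."""
    h = relu(features @ weights["W1"] + weights["b1"])
    return (h @ weights["W2"] + weights["b2"]).ravel()

rng = np.random.default_rng(1)
num_ues, hidden = 4, 16
weights = {
    "W1": rng.normal(size=(2, hidden)) * 0.1, "b1": np.zeros(hidden),
    "W2": rng.normal(size=(hidden, 1)) * 0.1, "b2": np.zeros(1),
}

# Per-UE input: [normalized queue backlog, normalized channel gain].
features = rng.uniform(size=(num_ues, 2))
scores = mlp_priority(features, weights)
decoding_order = np.argsort(-scores)  # decode higher-priority UEs first
print("predicted SIC decoding order:", decoding_order)
```

Once trained, such a network can be evaluated in each time slot on the current queue and channel states, avoiding an exhaustive search over decoding-order permutations.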

SYSTEM MODEL
IMPROVE THE SIC DECODING ORDER VIA DEEP LEARNING METHOD
SIMULATION AND ANALYSIS
Findings
CONCLUSION
