Abstract

In this work, a deep learning (DL)-based detector for fifth-generation (5G) non-orthogonal multiple access (NOMA) is investigated over independent and identically distributed (i.i.d.) Nakagami-m fading channels. End-to-end system performance is compared between the DL NOMA detector and the conventional successive interference cancellation (SIC)-based NOMA detector, and the results show that the DL NOMA detector outperforms the conventional SIC NOMA detector. In our analysis, a long short-term memory (LSTM) recurrent neural network (RNN) is employed, and its results are compared with the performance of the minimum mean square error (MMSE) and least squares (LS) detectors under practical conditions such as multipath fading and nonlinear clipping distortion. It is shown that the bit error rate (BER) improves as the relay-to-destination (RD) channel gain increases, and also as the fading parameter m increases. The simulation curves demonstrate that when the clipping ratio (CR) is unity, the DL-based detector performs significantly better than the MMSE and LS detectors for signal-to-noise ratio (SNR) values greater than 15 dB, which indicates that the DL technique is more robust to nonlinear clipping distortion.
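The following is a minimal, hypothetical sketch (not the authors' implementation) illustrating the ingredients named in the abstract: two-user power-domain NOMA symbols sent over an i.i.d. Nakagami-m fading channel, distorted by nonlinear clipping at a given clipping ratio, and detected by a small LSTM network whose decisions can be compared against a BER target. All parameter values (fading parameter m, clipping ratio, power split, network size, SNR) are assumptions chosen for illustration only, and the single-hop model omits the relay-to-destination link considered in the paper.

```python
# Hypothetical sketch only: a toy LSTM detector for a two-user power-domain NOMA
# signal over Nakagami-m fading with amplitude clipping. Parameters are illustrative.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

def nakagami_fading(m, size):
    # |h| ~ Nakagami-m with unit average power: |h|^2 ~ Gamma(m, 1/m)
    return np.sqrt(rng.gamma(shape=m, scale=1.0 / m, size=size))

def clip_signal(x, cr):
    # Amplitude clipping at level cr * RMS(x), where cr is the clipping ratio (CR)
    a = cr * np.sqrt(np.mean(np.abs(x) ** 2))
    return np.clip(x, -a, a)

def make_dataset(n, m=2.0, cr=1.0, snr_db=15.0, p1=0.8, p2=0.2):
    # Superimposed BPSK symbols of two NOMA users (power split p1/p2 is assumed)
    b1 = rng.integers(0, 2, n)
    b2 = rng.integers(0, 2, n)
    s = np.sqrt(p1) * (2 * b1 - 1) + np.sqrt(p2) * (2 * b2 - 1)
    h = nakagami_fading(m, n)
    noise = rng.normal(0.0, np.sqrt(10 ** (-snr_db / 10)), n)
    y = clip_signal(h * s, cr) + noise
    # Feed (received sample, channel gain) pairs as length-1 sequences to the LSTM
    x = np.stack([y, h], axis=-1)[:, None, :]
    return x.astype("float32"), b1.astype("float32")

# Small LSTM detector that recovers the far user's bit from the received sample
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, 2)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x_tr, y_tr = make_dataset(50_000)
model.fit(x_tr, y_tr, epochs=3, batch_size=256, verbose=0)

x_te, y_te = make_dataset(10_000)
ber = np.mean((model.predict(x_te, verbose=0).ravel() > 0.5) != y_te)
print(f"Approximate BER of the sketch detector: {ber:.4f}")
```

In this toy setup, increasing the assumed fading parameter m or relaxing the clipping ratio would be expected to lower the measured BER, mirroring the trends reported in the abstract, but the numbers produced by this sketch are not the paper's results.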
