Abstract
Artificial intelligence (AI) is rapidly transforming industries such as healthcare, finance, and transportation. This paper presents a field-programmable gate array (FPGA)-based neural network accelerator (NNA) for power allocation in downlink non-orthogonal multiple access (NOMA) networks. The proposed hardware accelerator substantially reduces computational cost while achieving a sum capacity close to the optimum. Numerical results show that the NNA offers a computational speed increase of up to 99% compared with the conventional exhaustive search method. Furthermore, the deep learning (DL) model achieves high accuracy (0.92 training, 0.93 testing), and the hardware accelerator for this DL inference model was implemented on the PYNQ-Z2 board, a resource-constrained edge device, to predict power allocation coefficients in NOMA systems.
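The exhaustive-search baseline mentioned above can be illustrated with a minimal sketch of two-user downlink NOMA power allocation: a grid search over the power-split coefficient that maximizes the sum rate under the NOMA convention that the weak (far) user receives the larger power share. The channel gains, transmit power, noise level, and grid resolution below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative link parameters (assumed, not from the paper).
P, N0 = 10.0, 1.0          # total transmit power and noise power
g_near, g_far = 1.0, 0.09  # channel power gains: near (strong) and far (weak) user

def sum_rate(a_near):
    """Sum rate for power split a_near (strong user) vs. 1 - a_near (weak user)."""
    a_far = 1.0 - a_near
    # Strong user cancels the weak user's signal via SIC, then decodes its own.
    r_near = np.log2(1.0 + a_near * P * g_near / N0)
    # Weak user decodes directly, treating the strong user's signal as interference.
    r_far = np.log2(1.0 + a_far * P * g_far / (a_near * P * g_far + N0))
    return r_near + r_far

# Exhaustive grid search; a_near < 0.5 enforces the NOMA power ordering.
grid = np.linspace(0.01, 0.49, 200)
rates = np.array([sum_rate(a) for a in grid])
best_a = grid[rates.argmax()]
print(f"best a_near = {best_a:.3f}, sum rate = {rates.max():.3f} bit/s/Hz")
```

A trained DL model, as in the paper, would replace this per-instance search with a single inference pass that maps channel conditions directly to the power coefficients, which is where the reported speed-up over exhaustive search comes from.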