Abstract
With the rapid development of high-speed wireless communications, the 60 GHz millimeter-wave (mm-wave) frequency range has attracted extensive interest, and radio-over-fiber (RoF) systems have been widely investigated as a promising solution for delivering mm-wave signals. Neural networks have been proposed and studied to improve mm-wave RoF system performance at the receiver side by suppressing both linear and nonlinear impairments. However, previous studies of neural networks in mm-wave RoF systems have all relied on off-line processing with high-end GPUs or CPUs, which is impractical for applications requiring low power consumption, low cost, and limited computation resources. To solve this issue, in this paper we investigate, for the first time, neural network hardware accelerator implementations for mm-wave RoF systems using a field programmable gate array (FPGA), taking advantage of the low power consumption, parallel computation, and reconfigurability of FPGAs. Both convolutional neural network (CNN) and binary convolutional neural network (BCNN) hardware accelerators are demonstrated. In addition, to satisfy the low-latency requirement of mm-wave RoF systems and to enable the use of low-cost compact FPGA devices, a novel inner parallel computation optimization method for implementing CNNs and BCNNs on FPGA is proposed. Compared with the execution latency on a popular embedded processor (ARM Cortex-A9), the proposed FPGA-based hardware accelerator reduces the processing delay in mm-wave RoF systems by about 99.45% and 92.79% for CNN and BCNN, respectively. Compared with non-optimized FPGA implementations, the proposed inner parallel computation method reduces the processing latency by about 44.93% and 45.85% for CNN and BCNN, respectively.
In addition, compared with the GPU implementation, the CNN implementation with the proposed optimization method reduces latency by 85.49% and power consumption by 86.91%. Although the latency of the BCNN implementation with the proposed optimization method is higher than that of the GPU implementation, its power consumption is reduced by 86.14%. The demonstrated FPGA-based neural network hardware accelerators therefore provide a promising solution for mm-wave RoF systems.