Abstract
Although some interesting routing algorithms based on Hopfield Neural Networks (HNN) have already been proposed, they are slower than other routing algorithms. Since HNN are inherently parallel, they are well suited to parallel platforms such as Field Programmable Gate Arrays (FPGA) and Graphics Processing Units (GPU). In this chapter, the authors present parallel implementations of an HNN-based routing algorithm for GPU and for FPGA, considering several implementation issues. They analyze the hardware limitations of the devices, the memory bottlenecks, and the complexity of the HNN; in the case of the GPU implementation, how the kernel functions should be implemented; and, in the case of the FPGA implementation, the accuracy of the number representation and the memory storage on the device. The authors perform simulations of one variation of the routing algorithm on three communication network topologies with an increasing number of nodes. They achieved speed-ups of up to 78 when comparing the simulated FPGA model to the sequential CPU version, and the GPU version is 55 times faster than the sequential one. These new results suggest that it is possible to use HNN to implement routers for real networks, including optical networks.
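To make the parallelism the abstract refers to concrete, the following is a minimal sketch of a GPU kernel for one update step of standard Hopfield dynamics, with one thread per neuron. This is an illustrative assumption, not the authors' implementation: the kernel name `hnn_step`, the weight matrix `T`, the bias vector `I`, the sigmoid gain `lambda`, and the double-buffered outputs `v_in`/`v_out` are all hypothetical.

```cuda
// Illustrative sketch (not the chapter's actual code): one Euler step of
// the Hopfield dynamics du_i/dt = -u_i + sum_j T[i][j]*v[j] + I[i],
// computed with one GPU thread per neuron for an n-neuron network.
#include <cuda_runtime.h>
#include <math.h>

__global__ void hnn_step(const float *T, const float *I,
                         const float *v_in, float *u, float *v_out,
                         int n, float dt, float lambda)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Weighted sum over all neuron outputs: the O(n) memory traffic
    // per thread is the kind of bottleneck the abstract mentions.
    float sum = 0.0f;
    for (int j = 0; j < n; ++j)
        sum += T[i * n + j] * v_in[j];

    // Euler integration of the neuron's internal state.
    u[i] += dt * (-u[i] + sum + I[i]);

    // Sigmoid activation maps the state to an output in (0, 1).
    v_out[i] = 0.5f * (1.0f + tanhf(lambda * u[i]));
}
```

Reading from `v_in` while writing to `v_out` (swapping the buffers between kernel launches) avoids a race on the neuron outputs, so every thread in an iteration sees a consistent previous state.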