Abstract
Competitive majority network trained by error correction (C-Mantec), a recently proposed constructive neural network algorithm that generates very compact architectures with good generalization capabilities, is implemented in a field programmable gate array (FPGA). A clear difference from most existing neural network implementations (most of which are based on the backpropagation algorithm) is that C-Mantec automatically generates an adequate neural architecture while training on the data is performed. All the steps involved in the implementation, including the on-chip learning phase, are fully described, and a thorough analysis of the results is carried out using two sets of benchmark problems. The results show a clear increase in computation speed in comparison to a standard personal computer (PC)-based implementation, demonstrating the usefulness of the intrinsic parallelism of FPGAs for neurocomputational tasks and the suitability of the hardware version of the C-Mantec algorithm for application to real-world problems.
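To give a feel for the constructive, majority-vote scheme the abstract refers to, the sketch below shows a generic growing-while-training loop in Python. It is an illustrative assumption only, not the authors' C-Mantec algorithm or its FPGA implementation: the function names, the grow-when-no-progress rule, and the simplified update (a stand-in for the thermal-perceptron competition) are all hypothetical.

```python
# Illustrative sketch of a constructive, majority-vote training loop.
# NOT the authors' C-Mantec algorithm or FPGA design; rules simplified.
import numpy as np

rng = np.random.default_rng(0)

def predict(neurons, x):
    """Majority vote over a single layer of threshold neurons."""
    xb = np.append(x, 1.0)
    votes = [1 if w @ xb >= 0 else -1 for w in neurons]
    return 1 if sum(votes) >= 0 else -1

def train_constructive(X, y, max_neurons=10, epochs=100, lr=0.1):
    """Grow the network during training: start with one neuron and add
    another whenever the current ensemble stops improving (assumed rule)."""
    n_features = X.shape[1]
    neurons = [rng.normal(size=n_features + 1)]
    best_err = len(y)
    for _ in range(max_neurons):
        for _ in range(epochs):
            for x, t in zip(X, y):
                if predict(neurons, x) != t:
                    # Simplified error correction: update the neuron whose
                    # activation is closest to its threshold (a stand-in for
                    # the thermal-perceptron competition used by C-Mantec).
                    xb = np.append(x, 1.0)
                    k = int(np.argmin([abs(w @ xb) for w in neurons]))
                    neurons[k] += lr * t * xb
        err = sum(predict(neurons, x) != t for x, t in zip(X, y))
        if err == 0:
            break
        if err >= best_err:          # no progress: grow the architecture
            neurons.append(rng.normal(size=n_features + 1))
        best_err = min(best_err, err)
    return neurons

# Toy usage on a linearly separable problem (labels in {-1, +1}).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1, -1, -1, 1])
net = train_constructive(X, y)
print([predict(net, x) for x in X])
```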