Abstract

While Deep Neural Networks (DNNs) achieve state-of-the-art performance in many fields, e.g., object recognition, they typically rely on millions or even billions of parameters. Accelerating DNNs by reducing their number of parameters is crucial for real-time object recognition. This paper presents an evolutionary approach to evolve efficient DNNs that can run on Low-Performance Computing Hardware (LPCH) for real-time object recognition at the highest possible speed and with an accuracy of more than 95%. The approach achieves this through two design choices. First, NeuroEvolution of Augmenting Topologies (NEAT) is applied to evolve both the weights and the topology of DNNs starting from a simple initial topology, which reduces the number of parameters from millions to thousands. Second, we propose novel fitness functions that further select the evolved DNNs for lower computation time while maintaining high accuracy. We test the approach on the well-known MNIST benchmark and our own modular robots dataset. Furthermore, in contrast to most current studies, we not only evolve DNNs on these datasets but also deploy the best evolved DNN on LPCH to recognize objects in real time in the real world. The experimental results show that the best evolved DNN recognizes the modular robots on a microcomputer, a Raspberry Pi 3, with an accuracy of 95.6% and a speed of 5.3 fps. This work can be extended to obtain efficient DNNs for other real-time tasks. We have published the source code1 used to evolve the efficient DNNs and a video2 in which the best evolved DNN runs on a Raspberry Pi 3 to recognize two modular robots simultaneously in the real world.
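The abstract describes fitness functions that trade off computation time against accuracy. The paper's actual formulation is not given here; the following is a minimal, hypothetical sketch of such a speed-aware fitness function, assuming an evolved network object with a predict() method. The names (ACCURACY_TARGET, evaluate_fitness) and the weighting scheme are illustrative assumptions, not the authors' method.

```python
# Hypothetical speed-aware fitness sketch (not the paper's actual function).
# Idea: score on accuracy until the 95% target from the abstract is reached,
# then additionally reward lower per-image inference time.
import time

ACCURACY_TARGET = 0.95  # abstract requires more than 95% accuracy


def evaluate_fitness(net, images, labels):
    """Return a fitness score for one evolved network (higher is better)."""
    start = time.perf_counter()
    predictions = [net.predict(x) for x in images]  # assumed predict() API
    elapsed = time.perf_counter() - start

    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    time_per_image = elapsed / len(images)

    # Below the accuracy target, evolution optimizes accuracy alone; above it,
    # faster networks receive a higher score.
    if accuracy < ACCURACY_TARGET:
        return accuracy
    return accuracy + 1.0 / (1.0 + time_per_image)
```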
