Abstract

The computational complexity of leveraging deep neural networks for extracting deep feature representations is a significant barrier to their widespread adoption, and a particular bottleneck for embedded devices and applications such as self-driving cars. One promising strategy for addressing the complexity issue is evolutionary synthesis of deep neural networks, which has been demonstrated to produce highly efficient deep neural networks while retaining modeling performance. Here, we further extend the evolutionary synthesis strategy toward efficient feature extraction. A stress-induced evolutionary synthesis framework is proposed in which stress signals are imposed upon the synapses of a deep neural network during the training step. This induced stress steers the synthesis process towards producing more efficient deep neural networks over successive generations, improving model fidelity at greater efficiency. Applying stress during the training phase helps a network adapt itself to the changes that occur at the evolution step. The proposed stress-induced evolutionary synthesis approach is evaluated on a variety of deep neural network architectures (LeNet5, AlexNet, and YOLOv2) and tasks (object classification and object detection) to synthesize efficient StressedNets over multiple generations. Experimental results demonstrate the efficacy of the proposed framework in synthesizing StressedNets with significant improvements in network architecture efficiency (e.g., 40× for AlexNet and 33× for YOLOv2). Speed improvements achieved by the synthesized networks are also shown (e.g., 5.5× inference speed-up for YOLOv2 on an Nvidia Tegra X1 mobile processor).
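
The exact form of the stress signals is given in the full text; as a rough illustration only, the sketch below assumes stress is modeled as a random multiplicative mask applied to a network's weight tensors between training steps. The function name apply_synaptic_stress and the drop_prob parameter are hypothetical, not taken from the paper.

```python
# Illustrative sketch (PyTorch), assuming stress = random multiplicative
# masking of synaptic weights during training. Not the paper's exact method.
import torch
import torch.nn as nn

def apply_synaptic_stress(model: nn.Module, drop_prob: float = 0.05) -> None:
    """Impose stress by randomly zeroing a fraction of each weight tensor."""
    with torch.no_grad():
        for param in model.parameters():
            if param.dim() > 1:  # perturb weight matrices/kernels, skip biases
                mask = (torch.rand_like(param) > drop_prob).to(param.dtype)
                param.mul_(mask)  # in-place: stressed synapses are silenced

# Hypothetical usage inside a standard training loop:
# for inputs, targets in loader:
#     loss = criterion(model(inputs), targets)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
#     apply_synaptic_stress(model, drop_prob=0.05)
```

Under this reading, training under repeated perturbation would favor networks whose performance is robust to losing synapses, which is the property the evolution step then exploits when producing smaller offspring networks.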
