Abstract

Evolutionary Neural Architecture Search (ENAS) has achieved notable success over the past years. Unfortunately, a majority of existing ENAS algorithms require enormous computational resources to design architectures. To achieve efficient search and superior performance, this paper proposes a novel attention mechanism and a compact architecture backbone for the evolutionary search of neural networks. A ‘Group Whitening Residual Block’ (GRBlock) is designed to exploit the merits of the whitening operation while avoiding the drawbacks of mini-batch normalization, thus enhancing the robustness of network architectures. In particular, a novel and effective ‘Large Kernel Subspace Attention Mechanism’ (LKSAM) is proposed to enhance the efficiency and representational capacity of neural networks; it extracts complex interaction information between diverse channels with few parameters while achieving multi-scale and multi-frequency feature representation. Experimental results on widely used datasets demonstrate the superiority of our method in classification accuracy, architecture complexity, and computational overhead. Furthermore, the proposed algorithm is evaluated on a traffic sign recognition task, where it outperforms manually designed methods and other algorithms.
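The abstract gives only high-level descriptions of the two components. The following PyTorch sketch illustrates one plausible reading of them: every concrete choice (the number of subspaces, the kernel sizes, the squeeze-and-excitation style channel gate, and GroupNorm standing in for the paper's group whitening) is an illustrative assumption, not the authors' specification.

```python
# Minimal sketch of the two components named in the abstract, under
# stated assumptions; not the authors' implementation.
import torch
import torch.nn as nn


class LKSAM(nn.Module):
    """Hypothetical Large Kernel Subspace Attention: split channels into
    subspaces, apply a depthwise large-kernel conv of a different size to
    each (multi-scale, few parameters), then gate channels with a
    lightweight attention (assumed squeeze-and-excitation form)."""

    def __init__(self, channels, subspaces=4, kernels=(3, 7, 11, 15)):
        super().__init__()
        assert channels % subspaces == 0
        self.sub = channels // subspaces
        # One depthwise large-kernel conv per channel subspace.
        self.branches = nn.ModuleList(
            nn.Conv2d(self.sub, self.sub, k, padding=k // 2, groups=self.sub)
            for k in kernels[:subspaces]
        )
        # Lightweight channel gate over the concatenated subspaces.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        parts = torch.split(x, self.sub, dim=1)
        y = torch.cat([b(p) for b, p in zip(self.branches, parts)], dim=1)
        return y * self.gate(y)


class GRBlock(nn.Module):
    """Hypothetical Group Whitening Residual Block: GroupNorm is used here
    as a placeholder for the paper's group whitening (which additionally
    decorrelates features within groups); like whitening, it avoids
    dependence on mini-batch statistics."""

    def __init__(self, channels, groups=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GroupNorm(groups, channels),  # stand-in for group whitening
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GroupNorm(groups, channels),
            LKSAM(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


x = torch.randn(2, 64, 32, 32)
print(GRBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```

Splitting channels into subspaces before the large-kernel convolutions keeps the parameter count low while the differing kernel sizes provide the multi-scale receptive fields the abstract describes; such blocks would then serve as the compact backbone units over which the evolutionary search operates.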
