Non-learned operators provide stable feature representations, and their integration with deep learning models (i.e., non-learned operator based deep learning models) has become a common paradigm; however, the performance of such models remains unsatisfactory. In this paper, by revisiting non-learned operator based deep learning models, we identify the reasons for their underperformance: lack of geometric invariance, insufficient sparsity, and neglect of directional importance. In response, we present a Lightweight Directional-Aware Network (LDAN) for image classification. Specifically, to generate sparse, geometric-invariant features, we propose a ShearletNet that captures multi-directional features at three different levels. A Directional-Aware module is then designed to highlight discriminative multi-directional features and generate multi-scale features. Finally, a Pointwise Convolution module integrates the multi-directional features with the multi-scale ones to reduce computational cost. Experiments on the commonly used CIFAR10, CIFAR100, Self-Taught Learning 10 (STL10), and Tiny ImageNet datasets demonstrate the efficiency and effectiveness of the proposed LDAN. Compared to existing non-learned operator based models, LDAN reduces the parameter count by 80.83% while achieving a 6.32% increase in accuracy.
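The abstract does not specify the shearlet transform or the module internals, but the three-stage flow (fixed directional decomposition, directional-aware weighting, pointwise fusion) can be illustrated with a minimal NumPy sketch. All names, filter choices, and the softmax weighting rule below are assumptions for illustration only, not the paper's actual method; simple rotated Sobel-like kernels stand in for shearlet-style non-learned directional operators.

```python
import numpy as np

def directional_filters(n_dirs=4):
    """Hypothetical fixed (non-learned) oriented filters standing in
    for shearlet-style directional operators."""
    base = np.array([[-1.0, 0.0, 1.0],
                     [-2.0, 0.0, 2.0],
                     [-1.0, 0.0, 1.0]])  # Sobel-like vertical-edge kernel
    # Rotate the base kernel to obtain one filter per direction.
    return np.stack([np.rot90(base, k) for k in range(n_dirs)])

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2-D cross-correlation (sketch only)."""
    h, w = img.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def ldan_sketch(img, n_dirs=4):
    """Toy forward pass mirroring the three stages named in the abstract."""
    # Stage 1 (ShearletNet stand-in): fixed directional decomposition.
    feats = np.stack([conv2d_same(img, f) for f in directional_filters(n_dirs)])
    # Stage 2 (Directional-Aware stand-in): weight each direction by its
    # mean response magnitude via a softmax (hypothetical attention rule).
    energy = np.abs(feats).mean(axis=(1, 2))
    weights = np.exp(energy - energy.max())
    weights /= weights.sum()
    weighted = feats * weights[:, None, None]
    # Stage 3 (Pointwise Convolution stand-in): a 1x1 mixing across the
    # direction channels, here a fixed uniform average instead of learned.
    mix = np.full(n_dirs, 1.0 / n_dirs)
    return np.tensordot(mix, weighted, axes=1)  # shape: same as input image
```

The point of the sketch is the data flow: cheap fixed filters supply the directional features, a channel-wise weight re-ranks directions by importance, and a 1x1 mixing step fuses them without spatial computation, which is where the parameter savings would come from.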