Abstract

With the great success of deep neural networks, combining deep learning with traditional dictionary learning has become an active research topic. However, the performance of existing methods remains limited for several reasons. First, some methods update the dictionary and the classifier as two independent modules, which limits classification performance. Second, a single dictionary without attention is learned to represent all images, which reduces the representation flexibility of the model. In this paper, we design a novel end-to-end model named Multi-layer Attention Dictionary Pair Learning Network (MADPL-net), which integrates the learning schemes of a convolutional neural network, deep encoder learning, and attention dictionary pair learning (ADicL) into a unified framework. The encoder layer contains the ADicL block, which selects the more image-attentive atoms in the dictionary pair block via the softmax function to enhance the classification capability of MADPL-net. In addition, the ADicL scheme yields discriminative dictionary atoms and feature maps with high inter-class separation and high intra-class compactness. To improve sparse representation learning, MADPL-net adds an ℓ1-norm constraint on the analysis dictionary to the cross-entropy loss function. Extensive experiments show that MADPL-net achieves excellent performance compared with other state-of-the-art methods.
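To make the two mechanisms named in the abstract concrete, the following is a minimal, illustrative sketch of an attention dictionary pair block and of a loss combining cross-entropy with an ℓ1 penalty on the analysis dictionary. It is not the authors' implementation; the class and argument names (AttentionDicPairLayer, n_atoms, lambda_l1) and all dimensions are assumptions for illustration only.

```python
# Hedged sketch only: the real MADPL-net architecture and loss may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDicPairLayer(nn.Module):
    """Analysis dictionary P codes pooled CNN features, a softmax over the
    codes emphasizes image-attentive atoms, and a synthesis dictionary D
    reconstructs the features from the attended codes."""
    def __init__(self, feat_dim: int, n_atoms: int):
        super().__init__()
        self.P = nn.Parameter(torch.randn(n_atoms, feat_dim) * 0.01)  # analysis dictionary
        self.D = nn.Parameter(torch.randn(feat_dim, n_atoms) * 0.01)  # synthesis dictionary

    def forward(self, x: torch.Tensor):
        # x: (batch, feat_dim) pooled CNN features
        codes = x @ self.P.t()             # analysis coding: (batch, n_atoms)
        attn = F.softmax(codes, dim=-1)    # softmax selects image-attentive atoms
        attended = codes * attn            # re-weight codes toward attentive atoms
        recon = attended @ self.D.t()      # synthesis reconstruction: (batch, feat_dim)
        return attended, recon

def madpl_style_loss(logits, targets, layer: AttentionDicPairLayer,
                     lambda_l1: float = 1e-4):
    """Cross-entropy classification loss plus an l1-norm penalty on the
    analysis dictionary, mirroring the constraint described in the abstract."""
    ce = F.cross_entropy(logits, targets)
    l1 = layer.P.abs().sum()
    return ce + lambda_l1 * l1
```

In this sketch the attended codes would feed the classifier head, so the dictionary pair and the classifier are trained jointly end-to-end rather than as two independent modules.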
