Abstract

In recent years, convolutional neural networks (CNNs) have played a vital role in hyperspectral image classification and perform more competitively than many other methods. However, to pursue better performance, most existing CNN-based methods simply stack deep convolutional layers. Although this improves classification accuracy to a certain extent, it results in a large number of network parameters. In this paper, a lightweight directionally separable dilated CNN with hierarchical attention feature fusion (DSD-HAFF) is proposed to solve these problems. First, two global dense dilated CNN branches, each focusing on one spatial direction, are constructed to extract and reuse as much spatial information as possible. Second, a hierarchical attention feature fusion branch consisting of several coordinate attention blocks (CABs) is constructed. Hierarchical features from the two directionally separable dilated CNN branches are adopted as inputs to the CABs. In this way, the structure not only fully incorporates hierarchical features but also significantly reduces the number of network parameters. Meanwhile, the hierarchical attention feature fusion branch fuses features from high level to low level following a kernel-number pyramid strategy. Experimental results on three popular benchmark datasets demonstrate that DSD-HAFF achieves better performance with far fewer network parameters than other state-of-the-art methods.
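The coordinate attention idea underlying the CABs can be sketched roughly as follows: the feature map is pooled along each spatial direction separately, each pooled descriptor is passed through a learned transform and a sigmoid gate, and the resulting direction-aware attention maps re-weight the input. This is a minimal NumPy illustration only; the single weight matrices stand in for the learned 1×1 convolutions of a real block, and all names here are hypothetical rather than taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x, w_h, w_w):
    """Hypothetical sketch of a coordinate attention block.

    x: feature map of shape (C, H, W).
    w_h, w_w: illustrative (C, C) weight matrices standing in for the
    learned 1x1 convolutions of an actual implementation.
    """
    # Direction-aware pooling: average along width, then along height.
    pooled_h = x.mean(axis=2)          # (C, H) -- encodes vertical position
    pooled_w = x.mean(axis=1)          # (C, W) -- encodes horizontal position
    # Per-direction attention maps via a learned transform + sigmoid gate.
    attn_h = sigmoid(w_h @ pooled_h)   # (C, H)
    attn_w = sigmoid(w_w @ pooled_w)   # (C, W)
    # Re-weight the input with both directional attention maps.
    return x * attn_h[:, :, None] * attn_w[:, None, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))   # toy feature map: 4 channels, 8x8 spatial
w = rng.standard_normal((4, 4)) * 0.1
out = coordinate_attention(x, w, w)
print(out.shape)
```

Because each attention map is one-dimensional per direction, the block adds far fewer parameters than a full 2-D attention map of size H×W, which is consistent with the parameter-saving motivation stated above.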
