Abstract

Remote-sensing (RS) scene classification plays an important role in many practical applications. However, traditional methods based on deep convolutional neural networks (DCNNs) face several difficulties under data shift: novel classes, varied orientations, and large intraclass variations in RS scene images. In this letter, we propose rotation-invariant and discriminative-learning prototypical networks (RDPNs) for RS scene classification. RDPN uses A-ORConv32 basic blocks and attention mechanisms to obtain rotation-invariant and discriminative features. In addition, an adaptive cosine center loss is proposed to constrain the features, mitigating the large intraclass variations and adaptively penalizing hard samples. We conduct extensive experiments on publicly available datasets and achieve 1.69%–19.38% higher accuracy than existing methods. The experimental results verify that the proposed RDPN handles data shift problems well in RS scene classification.
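To illustrate the idea behind the adaptive cosine center loss described above, here is a minimal sketch: features are pulled toward their class centers in cosine distance, and harder samples (those farther from their center) receive larger weights. The function name, the `gamma` focusing exponent, and the exact weighting scheme are assumptions for illustration; the letter's actual formulation may differ.

```python
import numpy as np

def cosine_center_loss(features, labels, centers, gamma=2.0):
    """Sketch of a cosine center loss with adaptive hard-sample weighting.

    features: (N, D) array of embeddings
    labels:   (N,) integer class labels
    centers:  (C, D) array of learnable per-class centers
    gamma:    hypothetical focusing exponent for hard samples
    """
    # Normalize each feature and its own-class center to unit length.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = centers[labels]
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    # Cosine distance of each sample to its class center.
    cos_sim = np.sum(f * c, axis=1)
    dist = 1.0 - cos_sim
    # Adaptive weights: samples far from their center (hard samples)
    # contribute more to the loss.
    weights = dist ** gamma
    return float(np.mean(weights * dist))
```

Features perfectly aligned with their class center incur zero loss, while misaligned (hard) samples are penalized superlinearly through the `gamma`-weighted term.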
