Abstract

Remote-sensing scene classification has recently become an essential research topic, and scholars have proposed various few-shot methods to achieve strong performance with few labeled samples. Most prior work relies on a meta-learning strategy, whose performance suffers when training data are scarce. In this letter, we instead apply pre-trained feature extractors for image embedding. To mitigate the negative-transfer problem caused by the inadaptability of a single pre-trained feature extractor to remote-sensing data, we exploit two pre-trained models that classify the remote-sensing scene independently, and then fuse their decisions to obtain the final category. We design a decision attention module that automatically updates the combination weight of each decision; it comprehensively accounts for the contribution of each decision and further improves the discrimination of the features. Comprehensive experiments validate the method, which achieves state-of-the-art performance on two benchmark remote-sensing scene datasets, NWPU-RESISC45 and UC Merced.
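The decision-fusion idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two branches are represented only by their class scores, and the decision attention module is assumed here to be a learnable weight vector normalized by a softmax so the combination weights sum to one. Function names (`fuse_decisions`) and the exact parameterization are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_decisions(logits_a, logits_b, w):
    """Fuse the class scores of two pre-trained branches.

    logits_a, logits_b: (n_classes,) raw scores from the two branches.
    w: (2,) learnable attention parameters (hypothetical form of the
       decision attention module); softmax(w) gives one weight per decision.
    Returns the predicted class index and the fused probability vector.
    """
    probs = np.stack([softmax(logits_a), softmax(logits_b)])  # (2, n_classes)
    alpha = softmax(w)          # combination weights, sum to 1
    fused = alpha @ probs       # attention-weighted sum of the two decisions
    return int(np.argmax(fused)), fused

# Example: two branches that agree on class 0, fused with equal weights.
pred, fused = fuse_decisions(np.array([2.0, 0.5, 0.1]),
                             np.array([1.5, 0.2, 0.0]),
                             np.array([0.0, 0.0]))
```

In training, `w` would be updated by backpropagation together with the classifier heads, letting the model learn how much to trust each branch's decision.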
