Abstract

Recently, remote sensing image scene classification has been widely applied in many industries. As a result, several remote sensing image scene classification frameworks have been proposed; in particular, those based on deep convolutional neural networks have received considerable attention. However, most of these methods have limited performance when analyzing images with large intraclass variations. To overcome this limitation, this letter presents a marginal center loss with an adaptive margin. The marginal center loss separates hard samples and enhances their contributions so as to minimize the variations among features of the same class. Experimental results on public remote sensing image scene data sets demonstrate the effectiveness of our method: after the model is trained with the marginal center loss, the variations among features of the same class are reduced. Furthermore, a comparison with state-of-the-art methods shows that our model achieves competitive performance in remote sensing image scene classification.
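
The abstract does not give the exact formulation, but the description suggests a center-loss-style objective in which hard samples, those lying farther from their class center than a margin, drive the penalty. The PyTorch sketch below is a minimal illustration under that assumption; the class name MarginalCenterLoss, the fixed margin hyperparameter, and the distance-based definition of hard samples are assumptions for illustration, not the letter's actual method (in particular, the adaptive margin is not modeled here).

import torch
import torch.nn as nn


class MarginalCenterLoss(nn.Module):
    """Sketch of a center loss with a margin (hypothetical formulation).

    Samples whose squared distance to their learnable class center exceeds
    the margin are treated as hard samples and contribute to the loss;
    samples inside the margin contribute nothing.
    """

    def __init__(self, num_classes, feat_dim, margin=0.5):
        super().__init__()
        self.margin = margin
        # one learnable center per class
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # squared Euclidean distance of each feature to its class center
        centers_batch = self.centers[labels]                     # (B, feat_dim)
        dist_sq = (features - centers_batch).pow(2).sum(dim=1)   # (B,)
        # only hard samples (beyond the margin) are penalized
        hard = torch.clamp(dist_sq - self.margin, min=0.0)
        return hard.mean()


# Typical usage: combined with the classification loss,
# total_loss = ce_loss + lambda_c * marginal_center_loss(features, labels)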
