Abstract

Remote sensing image scene classification aims to assign semantic labels to images according to their content. Convolutional Neural Networks (CNNs) are often used to extract deep, discriminative features of remote sensing images for classification. In practice, CNNs are usually trained on images in the Red-Green-Blue (RGB) color space, but they can also be trained on images in other color spaces, e.g., Hue-Saturation-Value (HSV). CNN models trained on images in different color spaces perform differently because each color space emphasizes different color information. We therefore present an Evidential Combination method with Multi-color Spaces (ECMS) that integrates the complementary information of different color spaces to improve classification performance. In ECMS, labeled remote sensing images in the RGB color space are first converted into other color spaces, and a CNN model is trained on each. The soft classification results that these CNN models yield for query images are then combined by evidence theory. Because the outputs of the different CNN models usually differ in reliability, they should not be treated equally during fusion; instead, their weights are learned by minimizing the mean squared error between the combination results and the ground truth on labeled images. The weighted evidence combination of the soft classification results is then used to make the scene class decision. We conducted experiments on several datasets to verify the effectiveness of ECMS, and the results show that it significantly improves classification accuracy compared with many existing methods.
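The weight-learning step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the soft outputs of two CNNs (here, hypothetical models trained on RGB and HSV images) are stand-in arrays, and a simple weighted averaging of probability vectors stands in for the full evidence-theory combination; only the MSE-based weight fitting on labeled images follows the abstract directly.

```python
import numpy as np

# Hypothetical soft classification outputs from two CNNs trained on
# different color spaces (RGB and HSV). In ECMS these would come from
# real trained models; here they are illustrative placeholders.
# Rows: labeled images, columns: class probabilities.
p_rgb = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2]])
p_hsv = np.array([[0.6, 0.3, 0.1],
                  [0.1, 0.7, 0.2]])
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])  # one-hot ground truth

def fit_weights(preds, y, lr=0.5, steps=500):
    """Learn non-negative fusion weights (summing to 1) by minimizing
    the mean squared error between the weighted combination and the
    ground truth on labeled images, as in the ECMS weight-learning step."""
    w = np.full(len(preds), 1.0 / len(preds))  # start from equal weights
    for _ in range(steps):
        comb = sum(wi * p for wi, p in zip(w, preds))
        # Gradient of the MSE with respect to each weight.
        grad = np.array([np.mean(2.0 * (comb - y) * p) for p in preds])
        w = np.clip(w - lr * grad, 0.0, None)  # keep weights non-negative
        w /= w.sum()                           # renormalize to sum to 1
    return w

w = fit_weights([p_rgb, p_hsv], y_true)
fused = w[0] * p_rgb + w[1] * p_hsv   # weighted combination of soft outputs
labels = fused.argmax(axis=1)         # final scene class decision
```

In the actual method the fusion step uses evidence theory (e.g., a discounted Dempster-Shafer combination of the soft outputs) rather than plain averaging, but the weights play the same role: down-weighting the less reliable color-space models before combination.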
