Abstract

Learning discriminative and robust features is crucial in remote sensing image processing. Many current approaches are based on Convolutional Neural Networks (CNNs), but such approaches may not effectively capture the varied semantic objects in remote sensing images. To overcome this limitation, we propose a novel end-to-end deep multi-feature fusion network (DMFN). DMFN combines two deep architecture branches for feature representation: a global branch and a local branch. The global branch, trained with three losses, learns discriminative features from the whole image. The local branch partitions the image into multiple strips to obtain local features. The two branches are then combined to learn fused feature representations for the image. The whole framework is trained end to end. Comprehensive validation experiments on two public datasets show that the proposed strategy outperforms existing deep learning approaches on both retrieval and classification tasks.
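The global/local two-branch idea described above can be sketched in a few lines. This is only an illustrative NumPy sketch, not the authors' DMFN implementation: the global branch is approximated by global average pooling of a backbone feature map, the local branch by average pooling over horizontal strips, and fusion by simple concatenation; the strip count and pooling choices are assumptions for illustration.

```python
import numpy as np

def global_branch(feat):
    # Global descriptor: average-pool the whole feature map, (C, H, W) -> (C,)
    return feat.mean(axis=(1, 2))

def local_branch(feat, num_strips=4):
    # Local descriptors: partition the map into horizontal strips along H,
    # pool each strip, and concatenate: (C, H, W) -> (num_strips * C,)
    strips = np.array_split(feat, num_strips, axis=1)
    return np.concatenate([s.mean(axis=(1, 2)) for s in strips])

def fuse(feat, num_strips=4):
    # Fused representation: concatenate global and local descriptors.
    return np.concatenate([global_branch(feat), local_branch(feat, num_strips)])

# Example: a hypothetical 256-channel, 16x16 CNN backbone output.
feat = np.random.rand(256, 16, 16)
desc = fuse(feat)
print(desc.shape)  # (1280,) = 256 global + 4 * 256 local
```

In the actual DMFN, each branch is trained with its own losses end to end rather than pooled post hoc, but the sketch shows how a global view and strip-level local views combine into one descriptor.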
