Abstract

The cascade of convolution layers and the end-to-end training process facilitate CNN feature extraction and transmission, and underpin the success of CNNs in image processing. However, their heavy reliance on large-scale, high-quality training samples restricts their applications. To avoid costly and unrealistic manual annotation of large-scale remote sensing images, existing land cover maps are considered as an alternative to manual annotations, in which noisy labels are inevitable. To alleviate the impact of noisy labels, this paper proposes to improve the consistency feature learning ability of CNNs as a feasible solution for practical land cover mapping. First, an intra-class feature consistency constraint is introduced to keep CNN feature maps of the same class consistent. Then, an inter-iteration feature consistency constraint is employed to guide the network toward features that are consistent with the whole underlying distribution within a mini-batch. These two feature consistency constraints work in a cooperative and complementary manner with the traditional cross-entropy loss, and together improve the consistency feature learning ability of the proposed Feature Consistency Network (FCNet). Experimental results demonstrate the effectiveness of the proposed FCNet, and extensive experiments on different network structures validate the generalization of the proposed feature consistency constraints.
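The abstract does not give the exact form of the two constraints, so the following is only a minimal sketch of how an intra-class consistency term and an inter-iteration consistency term might be combined with cross-entropy for per-pixel land cover classification. The function names, the class-mean formulation of the intra-class term, the momentum-updated estimate used for the inter-iteration term, and the weights `lambda_intra` and `lambda_inter` are all illustrative assumptions, not the authors' FCNet implementation.

```python
# Hedged sketch (not the authors' implementation): combining cross-entropy
# with two assumed feature-consistency penalties. Features are assumed to be
# per-pixel maps of shape (B, C, H, W) and labels integer maps of shape (B, H, W).
import torch
import torch.nn.functional as F


def intra_class_consistency(features, labels, num_classes):
    """Penalize the spread of features around their per-class mean inside a
    mini-batch (one hypothetical form of an intra-class consistency constraint)."""
    b, c, h, w = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, c)  # (B*H*W, C)
    labs = labels.reshape(-1)                            # (B*H*W,)
    loss = feats.new_zeros(())
    for k in range(num_classes):
        mask = labs == k
        if mask.sum() < 2:
            continue
        class_feats = feats[mask]
        center = class_feats.mean(dim=0, keepdim=True)
        loss = loss + ((class_feats - center) ** 2).mean()
    return loss / num_classes


def inter_iteration_consistency(features, running_mean, momentum=0.9):
    """Keep the current batch's mean feature close to a momentum-updated
    estimate carried across training iterations (one hypothetical form of an
    inter-iteration consistency constraint)."""
    batch_mean = features.mean(dim=(0, 2, 3))            # (C,)
    loss = F.mse_loss(batch_mean, running_mean)
    new_mean = momentum * running_mean + (1 - momentum) * batch_mean.detach()
    return loss, new_mean


def total_loss(logits, features, labels, running_mean, num_classes,
               lambda_intra=0.1, lambda_inter=0.1):
    """Cross-entropy plus the two (assumed) consistency terms."""
    ce = F.cross_entropy(logits, labels)
    l_intra = intra_class_consistency(features, labels, num_classes)
    l_inter, running_mean = inter_iteration_consistency(features, running_mean)
    return ce + lambda_intra * l_intra + lambda_inter * l_inter, running_mean
```

Under this reading, the two penalties complement cross-entropy rather than replace it: cross-entropy fits the (possibly noisy) labels, while the consistency terms regularize the feature space so that mislabeled pixels have less influence on the learned representation.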
