Abstract

Self-supervised contrastive learning (CL) can learn high-quality feature representations that benefit downstream tasks without labeled data. However, most CL methods target image-level tasks. Fine-grained change detection (FCD) tasks, such as detecting changes or change trends of specific ground objects, usually require pixel-level discriminative analysis. Feature representations learned by image-level CL may therefore have limited effect on FCD. To address this problem, we propose a self-supervised global–local contrastive learning (GLCL) framework that extends the instance discrimination task to the pixel level. GLCL follows the current mainstream CL paradigm and consists of four parts: data augmentation to generate different views of the input, an encoder network for feature extraction, a global CL head, and a local CL head, where the two heads perform image-level and pixel-level instance discrimination tasks, respectively. Through GLCL, features belonging to different views of the same instance are pulled closer, while features of different instances are pushed apart, which enhances the discriminability of feature representations from both global and local perspectives and thereby facilitates downstream FCD tasks. In addition, GLCL makes a targeted structural adaptation to FCD: the encoder network is built on backbone networks commonly used in FCD, which accelerates deployment on downstream FCD tasks. Experimental results on several real datasets show that, compared with other parameter initialization methods, FCD models pretrained with GLCL obtain better detection performance.
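The abstract does not specify the exact loss functions, but the described global/local instance discrimination is commonly realized with an InfoNCE-style contrastive loss applied once to pooled image-level features and once to per-pixel features. The sketch below illustrates that general idea with NumPy; the feature maps, temperature value, and the choice of mean pooling are illustrative assumptions, not details from the paper.

```python
import numpy as np

def info_nce(a, b, temperature=0.1):
    """InfoNCE loss: row i of `a` should match row i of `b` (the positive pair);
    all other rows act as negatives."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)   # L2-normalize features
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                     # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # positives sit on the diagonal

rng = np.random.default_rng(0)
B, C, H, W = 4, 8, 2, 2  # batch, channels, spatial size (toy dimensions)

# Feature maps of two augmented views from a shared encoder.
# Random stand-ins here; in practice these come from the FCD backbone.
f1 = rng.normal(size=(B, C, H, W))
f2 = f1 + 0.05 * rng.normal(size=(B, C, H, W))  # second view: first + small noise

# Global CL head: image-level instance discrimination on pooled features.
g1, g2 = f1.mean(axis=(2, 3)), f2.mean(axis=(2, 3))  # (B, C)
global_loss = info_nce(g1, g2)

# Local CL head: pixel-level instance discrimination. Positives are the same
# spatial location across the two views; every other pixel is a negative.
p1 = f1.transpose(0, 2, 3, 1).reshape(-1, C)  # (B*H*W, C)
p2 = f2.transpose(0, 2, 3, 1).reshape(-1, C)
local_loss = info_nce(p1, p2)

total_loss = global_loss + local_loss  # joint objective, both terms weighted equally here
```

Because the second view is a lightly perturbed copy of the first, both losses are small but nonzero; with real augmentations (crops, color jitter, etc.) the two views differ far more and the encoder must learn invariant representations to minimize the objective.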
