Abstract

Self-supervised learning is an active research area that has seen great advances recently. It aims to overcome the bottleneck of supervised deep learning frameworks, which rely on hand-crafted labels to achieve good results. Several state-of-the-art self-supervised algorithms have been proposed recently, most of them designed to learn from ImageNet. Because ImageNet is an object-centric dataset, little attention was paid in the design of pretext tasks to complex scene-understanding tasks such as multi-label classification or object detection. In this work, we investigate whether applying mix-based augmentations during self-supervised pretraining leads to better transferability to the complex downstream task of multi-label classification. SimSiam, a non-contrastive self-supervised algorithm, is used for pretraining along with three proposed variations of mix-based augmentations. Multi-label classification on Pascal VOC, a non-object-centric dataset, is selected to evaluate the learned features. SimSiam with the UnMix augmentation technique achieved the best performance among all our experiments, exceeding the baseline's Mean Average Precision (mAP) by +3.7. We conclude that lowering the mixing probability and adding the mixed image pairs as extra terms in the loss function is more effective than replacing the original images entirely. Our results confirm that more self-supervised research targeting multi-object-centric datasets is needed, and show that applying mix-based augmentations to an existing self-supervised algorithm improves downstream performance on such datasets.

Keywords: Self-Supervised Learning, Non-Contrastive Learning, UnMix, SimSiam, Mix-Based Augmentations

DOI: https://doi.org/10.35741/issn.0258-2724.58.1.66
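The abstract's conclusion — mixing images with a low probability and adding the mixed pair as an extra loss term rather than replacing the originals — can be sketched as follows. This is a minimal, hypothetical NumPy illustration of mixup-style augmentation combined with SimSiam's negative-cosine-similarity loss; the function names, the `mix_prob` parameter, and the way the terms are combined are our assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def mixup(batch, alpha=1.0, rng=None):
    """Mix each image with a randomly permuted partner from the same
    batch (mixup-style, as in UnMix). Returns the mixed batch and the
    mixing coefficient lam drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(batch))
    return lam * batch + (1.0 - lam) * batch[perm], lam

def neg_cosine(p, z):
    """SimSiam's negative cosine similarity between predictions p and
    (stop-gradient) projections z; -1 means perfectly aligned."""
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -np.mean(np.sum(p * z, axis=1))

def simsiam_mix_loss(p1, z2, p_mix, z_mix, mix_prob=0.25, rng=None):
    """Hypothetical combination: keep the original SimSiam loss and,
    with (low) probability mix_prob, ADD a term on the mixed pair
    instead of replacing the original views."""
    rng = rng or np.random.default_rng(0)
    loss = neg_cosine(p1, z2)
    if rng.random() < mix_prob:
        loss = loss + neg_cosine(p_mix, z_mix)
    return loss
```

Keeping the original pair in the loss and gating the mixed term by `mix_prob` reflects the paper's finding that additive mixed terms with a lowered mixing probability outperform outright replacement of the original images.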
