Abstract

In this paper, we propose a complementary self-supervised masking model based on a teacher-student network. The model consists of a student network, a teacher network, and a mask prediction module. The student network is an encoder, while the teacher network comprises an encoder and a decoder; the two encoders learn image representations and share the same architecture and parameters. Pre-training involves two pretext tasks. First, the masked-patch representations produced by the teacher's decoder are passed through the mask prediction module to reconstruct the original image pixels. Second, a contrastive loss compares the outputs of the teacher and student networks in representation space. To narrow the mismatch between pre-training and downstream tasks in masked image modeling (MIM), we further propose a complementary masking mechanism: the same complete image is fed to both networks, the teacher's input is randomly masked (e.g., 75%), and the student's input masks the complementary portion (the remaining 25%). The proposed model is pre-trained on COCO and other datasets, and downstream tasks are evaluated on four standard benchmark datasets. Comparisons with several recent self-supervised pre-trained models show that the proposed model learns better representations.
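The complementary masking mechanism can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it splits the patch indices of an image into a random teacher mask (e.g., 75%) and the complementary student mask (the remaining 25%); the patch count, mask ratio, and function name are illustrative assumptions.

```python
import numpy as np

def complementary_masks(num_patches: int, teacher_ratio: float = 0.75, seed: int = 0):
    """Split patch indices into a random teacher mask and its complement.

    Illustrative sketch only: the teacher masks ~teacher_ratio of the patches
    (e.g., 75%), while the student masks the complementary set, so the two
    views together cover the whole image.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_patches)
    n_teacher_masked = int(num_patches * teacher_ratio)

    teacher_masked = np.sort(perm[:n_teacher_masked])   # patches hidden from the teacher (75%)
    student_masked = np.sort(perm[n_teacher_masked:])   # patches hidden from the student (25%)
    return teacher_masked, student_masked

# Example: a 224x224 image split into 16x16 patches -> 14 * 14 = 196 patches.
teacher_mask, student_mask = complementary_masks(num_patches=196, teacher_ratio=0.75)
assert len(np.intersect1d(teacher_mask, student_mask)) == 0   # masks are disjoint
assert len(teacher_mask) + len(student_mask) == 196           # masks cover every patch
```

Because the two masks are disjoint and jointly exhaustive, every patch is reconstructed or contrasted by at least one branch, which is the intuition behind reducing the pre-training/downstream gap described above.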
