Abstract

Sufficient labeled training data may not be available for pedestrian detection in many real-world scenes. Semi-supervised settings naturally apply to the case where an adequate number of images are collected in a target scene but only a small proportion of them can be manually annotated. A common strategy is to adopt a detector trained on a well-established dataset (source data) or on the limited annotated data to pseudo-annotate the unannotated images. However, the domain gap and the lack of supervision in the target scene may lead to low-quality pseudo annotations. In this paper, we propose a Scene-adaptive Pseudo Annotation (SaPA) approach, which aims to exploit two types of training data: source data providing sufficient supervision and unannotated target data offering domain-specific information. To utilize the source data, an Annotation Network (AnnNet) competes with a domain discriminator to learn domain-invariant features. To exploit the unannotated data, we temporally aggregate the parameters of AnnNet to build a more robust network, which provides training targets for AnnNet. This new approach improves the generalization performance of AnnNet, which ultimately yields high-quality pseudo annotations for the unannotated data. Both manual and pseudo annotations are leveraged to train a more precise and scene-specific detector. We perform extensive experiments on multiple benchmarks to verify the effectiveness and superiority of SaPA.
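
The two mechanisms sketched in the abstract, adversarial learning of domain-invariant features against a domain discriminator and a temporally aggregated (EMA) copy of AnnNet that supplies targets on unannotated target data, can be illustrated with the minimal PyTorch sketch below. It is not the paper's implementation: the toy network sizes, the gradient-reversal formulation, the consistency loss, and hyper-parameters such as `decay` and `lambd` are assumptions chosen for illustration only.

```python
# Minimal sketch (assumed, not the paper's code) of (1) adversarial domain-invariant
# feature learning via gradient reversal and (2) temporal (EMA) parameter aggregation
# of AnnNet to obtain a more robust target-providing network.
import copy
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Toy stand-ins for AnnNet's feature extractor and the domain discriminator.
feature_extractor = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
domain_discriminator = nn.Sequential(nn.Linear(64, 1))  # source vs. target

# Temporally aggregated copy of AnnNet, updated by EMA instead of back-propagation.
teacher = copy.deepcopy(feature_extractor)
for p in teacher.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """theta_teacher <- decay * theta_teacher + (1 - decay) * theta_student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

# One illustrative training step on fake batches.
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(
    list(feature_extractor.parameters()) + list(domain_discriminator.parameters()),
    lr=0.01,
)

source_x = torch.randn(8, 128)  # annotated source-domain inputs (toy)
target_x = torch.randn(8, 128)  # unannotated target-domain inputs (toy)

feats = feature_extractor(torch.cat([source_x, target_x]))
domain_logits = domain_discriminator(grad_reverse(feats, lambd=1.0))
domain_labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])

# Adversarial loss: the discriminator separates domains while the reversed gradient
# pushes the feature extractor toward domain-invariant features.
loss_adv = bce(domain_logits, domain_labels)

# Consistency loss: the EMA teacher provides training targets on target-domain data.
with torch.no_grad():
    teacher_feats = teacher(target_x)
loss_cons = nn.functional.mse_loss(feature_extractor(target_x), teacher_feats)

loss = loss_adv + loss_cons
opt.zero_grad()
loss.backward()
opt.step()
ema_update(teacher, feature_extractor)
```

In this sketch the EMA teacher changes slowly, so its outputs act as stable targets for the student on unannotated target data, while the gradient-reversal layer turns the discriminator's objective into a domain-confusion signal for the shared features.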
