Abstract

Existing multi-view object classification algorithms usually rely on abundant labeled multi-view objects, which substantially restricts their scalability to novel classes with few annotated training samples in real-world applications. To move beyond this limitation, we explore a novel yet challenging task, few-shot multi-view object classification (FS-MVOC), which requires a network to build its classification ability efficiently from only a limited number of labeled multi-view objects. To this end, we design a dual augmentation network (DANet) for the under-explored FS-MVOC task. On the one hand, we employ an attention-guided multi-view representation augmentation (AMRA) strategy that helps the model focus on salient features and suppress uninformative ones across the multiple views of each object, yielding more discriminative multi-view representations. On the other hand, during the meta-training stage, we adopt a category prototype augmentation (CPA) strategy that improves the class-representativeness of each prototype and increases the inter-prototype difference by injecting Gaussian noise in the deep feature space. Extensive experiments on the benchmark datasets Meta-ModelNet and Meta-ShapeNet demonstrate the effectiveness and robustness of DANet.
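
To make the two strategies more concrete, the sketch below illustrates one possible reading of them in PyTorch: an attention-weighted pooling over per-view features (in the spirit of AMRA) and a Gaussian-noise perturbation of class prototypes in feature space (in the spirit of CPA). All names (ViewAttentionPooling, augment_prototypes, num_aug, sigma) and tensor shapes are assumptions for illustration only; this is a minimal sketch, not the authors' DANet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewAttentionPooling(nn.Module):
    """Hypothetical stand-in for AMRA-style pooling: scores each view's
    feature vector and fuses the views into one object representation."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 4, 1),
        )

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (batch, num_views, feat_dim)
        weights = F.softmax(self.score(view_feats), dim=1)  # (B, V, 1), softmax over views
        return (weights * view_feats).sum(dim=1)            # (B, feat_dim)


def augment_prototypes(prototypes: torch.Tensor,
                       num_aug: int = 4,
                       sigma: float = 0.1) -> torch.Tensor:
    """Sketch of a CPA-like step: create extra Gaussian-perturbed copies of
    each class prototype in feature space for use during meta-training.
    Returns a tensor of shape (num_aug + 1, num_classes, feat_dim)."""
    noise = sigma * torch.randn(num_aug, *prototypes.shape)   # (A, C, D)
    augmented = prototypes.unsqueeze(0) + noise               # (A, C, D)
    return torch.cat([prototypes.unsqueeze(0), augmented], dim=0)


if __name__ == "__main__":
    # Toy 5-way episode: 6 views per object, 64-dim backbone features (assumed).
    B, V, D = 5, 6, 64
    pool = ViewAttentionPooling(D)
    support_view_feats = torch.randn(B, V, D)       # per-view features from a backbone
    support_repr = pool(support_view_feats)         # (5, 64), one representation per class
    proto_set = augment_prototypes(support_repr)    # (5, 5, 64) original + perturbed prototypes
    print(support_repr.shape, proto_set.shape)
```

In this sketch the perturbed prototypes simply enlarge the prototype set seen during meta-training; how DANet actually combines or supervises the augmented prototypes is not specified here and would follow the paper.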
