Abstract

To alleviate the high cost of data annotation in deep learning-based object detection, we leverage the canonical view model for active sample selection to improve the effectiveness of learning. Inspired by the view-approximation model, we hypothesize that visual features learned from canonical views yield better representations of objects and thus boost the effectiveness of object learning. We validate this hypothesis empirically in the context of robot learning for novel object detection. Based on this, we propose a novel on-line viewpoint exploration (OLIVE) method that (1) defines goodness-of-view by combining the informativeness of visual features with the consistency of model-based object detection, and (2) systematically explores and selects viewpoints to boost learning efficiency. Furthermore, we train a standard Faster R-CNN model with data augmentation on the samples generated by the OLIVE pipeline. We test our method on the T-LESS dataset and show that it outperforms competitive benchmark methods, especially when training samples are scarce.
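
To make the goodness-of-view criterion concrete, the sketch below shows one plausible way to combine the two ingredients the abstract names. The specific choices here are assumptions, not the paper's formulation: informativeness is approximated by the entropy of the detector's class probabilities, consistency by the agreement of detection scores across nearby views, and the weight `alpha` and all function names are hypothetical.

```python
import numpy as np

def informativeness(class_probs: np.ndarray) -> float:
    """Shannon entropy of the detector's class probabilities for a view.
    Higher entropy means the model is less certain, so the view is more
    informative to label. (Assumed proxy; not the paper's exact measure.)"""
    p = np.clip(class_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def consistency(neighbor_scores: np.ndarray) -> float:
    """Agreement of detection confidence for the same object across
    neighboring views; low variance maps to high consistency.
    (Assumed proxy; not the paper's exact measure.)"""
    return float(1.0 / (1.0 + np.var(neighbor_scores)))

def goodness_of_view(class_probs: np.ndarray,
                     neighbor_scores: np.ndarray,
                     alpha: float = 0.5) -> float:
    """Convex combination of the two criteria; alpha (hypothetical
    parameter) trades informativeness against consistency."""
    return (alpha * informativeness(class_probs)
            + (1.0 - alpha) * consistency(neighbor_scores))

def select_next_view(candidates):
    """Greedy viewpoint selection: candidates is a list of
    (view_id, class_probs, neighbor_scores) tuples; the view with the
    highest goodness-of-view score is explored next."""
    return max(candidates,
               key=lambda c: goodness_of_view(c[1], c[2]))[0]
```

In this reading, the selected views (and their model-generated detections) would feed the Faster R-CNN training loop as additional labeled samples; the actual exploration strategy in OLIVE may be more elaborate than this greedy selection.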
