Abstract
This paper presents a deep-learning-based, fully automatic object annotation technique for warehouse use cases. One of the main challenges addressed is the large amount of manual labour involved in generating datasets for training a deep network. The proposed annotation model is developed by fine-tuning deep object detection frameworks with ImageNet pre-trained models: Faster RCNN with a VGG-16 backbone and RFCN with ResNet-101. A small set of manually annotated single-object images is used to automatically generate a significantly larger dataset within a very short time (in real time). The model can also precisely localize the region of any new object that appears against a familiar background. Incorporating techniques such as color augmentation and affine transformation makes the network invariant to rotation, scale, and brightness; augmentation also enables the model to perform well even when the background differs. A clutter generation technique is introduced into the framework, making the system capable of annotating objects even in densely populated real-world environments. A further contribution of this work is the detection of the objects used in the Amazon Robotics Challenge (ARC) 2017, where our team was among the four finalists in both the picking and stowing tasks. The automatically generated large dataset is then used to train multi-class detectors with the Faster RCNN and RFCN networks to validate the performance of the proposed annotation model. The efficacy of the proposed model is demonstrated through various experimental results. The dataset is shared online for the convenience of the reader.
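To make the augmentation and clutter-generation steps concrete, the sketch below is a minimal Python/OpenCV illustration: a random affine transform and brightness jitter are applied to a manually annotated single-object image, the bounding box is propagated through the same transform, and the augmented crops are composited onto a background to form a cluttered training scene. All function names, parameter ranges, and the OpenCV-based pipeline itself are assumptions made here for illustration; they do not reflect the authors' exact implementation.

```python
import cv2
import numpy as np


def augment(image, box):
    """Random affine transform plus brightness jitter for a single-object
    image and its bounding box. Parameter ranges are illustrative guesses,
    not the paper's reported settings."""
    h, w = image.shape[:2]

    # Random rotation and scale about the image centre (affine transform).
    angle = np.random.uniform(-30, 30)
    scale = np.random.uniform(0.7, 1.3)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    image = cv2.warpAffine(image, M, (w, h))

    # Push the four box corners through the same matrix and re-fit an
    # axis-aligned box, so the annotation follows the object.
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1], [x2, y1, 1],
                        [x2, y2, 1], [x1, y2, 1]], dtype=np.float32).T
    warped = M @ corners
    box = (warped[0].min(), warped[1].min(),
           warped[0].max(), warped[1].max())

    # Brightness jitter in HSV space (color augmentation).
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] = np.clip(hsv[..., 2] * np.random.uniform(0.6, 1.4), 0, 255)
    image = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    return image, box


def compose_clutter(background, crops):
    """Paste object crops at random positions on a background image,
    returning the composite scene and the generated box annotations.
    Assumes every crop fits inside the background."""
    canvas, boxes = background.copy(), []
    bh, bw = canvas.shape[:2]
    for crop in crops:
        ch, cw = crop.shape[:2]
        x = np.random.randint(0, bw - cw)
        y = np.random.randint(0, bh - ch)
        canvas[y:y + ch, x:x + cw] = crop
        boxes.append((x, y, x + cw, y + ch))
    return canvas, boxes
```

Repeating these two steps over the small manually annotated seed set is one plausible way such a pipeline could produce a large, automatically labelled dataset of cluttered scenes in a short time.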