Abstract
The primary objective of this research is to converge on an accurate working algorithm for object recognition in a cluttered scene and thereby enable the Baxter robot to pick up the correct object in a cluttered environment. Feature-matching algorithms usually fail to identify texture-less objects, so deep learning methods have been employed for better performance. Although a basic shallow Convolutional Neural Network (CNN) can easily detect the presence of an object within a frame, it struggles to localize the object within that frame. This work primarily focuses on accurate localization for robot grasping. In the literature, YOLO (You Only Look Once) has been reported to give very robust results on existing object recognition datasets. However, because its bounding boxes can be inaccurate and enclose a large redundant area, an algorithm was needed that segments the object accurately and makes the picking task easier. This was achieved through semantic segmentation using deep CNNs. Although time-consuming, a ResNet-based network proved highly effective, as its post-processed output helps to identify items in a significantly difficult task environment. This work was carried out in light of the recently held Amazon Robotics Challenge 2017, in which the robot successfully classified and distinguished a list of everyday items in a cluttered scenario. This work provides a performance analysis comparing YOLO- and ResNet-based object recognition methods, justifying through the performance metrics IoU (Intersection over Union) and ViG (Visual Grasping Score) that semantic segmentation (via ResNet) provides more suitable results for the robot vision module.
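For reference, the IoU metric cited above is the area of overlap between a predicted region and the ground-truth region divided by the area of their union. The snippet below is a minimal illustrative sketch, not the paper's evaluation code; it computes IoU for binary segmentation masks with NumPy, and the mask_iou helper and the toy masks are assumptions introduced purely for illustration.

```python
# Minimal sketch of the IoU metric for binary segmentation masks.
# NOTE: illustrative only; not the authors' evaluation code.
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:  # both masks empty: define IoU as 1
        return 1.0
    intersection = np.logical_and(pred, truth).sum()
    return intersection / union

# Toy example: a predicted mask offset one pixel from the ground truth.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True
print(f"IoU = {mask_iou(pred, truth):.3f}")  # 9 / 23 ≈ 0.391
```

The same formula applies to bounding boxes (YOLO) by rasterizing or intersecting the boxes directly; for segmentation, per-pixel masks are compared, which is why a tight mask scores higher than a loose box around the same object.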