Abstract

This paper presents experimental results of partial object detection using YOLO (You Only Look Once) and partial depth estimation using a CNN (Convolutional Neural Network), applied to robot arm control. In recent years, image recognition and automation based on it have attracted attention in various fields as alternatives to skilled manual work. For robot arms used in factories to perform high-value-added and flexible work, they must be controlled through object detection and recognition by deep learning. In this study, the authors propose a new approach for estimating the depth of partial images detected by YOLO and use it to control a robot arm. In the experiments, both ends of a pen detected by YOLO are used as the input to a CNN. The detected parts are saved as images of about 60 × 60 pixels, and their depths are estimated by feeding the cropped images to the CNN. A desktop-sized robot arm with 4 DOFs can successfully pick up the pen by referring to the estimated depths. The effectiveness of the proposed method is demonstrated through the experiments.
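The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the box coordinates, the `crop_patch`/`pen_end_patches` helpers, and the stand-in depth model are all hypothetical; in the actual system the box would come from a trained YOLO detector and the depth from the trained CNN.

```python
import numpy as np

CROP = 60  # patch size reported in the paper (about 60 x 60 pixels)

def crop_patch(image, cx, cy, size=CROP):
    """Crop a size x size patch centred on (cx, cy), clipped to the image."""
    h, w = image.shape[:2]
    half = size // 2
    x0 = max(0, min(w - size, cx - half))
    y0 = max(0, min(h - size, cy - half))
    return image[y0:y0 + size, x0:x0 + size]

def pen_end_patches(image, box):
    """Given a YOLO-style box (x0, y0, x1, y1) around a pen, return the
    two ~60 x 60 patches at the pen's ends (the inputs to the depth CNN)."""
    x0, y0, x1, y1 = box
    return crop_patch(image, x0, y0), crop_patch(image, x1, y1)

def estimate_depth(patch, model=lambda p: float(p.mean())):
    """Hypothetical stand-in for the trained depth CNN: any model mapping
    a 60 x 60 patch to a scalar depth would slot in here."""
    return model(patch)

# Example with a synthetic grayscale image and an assumed detection box.
image = np.zeros((480, 640))
box = (100, 200, 300, 220)
p_left, p_right = pen_end_patches(image, box)
depths = estimate_depth(p_left), estimate_depth(p_right)
```

The two depth values would then be passed to the 4-DOF arm's inverse kinematics to position the gripper over the pen.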
