Abstract

The current RoboCup Humanoid League rules specify a white ball, white goalposts, and white field lines. In such an environment, the color-based heuristic algorithms traditionally used by RoboCup teams for computer vision do not achieve good detection performance. Convolutional neural networks (CNNs) are a straightforward solution to this challenge. Among existing CNN-based techniques for object detection, the YOLO (You Only Look Once) algorithm is designed for real-time execution. In this paper, we created several neural network architectures based on YOLO in order to identify the one with the best trade-off between detection performance and execution time on a small humanoid robot. We assessed each network's performance mainly by the proportion of correct classifications and the mean average precision (mAP). A convolutional neural network with an Inception-like module showed promising results, running in real time on the robot's computer with Intel's OpenVINO inference framework. With more than 88% of classifications correct and a mAP of 0.81, the developed neural network is expected to work effectively on the robot.
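The mAP metric mentioned above can be computed per class from the ranked detections. The sketch below is an illustrative, dependency-free version of that computation (not the authors' evaluation code); it assumes detections have already been sorted by descending confidence and matched to ground truth (e.g. at IoU >= 0.5), so each detection is marked as a true or false positive.

```python
def average_precision(is_tp, num_gt):
    """AP for one class. `is_tp[i]` is True if the i-th detection
    (sorted by descending confidence) matches an unmatched
    ground-truth box; `num_gt` is the number of ground-truth boxes."""
    tp = fp = 0
    precisions, recalls = [], []
    for hit in is_tp:
        tp += int(hit)
        fp += int(not hit)
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # All-point interpolation: area under the precision envelope.
    ap, prev_recall = 0.0, 0.0
    for i, r in enumerate(recalls):
        # Envelope: best precision achievable at recall >= r.
        ap += (r - prev_recall) * max(precisions[i:])
        prev_recall = r
    return ap


def mean_average_precision(per_class):
    """mAP: mean of per-class APs.
    `per_class` is a list of (is_tp list, num_gt) pairs."""
    return sum(average_precision(t, n) for t, n in per_class) / len(per_class)
```

For example, a class with detections [hit, miss, hit] against two ground-truth objects yields an AP of 5/6; averaging such per-class APs gives the mAP figure reported for the network.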
