Abstract

With the continuous development of computer technology, mobile robots have become a popular topic in artificial intelligence and an important research area for a growing number of scholars. The core capability of a mobile robot is to perceive its surroundings in real time, localize itself, and navigate autonomously using that information; this capability underpins autonomous movement and is of strategic research significance. In particular, the target recognition ability of a soccer robot's vision system is the basis for path planning, motion control, and cooperative task completion, and the main recognition tasks fall to the omnidirectional vision system. How to improve the target recognition accuracy and the illumination adaptability of the robot's omnidirectional vision system is therefore the key issue of this paper. We completed the construction and program debugging of an omnidirectional mobile robot platform and tested its omnidirectional movement, its localization and map-building in corridor and indoor environments, its global navigation in an indoor environment, and its local obstacle avoidance. Making fuller use of the robot's local visual information so that, through image recognition technology, the robot's "eyes" can extract more accurate environmental information on their own has long been a shared goal of scholars at home and abroad. The experiments show that comparing the experimental group's shooting and dribbling test scores before and after training yields a standard error level of 0.004, below the 0.05 threshold, which supports the effectiveness of robot-assisted soccer training.
On the one hand, we tested the positioning and navigation functions of the omnidirectional mobile robot; on the other hand, we verified the feasibility of the positioning-and-navigation and multisensor fusion algorithms.

Highlights

  • In order to further improve robots' load capacity, movement flexibility, and adaptability to confined spaces, various types of omnidirectional mobile robots have emerged

  • The most important component is the omnidirectional vision system, which collects images through hardware such as an omnidirectional camera and an image capture card to build a model of the real environment, enabling the robot to recognize the ball, the goal, and other robots and providing information to the decision-making system

  • Image features extracted by this representation learning method are often highly stable, so convolutional neural networks have significant advantages in processing two-dimensional image data


Introduction

In order to further improve robots' load capacity, movement flexibility, and adaptability to confined spaces, various types of omnidirectional mobile robots have emerged. Studying the relationship between color and lighting, uncovering their inherently stable relationship, and developing algorithms and technologies that remain robust under lighting changes are among the most pressing research topics today. With the rapid development of deep learning, researchers are no longer satisfied with basic theoretical analysis; the requirements of practical applications keep rising, and large-scale deep learning frameworks have been proposed [9]. Such a framework contains basic algorithm modules and provides a solid foundation for rapidly building required models and for training, tuning, testing, and extending existing models. Image features extracted by this representation learning method are often highly stable, so convolutional neural networks have significant advantages in processing two-dimensional image data. If only k column vectors are selected from Y to represent Z, this is equivalent to the following.
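The equation referred to here is not reproduced in this excerpt. As a minimal sketch of the column-selection idea, assuming a standard least-squares formulation (the matrix names Y and Z follow the text, while taking the first k columns is only a placeholder for whatever selection rule is actually used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Z is a 20x10 matrix that lies in a 3-dimensional column space,
# so a few well-chosen columns should represent it almost exactly.
basis = rng.standard_normal((20, 3))
Z = basis @ rng.standard_normal((3, 10))
Y = Z.copy()  # here the candidate columns come from the data itself

# Select k columns of Y (placeholder: the first k; real methods pick them
# by leverage scores, clustering, or greedy error reduction).
k = 3
Yk = Y[:, :k]

# Least-squares coefficients: represent every column of Z in terms of Yk.
coeffs, *_ = np.linalg.lstsq(Yk, Z, rcond=None)
Z_approx = Yk @ coeffs

# Relative reconstruction error; small when the k columns span the data.
err = np.linalg.norm(Z - Z_approx) / np.linalg.norm(Z)
```

Because Z here has rank 3 and the selected columns span its column space, the relative error is essentially zero; with fewer columns than the true rank, the error measures how much information the selection loses.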

[Figure: shot and dribble test data analysis — control-group (C) and experimental-group (E) shot test scores]
