Abstract

This paper provides an in-depth study and analysis of robot vision features for predictive control, together with a global calibration of their feature completeness. The acquisition and use of the complete macrofeature set are studied in the context of a robot task: the set of visual features that can fully characterize the overall purpose and constraints of a visual servoing task is defined as the complete macrofeature set. Owing to the complexity of the task, some features in this set are obtained directly from the image, while others must be inferred from it. To guarantee task completion, a robust calibration-free visual servoing strategy based on a disturbance observer is proposed to complete the servoing task with high performance. To address the singular values, local minima, and insufficient robustness of traditional uncalibrated visual servoing algorithms, a new uncalibrated method is proposed that constructs a dual closed-loop visual servoing structure around a disturbance observer. The Q-filter-based disturbance observer ensures the closed-loop stability of the system while estimating and eliminating the equivalent disturbance composed of hand-eye mapping model uncertainty, input disturbance of the controlled robot, and detection noise. The resulting inner loop presents a nominal model to the outside, and an outer-loop controller is then designed against this nominal model to obtain the best trade-off between dynamic performance and robustness when executing the visual servoing task.
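The inner-loop idea described above can be illustrated with a minimal discrete-time sketch: invert a nominal plant model to recover the total input (command plus disturbance), low-pass the residual with a Q-filter, and subtract the estimate from the command. All names, the first-order plant, and the numeric values below are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a Q-filter disturbance observer (DOB) inner loop.
# Assumed first-order nominal plant y[k+1] = a*y[k] + b*u[k]; the real
# plant sees an additional constant input disturbance d_true.

def simulate_dob(steps=200, d_true=0.5):
    a, b = 0.9, 0.1          # nominal plant parameters (assumed)
    alpha = 0.8              # Q-filter pole: first-order low-pass
    y = 0.0                  # plant output
    d_est = 0.0              # running disturbance estimate
    u_cmd = 1.0              # constant outer-loop command
    for _ in range(steps):
        u = u_cmd - d_est                 # inner loop cancels the estimate
        y_next = a * y + b * (u + d_true) # real plant with input disturbance
        # invert the nominal model to recover (u + d) ...
        u_plus_d = (y_next - a * y) / b
        d_raw = u_plus_d - u
        # ... then low-pass the raw estimate with the Q-filter
        d_est = alpha * d_est + (1 - alpha) * d_raw
        y = y_next
    return d_est

print(round(simulate_dob(), 3))  # estimate converges to the true disturbance, 0.5
```

In this toy setting the model inversion is exact, so the Q-filter's only job is to smooth noise; in practice the filter bandwidth trades disturbance rejection against sensitivity to detection noise, which is the trade-off the outer-loop controller is then designed around.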

Highlights

  • Robotics has a far-reaching impact on a country’s comprehensive national power and its ability to develop sustainably. Therefore, robotics is seen as a frontier technology and taken as a national strategy by many developed countries, which gives strong support to research in this field

  • A large amount of research has focused on visual simultaneous localization and mapping (VSLAM), which is the study of environmental modelling. The process requires the system to build the environment map while using the known local environment to achieve self-localization and incrementally model the environment [2]

  • In the visual navigation process, the captured image sequences are used to achieve online incremental environment perception and autonomous localization using VSLAM. The concept of feature completeness for robot visual servoing tasks is proposed, and the complete feature set is defined to consist of a complete microscopic feature set and a complete macroscopic feature set


Summary

Jingjing Lou

Real-time modelling of the unstructured environment in a human-robot collaboration scenario can provide global process information for subsequent motion planning, which is a key technology for visual perception. This work mainly studies the influence of different feature sets on the achievability and completion performance of an uncalibrated visual servoing task. The analysis shows that in unstructured working scenes the field of view of a single depth camera is limited by obstacle occlusion, so multiple depth cameras must be introduced to form a highly real-time global vision system that captures more complete environmental information and improves the success rate and safety of motion planning.

[Figure: global environment features from the YUV transform (U/V channel coding) and hue histogram (cell length and count), from Results and Analysis; global localization requires higher performance on these global features]
