Abstract

Technological innovations in RGB-D sensor hardware have made it possible to acquire 3D point clouds in real time. Consequently, a variety of applications related to the 3D world have arisen and are receiving increasing attention from researchers. Nevertheless, one of the main remaining problems is the demand for computationally intensive processing, which requires optimized approaches to 3D vision modeling, especially when tasks must be performed in real time. A previously proposed multi-resolution 3D model known as foveated point clouds is a possible solution to this problem; however, that model is limited to a single foveated structure with context-dependent mobility. In this work, we propose a new solution for data reduction and feature detection using multifoveation in the point cloud. Applying several foveated structures, however, considerably increases processing, since regions where distinct structures intersect are processed multiple times. To solve this problem, the current proposal introduces an approach that avoids processing redundant regions, which further reduces processing time. This approach can be used to identify objects in 3D point clouds, one of the key tasks for real-time applications such as robotic vision, with efficient synchronization that allows validation of the model and verification of its applicability in the context of computer vision. Experimental results demonstrate a performance gain of at least 27.21% in processing time, while retaining the main features of the original point cloud and maintaining the recognition quality rate in comparison with state-of-the-art 3D object recognition methods.

Highlights

  • With recent advances in hardware, artificial vision systems can capture and process real-world depth data in addition to color information

  • To better understand the approach to multifoveation in point clouds proposed in this paper, which can be used for robotic vision, we present the basics of multi-resolution and of previous foveation approaches, starting with single foveation, which has enabled real-time tasks mainly in robotic vision

  • As expected, the sum of the point counts of the Foveated Covering the Rightmost object (FCRM) and Foveated Covering the Leftmost object (FCLM) strategies equals the count of the raw strategy: 49,679 plus 47,776 equals 97,455, the values from the respective experiments cited


Introduction

With technological advances achieved in hardware, artificial vision systems can capture and process real-world depth data in addition to color information. The use of these data, inherent to 3D space, is an interesting alternative for executing robotic vision tasks in real time. This is the basis of the system developed in this work, which deals with the real-time capture and interpretation of data in three-dimensional format, since such data offer more detail in the abstracted information. Robotic vision applications use several types of sensors to capture stimulus data.
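The foveated point-cloud idea underlying this work can be thought of as distance-dependent downsampling: points near a fovea center are kept at full resolution, while farther rings are progressively thinned. The following is a minimal sketch of that idea (the function name `foveate`, the ring radii, and the voxel-grid thinning scheme are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def foveate(points, fovea, radii, voxel_sizes):
    """Downsample a point cloud with resolution decreasing away from a fovea.

    points      : (N, 3) array of XYZ coordinates
    fovea       : (3,) center of the highest-resolution region
    radii       : increasing distances bounding each resolution ring
    voxel_sizes : voxel edge length per ring (0 keeps every point)
    """
    dist = np.linalg.norm(points - fovea, axis=1)
    kept = []
    lower = 0.0
    for upper, voxel in zip(radii, voxel_sizes):
        ring = points[(dist >= lower) & (dist < upper)]
        lower = upper
        if ring.shape[0] == 0:
            continue
        if voxel <= 0:
            kept.append(ring)  # innermost ring: full resolution
        else:
            # keep one representative point per occupied voxel of this ring
            keys = np.floor(ring / voxel).astype(np.int64)
            _, idx = np.unique(keys, axis=0, return_index=True)
            kept.append(ring[np.sort(idx)])
    return np.vstack(kept) if kept else points[:0]
```

A multifoveated version would apply several such structures with different fovea centers; as the abstract notes, the challenge is then to avoid reprocessing points that fall inside the intersection of two structures' rings.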
