Abstract

This paper presents an omnidirectional RGB-D (RGB plus depth) sensor prototype built from an actuated LIDAR (Light Detection and Ranging) and an RGB camera. Beyond the sensor itself, a novel mapping strategy is developed that accounts for the sensor's scanning characteristics. The sensor can gather RGB and 3D data from any direction by tilting a laser scanner 90 degrees and rotating it about its central axis. The mapping strategy is based on two environment maps: a local map for instantaneous perception and a global map for perception memory. The 2D local map represents the surface in front of the robot and may contain RGB data, enabling environment reconstruction and human detection; it behaves like a sliding window that moves with the robot and stores surface data.
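The sliding-window strategy is only outlined above; as a rough illustration of the idea, the sketch below shows, in Python, one way a fixed-size local grid could recentre on the robot while committing evicted observations to a sparse global map. It is a minimal sketch under our own assumptions: the class SlidingWindowMap, its methods, and the choice of stored value are hypothetical, not the authors' implementation.

    import numpy as np

    class SlidingWindowMap:
        """Illustrative local grid map that moves with the robot.

        A fixed-size 2D grid centred near the robot holds the most recent
        surface observations; when the window slides, its contents are
        archived into a sparse global map (the 'perception memory').
        Names and layout are assumptions, not the authors' API.
        """

        def __init__(self, size=100, resolution=0.05):
            self.size = size                           # cells per side
            self.res = resolution                      # metres per cell
            self.origin = np.zeros(2)                  # world coords of cell (0, 0)
            self.grid = np.full((size, size), np.nan)  # e.g. surface height per cell
            self.global_map = {}                       # sparse memory: (i, j) -> value

        def insert(self, point_xy, value):
            """Store an observation if it falls inside the window."""
            ij = np.floor((np.asarray(point_xy) - self.origin) / self.res).astype(int)
            if np.all((ij >= 0) & (ij < self.size)):
                self.grid[ij[0], ij[1]] = value

        def slide(self, robot_xy):
            """Recentre the window on the robot, archiving current contents."""
            new_origin = np.asarray(robot_xy) - 0.5 * self.size * self.res
            shift = np.round((new_origin - self.origin) / self.res).astype(int)
            if not np.any(shift):
                return
            # Archive everything before moving (simple but sufficient for a sketch).
            base = np.round(self.origin / self.res).astype(int)
            for (i, j), v in np.ndenumerate(self.grid):
                if not np.isnan(v):
                    self.global_map[(base[0] + i, base[1] + j)] = v
            # Roll the grid and blank the cells that wrapped in from the far edge.
            self.grid = np.roll(self.grid, (-shift[0], -shift[1]), axis=(0, 1))
            for axis, s in enumerate(shift):
                idx = [slice(None), slice(None)]
                if s > 0:
                    idx[axis] = slice(-s, None)
                elif s < 0:
                    idx[axis] = slice(None, -s)
                else:
                    continue
                self.grid[tuple(idx)] = np.nan
            self.origin += shift * self.res

    # Example: record a point, then slide the window as the robot moves.
    local = SlidingWindowMap()
    local.insert((1.0, 1.0), value=0.3)
    local.slide(robot_xy=(4.0, 0.0))
    print(len(local.global_map))  # 1 -- the archived cell is now remembered globally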

Highlights

  • In robotics, visual sensors are responsible for providing robots with environmental data about where they are located

  • This paper introduces an omnidirectional 3D sensor that takes advantage of RGB and spatial data to perform a novel mapping technique paired with object identification using machine learning, gathering point clouds from a rotating LIDAR and using a camera attached to a hyperbolic mirror

  • This paper aims to present a novel mapping approach based on a sliding window to represent the environment through the inputs of an omnidirectional RGB-D sensor

Summary

Introduction

Visual sensors are responsible for providing robots with environmental data about where they are located. An alternative solution is to mount sensors on a rotating platform, independent of the robot's movement, but this still does not allow the robot to look in more than two directions at the same time. These drawbacks justify the use of an omnidirectional source of perception in dynamic environments, independent of other sources. Environment perception is more reliable when several approaches collect spatial data (spatial sensors plus RGB cameras), which increases the robot's versatility and compensates for the downsides of each source. This paper introduces an omnidirectional 3D sensor that takes advantage of RGB and spatial data to perform a novel mapping technique paired with object identification using machine learning, gathering point clouds from a rotating LIDAR and using a camera attached to a hyperbolic mirror. These data can be used to represent recognizable objects on the map and to identify dynamic entities (people) so that obstacles are assigned more accurately during mapping.
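The projection equations are not reproduced here, but the geometry described (a 2D scanner tilted 90 degrees and spun about its central axis) suggests a simple spherical-to-Cartesian assembly of the omnidirectional cloud: a reading with range r at beam elevation phi, taken at platform angle theta, maps to (r cos phi cos theta, r cos phi sin theta, r sin phi). The Python sketch below assumes exactly this model; scan_to_points and the synthetic readings are illustrative only.

    import numpy as np

    def scan_to_points(ranges, beam_angles, theta):
        """Project one scan from the tilted 2D LIDAR into 3D.

        ranges      -- range readings along the scan, in metres
        beam_angles -- elevation of each beam within the vertical scan plane, in radians
        theta       -- rotation of the platform about the central (z) axis, in radians
        """
        r = np.asarray(ranges, dtype=float)
        phi = np.asarray(beam_angles, dtype=float)
        x = r * np.cos(phi) * np.cos(theta)  # horizontal reach, rotated by theta
        y = r * np.cos(phi) * np.sin(theta)
        z = r * np.sin(phi)                  # height from the beam elevation
        return np.column_stack((x, y, z))

    # Accumulating scans over one full revolution yields an omnidirectional
    # cloud; synthetic readings (a 2 m sphere) stand in for real sensor data.
    beam_angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
    cloud = np.vstack([
        scan_to_points(np.full(181, 2.0), beam_angles, theta)
        for theta in np.linspace(0.0, 2 * np.pi, 90, endpoint=False)
    ])
    print(cloud.shape)  # (16290, 3)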

Problem Statement
The Proposed Omnidirectional RGB-D Sensor
Planar Perception
Spatial Perception
The Proposed Strategy of Sliding Window Mapping
Experimental Evaluation
Reconstruction Experiment
Navigation Experiment
Human Detection Experiment
Movement Prediction Experiment
Accuracy and Precision
Conclusions
