Abstract

Conventional approaches to robot navigation in unstructured environments rely on information acquired from a LiDAR mounted on the robot base to detect and avoid obstacles. This approach fails to detect obstacles that are too small or that are invisible because they lie outside the LiDAR's field of view. A possible strategy is to integrate information from other sensors. In this paper, we explore the possibility of using depth information from a movable RGB-D camera mounted on the head of the robot and investigate, in particular, active control strategies to effectively scan the environment. Existing works combine RGB-D and 2D LiDAR data passively, fusing the current point cloud from the RGB-D camera with the occupancy grid computed from the 2D LiDAR data while the robot follows a given path. In contrast, we propose an optimization strategy that actively changes the position of the robot's head, where the camera is mounted, at each point of the given navigation path; in this way, we fully exploit the RGB-D camera to detect, and hence avoid, obstacles undetected by the 2D LiDAR, such as overhanging obstacles or obstacles in blind spots. We validate our approach both in simulation, to gather statistically significant data, and in real environments, to show the applicability of our method to real robots. The platform used is the humanoid robot R1.
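To make the per-waypoint head-control idea concrete, the fragment below sketches one way such an optimization could look: at each waypoint, candidate head yaws within the joint limits are scored by how many upcoming, not-yet-observed path points the camera's field of view would cover. This is a minimal illustration, not the paper's implementation; the FOV model, depth range, joint limits, and scoring function are all assumed.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): at each waypoint,
# pick the head yaw that lets the RGB-D camera cover the largest number of
# upcoming, not-yet-observed points along the planned path.
# Assumptions: planar world, camera FOV modeled as a 2D wedge, fixed range.

CAM_HFOV = np.deg2rad(58.0)                 # assumed horizontal FOV of the camera
CAM_RANGE = 3.0                             # assumed usable depth range [m]
HEAD_YAW_LIMITS = (-np.pi / 4, np.pi / 4)   # assumed head joint limits

def visible(points, robot_xy, robot_heading, head_yaw):
    """Boolean mask of points inside the camera wedge for a given head yaw."""
    rel = points - robot_xy
    dist = np.linalg.norm(rel, axis=1)
    bearing = np.arctan2(rel[:, 1], rel[:, 0]) - (robot_heading + head_yaw)
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi
    return (dist < CAM_RANGE) & (np.abs(bearing) < CAM_HFOV / 2)

def best_head_yaw(path_points, observed_mask, robot_xy, robot_heading,
                  n_candidates=21):
    """Choose the head yaw that maximizes newly observed path points."""
    candidates = np.linspace(*HEAD_YAW_LIMITS, n_candidates)
    gains = [np.sum(visible(path_points, robot_xy, robot_heading, yaw)
                    & ~observed_mask) for yaw in candidates]
    return candidates[int(np.argmax(gains))]
```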

Highlights

  • If the RGB-D camera is actuated, as is the case for the humanoid robot used in this paper, it is possible to actively control it to efficiently scan the environment and detect obstacles before the robot collides with them

  • We propose a method for efficient active exploration of the environment that overcomes the limitations of sensors with a small field of view (FOV)

  • Our method is independent of the navigation stack used; it can be combined with different path planning algorithms, including approaches [3,4,5] that perform local navigation with partial information (a minimal sketch of such a planner-agnostic interface follows this list)

  • The RGB-D camera is mounted on the R1 head whose joint limits are yaw:
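As referenced in the highlights, independence from the navigation stack can be pictured as a thin control loop layered on top of whatever planner is in use. The sketch below is hypothetical; the planner, head, camera, and costmap objects are assumed interfaces, not the actual API of the method.

```python
# Hypothetical glue code (planner, head, camera, and costmap interfaces are
# assumptions, not the paper's API): the active head control sits beside any
# planner that exposes its current path and accepts obstacle-map updates,
# which is why the method is planner-agnostic.

def navigation_step(planner, head, camera, costmap, robot_pose, pick_head_yaw):
    """One control-loop iteration; pick_head_yaw is any gaze-selection policy."""
    path = planner.current_path()                    # path from any planner
    yaw = pick_head_yaw(path, costmap, robot_pose)   # e.g., the sketch above
    head.set_yaw(yaw)                                # actively aim the camera
    cloud = camera.read_point_cloud()                # depth points in world frame
    costmap.fuse_points(cloud)                       # add RGB-D obstacles
    planner.update_costmap(costmap)                  # planner replans if needed
```

Because the loop only needs a current path and a way to push obstacle updates, it can sit on top of global planners as well as local planners that operate with partial information.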


Summary

Introduction

2D LiDAR sensors provide accurate measurements of the robot's distance to walls and other obstacles. However, they usually have a limited field of view (FOV) due to occlusions by other robot parts (e.g., the wheels); in addition, they can detect only obstacles at a single, fixed height. RGB-D cameras provide rough measurements of 3D surfaces, but with lower accuracy than the LiDAR and with a limited field of view, which has been shown to increase latency in obstacle detection [2]. If the RGB-D camera is actuated, as is the case for the humanoid robot used in this paper, it is possible to actively control it to efficiently scan the environment and detect obstacles before the robot collides with them.
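To illustrate why the depth camera complements the planar LiDAR, the fragment below shows a passive-fusion step in the spirit of the existing works mentioned in the abstract: depth points falling inside the robot's height envelope are projected onto the planner's 2D occupancy grid, so overhanging obstacles and small objects below the scan plane still appear as occupied cells. The grid resolution, origin handling, and height band are assumed values, not the paper's parameters.

```python
import numpy as np

# Illustrative passive fusion (not the paper's code): project RGB-D points
# that fall within the robot's height envelope onto the 2D occupancy grid,
# so obstacles above or below the LiDAR scan plane still become obstacles.
# Grid resolution, origin, and height band are assumed values.

RESOLUTION = 0.05                   # grid cell size [m]
ROBOT_HEIGHT_BAND = (0.02, 1.40)    # obstacles between 2 cm and robot height [m]

def fuse_points_into_grid(grid, grid_origin_xy, points_xyz_world):
    """Mark grid cells occupied for depth points inside the height band."""
    z = points_xyz_world[:, 2]
    mask = (z > ROBOT_HEIGHT_BAND[0]) & (z < ROBOT_HEIGHT_BAND[1])
    cells = ((points_xyz_world[mask, :2] - grid_origin_xy) / RESOLUTION)
    cells = cells.astype(int)
    inside = ((cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[1]) &
              (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[0]))
    grid[cells[inside, 1], cells[inside, 0]] = 1    # 1 = occupied
    return grid
```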


