Abstract

In this paper, we propose a model-based scattering removal method for stereo vision, aimed at robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem, and it becomes even more difficult for scenes of dense fog or dense steam illuminated by active light sources. Images taken in such environments suffer from attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed, and the descattered images are then used as the input to stereo vision. The performance of the method is evaluated by the quality of the depth map produced by stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task.
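
The paper derives an imaging model specific to a single active source near the cameras; as a hedged illustration in generic notation (not the paper's), image formation in a scattering medium is commonly written as

    I(x) = J(x) e^{-\beta d(x)} + B(x)

where I(x) is the observed intensity at pixel x, J(x) the clear-scene radiance, \beta the extinction coefficient of the medium, d(x) the distance to the scene, and B(x) the backscatter. With an active source close to the cameras, B(x) is strongly non-uniform (brightest near the illumination axis), so it must be estimated per pixel rather than as a constant airlight; subtracting the estimated B(x) yields the descattered input for stereo matching.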

Highlights

  • High-spatial-resolution ranging is crucial in robot manipulation, and a depth map is necessary to accomplish the task

  • We propose a scattering removal technique, called descattering, followed by a standard stereo method, focusing on how to remove the scattering efficiently for stereo vision (a minimal illustrative sketch follows this list)

  • We present our descattering method, which can enhance images corrupted by strong non-uniform backscattering from an active illumination source
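
As a minimal, hedged sketch of such a descatter-then-match pipeline (not the authors' implementation): here the smooth, non-uniform backscatter field is approximated by a heavy Gaussian blur of each raw frame and subtracted, and the contrast-stretched residual is fed to OpenCV's semi-global block matching. All parameter values are illustrative assumptions.

    import cv2
    import numpy as np

    def descatter(img_gray, sigma=51):
        # Approximate the smooth non-uniform backscatter field with a heavy
        # Gaussian blur (an illustrative stand-in for the paper's model-based
        # estimate), then subtract it from the raw frame.
        img = img_gray.astype(np.float32)
        backscatter = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
        detail = img - backscatter
        # Stretch the residual detail back to the 8-bit range for the matcher.
        detail -= detail.min()
        detail *= 255.0 / max(float(detail.max()), 1e-6)
        return detail.astype(np.uint8)

    def disparity_from_stereo(left_bgr, right_bgr):
        left = descatter(cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY))
        right = descatter(cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY))
        # Standard semi-global block matching on the descattered pair.
        matcher = cv2.StereoSGBM_create(
            minDisparity=0, numDisparities=128, blockSize=5,
            P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
        # compute() returns fixed-point disparities scaled by 16.
        return matcher.compute(left, right).astype(np.float32) / 16.0

In a real system the backscatter estimate would come from the calibrated physical model rather than a blur, but the interface stays the same: descatter both views, then run a standard stereo matcher on the enhanced pair.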

Introduction

High-spatial-resolution ranging is crucial in robot manipulation, and a depth map is necessary to accomplish the task. Many systems must work in low-visibility, strongly scattering environments, for example underwater robots or firefighting robots. Our application is bipedal and quadrupedal robots working in nuclear power plants, where they must cope with poor visibility due to dense steam. The plant is filled with very dense water-based atmospheric particles, and the robot needs to operate the plant under these conditions. Commonly used sensors such as LiDAR (LMS511, SICK, Waldkirch, Germany; UTM-30LX-EW, Hokuyo, Osaka, Japan) and time-of-flight (ToF) cameras (Kinect v2, Microsoft, Redmond, WA, USA) are unable to work in such low-visibility conditions. Our conclusion is consistent with the study by Starr and Lattimer [1].
