Abstract

Binocular vision systems provide a wealth of visual information for unmanned ground vehicles (UGVs). However, dust, soil, and water droplets in the environment can easily contaminate the camera lens, introducing errors into the robot's visual perception system. Moreover, current GAN-based image inpainting methods often generate incorrect textures that differ considerably from the real scene. To address this problem, we propose an inpainting method that uses stereo image information as prior knowledge. We first apply a multistage feature alignment module to align the left and right feature maps, fusing the two features with a dynamic fusion module after each alignment stage. We then use a feature refinement module to refine the texture information of the feature maps and generate finer details. Experiments show that our method surpasses previous work, achieving higher PSNR and SSIM scores while running at over 25 fps on a single NVIDIA GeForce RTX 2080 Ti, thus providing an effective self-inpainting capability that helps the binocular vision systems of UGVs maintain driving safety.
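The dynamic fusion step described above can be sketched as a gated, per-pixel convex combination of the aligned left and right feature maps. This is a minimal numpy stand-in, not the paper's implementation: the gate here is predicted by a single assumed 1x1 linear map `w` rather than a learned network, purely to illustrate the fusion mechanism.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def dynamic_fusion(feat_left, feat_right, w):
    """Fuse aligned left/right feature maps (C, H, W) with a per-pixel gate.

    Toy sketch of a dynamic fusion module: a gate g in (0, 1) is predicted
    from the concatenated features via `w` (an assumed (C, 2C) weight acting
    as a 1x1 convolution), and the output is a convex combination of the
    two inputs at every channel and pixel.
    """
    stacked = np.concatenate([feat_left, feat_right], axis=0)       # (2C, H, W)
    gate = sigmoid(np.tensordot(w, stacked, axes=([1], [0])))       # (C, H, W)
    return gate * feat_left + (1.0 - gate) * feat_right
```

Because the gate is bounded in (0, 1), each fused value stays between the corresponding left and right feature values, so the module can smoothly favor the uncontaminated view at occluded or contaminated pixels.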
