Abstract

This paper presents Cyclops2, a novel stereo matching algorithm. Given two rectified grayscale images, the algorithm produces a disparity image. Matching is based on minimising a weight function computed from the absolute difference of pixel intensities. We present three simple and easily parallelizable weight functions, each offering a different trade-off between processing time and reconstructed depth image accuracy. A detailed description of the algorithm's CUDA implementation, specifically optimised for the embedded NVIDIA Jetson platform, is provided. NVIDIA Jetson TK1 and TX1 boards were used to evaluate the algorithms. We evaluated seven algorithm variations with different parameter values; each yields a different speed/accuracy trade-off, demonstrating that our algorithm can be used in various situations. The presented algorithm achieves up to 70 FPS on lower-resolution images (750 × 500 pixels) and up to 23 FPS on high-resolution images (1500 × 1000 pixels). The use of an optional post-processing stage (median filter) has also been investigated. We conclude that, despite its limitations, our algorithm is relevant to the field of real-time obstacle avoidance.
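The cost-minimisation idea described above can be illustrated with a minimal winner-takes-all matcher: for each pixel and each candidate disparity, compute the absolute intensity difference and keep the disparity with the lowest cost. This is only a NumPy sketch of the general technique, not the paper's CUDA implementation or its specific weight functions; the function name `disparity_ad` and the fixed `max_disp` search range are assumptions for illustration.

```python
import numpy as np

def disparity_ad(left, right, max_disp=16):
    """Winner-takes-all disparity map from a per-pixel
    absolute-difference cost volume (illustrative sketch)."""
    h, w = left.shape
    # Cost volume: one absolute-difference cost per pixel and disparity.
    # Invalid (out-of-image) candidates keep an infinite cost.
    cost = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        # Compare each left pixel with the right pixel d columns to its left.
        cost[:, d:, d] = np.abs(left[:, d:].astype(np.int32)
                                - right[:, :w - d].astype(np.int32))
    # Pick, per pixel, the disparity with the minimal cost.
    return np.argmin(cost, axis=2).astype(np.uint8)
```

A real-time implementation would evaluate the per-disparity costs in parallel on the GPU rather than looping in Python, which is the kind of parallelisation the paper's CUDA kernels exploit.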

Highlights

  • The rapid advance of computational power has had a significant influence on the development of autonomous vehicles

  • There are several types of sensors commonly used for 3D environment sensing and obstacle avoidance, such as light detection and ranging (LIDAR), structured-light, Time-of-Flight (TOF) and stereo cameras

  • We mainly focus on cost computation functions optimised for embedded graphics processing unit (GPU) platforms NVIDIA Jetson TK1 and TX1

Introduction

The rapid advance of computational power has had a significant influence on the development of autonomous vehicles. Over the last decade, unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) have reached higher degrees of autonomy, deploying more complex algorithms and sensors. Obstacle avoidance remains a challenge in real-life applications. Several types of sensors are commonly used for 3D environment sensing and obstacle avoidance, such as light detection and ranging (LIDAR), structured-light, Time-of-Flight (TOF) and stereo cameras. One of the most popular structured-light sensors used in research is Microsoft's Kinect. Structured-light sensors project known patterns onto the environment in order to estimate scene depth information, and they have successfully been used to solve many computer vision problems in indoor environments [1].

