Abstract

Stereo cameras allow mobile robots to perceive depth in their surroundings by capturing two images from slightly different perspectives. This capability is essential for tasks such as obstacle avoidance, navigation, and spatial mapping. By utilizing convolutional neural networks (CNNs), existing work on stereo-camera-based depth estimation has achieved superior results. However, the critical requirement for depth estimation on mobile robots is an optimal tradeoff between computational cost and accuracy. To achieve such a tradeoff, attention-aware feature aggregation (AAFS) has been proposed for real-time stereo matching on edge devices. AAFS comprises multistage feature extraction, an attention module, and a 3D CNN architecture. However, its 3D CNN architecture learns contextual information ineffectively. In this paper, a deep encoder–decoder architecture is applied to the AAFS 3D CNN to improve depth estimation accuracy. Through evaluation, it is shown that the proposed 3D CNN architecture provides significantly better accuracy while keeping the inference time comparable to that of AAFS.
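The pipeline the abstract describes — building a matching-cost volume over candidate disparities, aggregating it with an encoder–decoder 3D network so that coarse levels capture wider context, then regressing a disparity map — can be illustrated with a minimal NumPy sketch. This is a hedged illustration only, not the paper's implementation: strided 3D convolutions are stood in for by average pooling, transposed convolutions by nearest-neighbor upsampling, and single-channel absolute-difference features replace learned CNN features. All function names here are hypothetical.

```python
import numpy as np

def build_cost_volume(left_feat, right_feat, max_disp):
    """Absolute-difference cost volume of shape (max_disp, H, W).

    left_feat, right_feat: single-channel (H, W) feature maps
    (a stand-in for the multistage CNN features used in AAFS).
    """
    H, W = left_feat.shape
    cost = np.zeros((max_disp, H, W), dtype=np.float32)
    for d in range(max_disp):
        shifted = np.zeros_like(right_feat)
        shifted[:, d:] = right_feat[:, :W - d] if d > 0 else right_feat
        cost[d] = np.abs(left_feat - shifted)
    return cost

def downsample(vol):
    """Stride-2 average pooling over (D, H, W): stand-in for a strided 3D conv."""
    D, H, W = vol.shape
    v = vol[:D // 2 * 2, :H // 2 * 2, :W // 2 * 2]
    return v.reshape(D // 2, 2, H // 2, 2, W // 2, 2).mean(axis=(1, 3, 5))

def upsample(vol):
    """Nearest-neighbor 2x upsampling: stand-in for a transposed 3D conv."""
    return vol.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

def encoder_decoder(cost):
    """Hourglass-style aggregation: downsample to gather context, then
    upsample with skip connections back to full resolution."""
    enc1 = downsample(cost)   # 1/2 resolution
    enc2 = downsample(enc1)   # 1/4 resolution: widest effective receptive field
    dec1 = upsample(enc2)[:enc1.shape[0], :enc1.shape[1], :enc1.shape[2]] + enc1
    dec0 = upsample(dec1)[:cost.shape[0], :cost.shape[1], :cost.shape[2]] + cost
    return dec0

def soft_argmin(cost):
    """Disparity regression: softmax over negated cost, expectation over disparities."""
    neg = -cost
    neg = neg - neg.max(axis=0, keepdims=True)  # numerical stability
    p = np.exp(neg)
    p = p / p.sum(axis=0, keepdims=True)
    disps = np.arange(cost.shape[0], dtype=np.float32)
    return np.tensordot(disps, p, axes=1)       # (H, W) expected disparity
```

The encoder–decoder step is where the claimed accuracy gain would come from: pooling along the disparity axis as well as the spatial axes lets each cost-volume cell see context far beyond its local neighborhood before the final regression, at modest extra cost.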
