Abstract

Fast and accurate 3D scene perception is a crucial prerequisite for the autonomous navigation and harvesting of combine harvesters. However, crop fields pose severe challenges for vision-based perception systems: repetitive scenes, illumination changes, and the real-time constraints of embedded computing platforms. In this paper, we propose a feature-based, two-stage approach to real-time dense 3D mapping for combine harvesters. In the first stage, our approach constructs a sparse 3D map from reliable feature matches, which provides prior knowledge about the environment. In the second stage, our method formulates per-pixel disparity calculation as probabilistic inference. The key to our approach is that it computes dense 3D maps by combining Bayesian estimation with efficient, discriminative point cues from images, making it tolerant to the visual measurement uncertainties caused by repetitive textures and uneven lighting in crop fields. We validate the proposed method on real crop field data, and the results demonstrate that our dense 3D maps provide detailed spatial metric information while balancing accuracy and efficiency. This makes our approach well suited to online perception on the resource-limited computing systems of combine harvesters.
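
The abstract only sketches the formulation, but the second stage (per-pixel disparity as Bayesian inference guided by a sparse prior) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual method: the Gaussian prior around a disparity interpolated from stage-one feature matches, the sum-of-absolute-differences matching cost, the exponential likelihood, and all names and parameters (sad_cost, map_disparity, beta, etc.) are ours.

import numpy as np

def sad_cost(left_patch, right_patch):
    # Sum of absolute differences between two patches: a simple
    # stand-in for whatever matching cost the paper actually uses.
    return float(np.abs(left_patch.astype(np.float32)
                        - right_patch.astype(np.float32)).sum())

def map_disparity(left, right, x, y, prior_mean, prior_sigma,
                  d_max=64, win=2, beta=0.1):
    # MAP disparity for pixel (x, y) of a rectified grayscale stereo pair:
    #   log-posterior = log-likelihood + log-prior, where
    #   log-likelihood = -beta * matching cost (exponential model, assumed)
    #   log-prior      = Gaussian around prior_mean (prior_sigma > 0),
    #                    i.e. the disparity suggested by the sparse map.
    h, w = left.shape
    best_d, best_logp = 0, -np.inf
    for d in range(d_max):
        # Skip disparities whose patches would fall outside the images.
        if x - d - win < 0 or x + win >= w or y - win < 0 or y + win >= h:
            continue
        lp = left[y - win:y + win + 1, x - win:x + win + 1]
        rp = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
        log_like = -beta * sad_cost(lp, rp)
        log_prior = -0.5 * ((d - prior_mean) / prior_sigma) ** 2
        if log_like + log_prior > best_logp:
            best_d, best_logp = d, log_like + log_prior
    return best_d

# Hypothetical usage on a synthetic pair with a known 5-pixel disparity:
left = np.random.randint(0, 255, (120, 160)).astype(np.uint8)
right = np.roll(left, -5, axis=1)
print(map_disparity(left, right, x=80, y=60, prior_mean=5.0, prior_sigma=3.0))

In a full pipeline, prior_mean and prior_sigma would be set per pixel by interpolating the sparse matches from the first stage (for example over a triangulation of the matched keypoints), and the loop would run over all pixels; the prior is what lets the estimator stay stable where repetitive texture makes the likelihood alone ambiguous.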
