Abstract
This paper proposes a novel object detection method based on a visual saliency model to reliably detect objects such as rocks from single monocular planetary images. The algorithm takes advantage of the relatively homogeneous and distinct albedos present in planetary environments such as Mars or the Moon to extract a Digital Terrain Model of the scene using photoclinometry. The Digital Terrain Model is then incorporated into a bottom-up visual saliency algorithm to augment objects that protrude from the ground. This Structure Augmented Monocular Saliency (SAMS) algorithm improves the accuracy and reliability of object detection in planetary environments while requiring no training and offering greater robustness and lower computational complexity than 3D saliency models. Comprehensive analysis of the proposed method is performed on three challenging benchmark datasets. The results show that SAMS outperforms commonly used visual saliency models on the same datasets.
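The abstract outlines the pipeline only at a high level: a Digital Terrain Model recovered by photoclinometry is fused with a bottom-up saliency map so that structure protruding from the ground is emphasized. The sketch below illustrates that fusion idea in Python; it is not the paper's SAMS implementation. The function names, the spectral residual saliency stand-in, the percentile-based ground estimate, and the `weight` mixing parameter are all assumptions made for illustration.

```python
import numpy as np

def spectral_residual_saliency(image):
    """Bottom-up saliency via the spectral residual method (Hou & Zhang, 2007).

    Used here only as a stand-in for the paper's bottom-up saliency stage,
    which is not specified in the abstract.
    """
    fft = np.fft.fft2(image)
    log_amplitude = np.log(np.abs(fft) + 1e-8)
    phase = np.angle(fft)
    # Spectral residual: log amplitude minus its local (3x3 box-filtered) average.
    kernel = np.ones((3, 3)) / 9.0
    padded = np.pad(log_amplitude, 1, mode="edge")
    h, w = log_amplitude.shape
    smoothed = sum(
        padded[i:i + h, j:j + w] * kernel[i, j]
        for i in range(3) for j in range(3)
    )
    residual = log_amplitude - smoothed
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return saliency / saliency.max()

def protrusion_map(dtm):
    """Height of each pixel above a crude ground estimate taken from a
    photoclinometry-derived DTM (assumed to be computed already)."""
    ground = np.percentile(dtm, 25)            # illustrative flat-ground estimate
    protrusion = np.clip(dtm - ground, 0, None)
    return protrusion / (protrusion.max() + 1e-8)

def structure_augmented_saliency(image, dtm, weight=0.5):
    """Fuse bottom-up saliency with the DTM-derived protrusion cue.

    `weight` is an illustrative mixing parameter, not one from the paper.
    """
    s = spectral_residual_saliency(image)
    p = protrusion_map(dtm)
    return (1.0 - weight) * s + weight * p

if __name__ == "__main__":
    # Synthetic example: a flat scene with one bright, protruding "rock".
    img = np.full((128, 128), 0.4)
    dtm = np.zeros((128, 128))
    img[60:70, 60:70] = 0.7
    dtm[60:70, 60:70] = 1.0
    sal = structure_augmented_saliency(img, dtm)
    print("peak saliency at:", np.unravel_index(sal.argmax(), sal.shape))
```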