Abstract

Bringing autonomy on board edge devices is essential to accelerating space exploration. Among the many tasks that such vehicles can execute autonomously, detecting and segmenting rocks in on-board images of extraterrestrial landscapes is a critical step in the processing chain, as it allows a vehicle to navigate safely and avoid collisions. We tackle this issue and introduce a flexible, end-to-end pipeline for building and validating resource-frugal machine learning techniques for this task. Deploying such models on board edge devices poses numerous practical challenges, from ensuring their memory and computational efficiency to understanding their robustness against the varying quality of acquired images. These aspects are often overlooked when building deep learning-powered on-board systems; we show that they can (and ultimately should) be part of the deployment chain. Our extensive experimental study, performed over several benchmark data sets, sheds light on the functional and non-functional capabilities of the investigated models, both in full precision and compressed by quantisation, the latter delivering statistically indistinguishable segmentation accuracy while being approximately 11× smaller. Additionally, we show that synthesised images can be used to quantify the robustness of deep learning models against on-board acquisition conditions that directly affect the quality of captured images; such simulations of real-world acquisition settings can reveal a negative impact on models trained over clean, high-quality image data. To ensure full reproducibility of this study, we have made our implementation publicly available at https://github.com/danielmarek22/onboard-rock-segmentation.
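As a rough illustration of the compression step, the sketch below applies generic post-training INT8 quantisation using PyTorch's FX workflow. The toy architecture, input shape, and calibration data are placeholders rather than the paper's actual setup, and plain INT8 quantisation alone accounts for roughly a 4× size reduction; the approximately 11× reported above presumably reflects the paper's full compression scheme.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Toy fully-convolutional segmentation head (hypothetical; the abstract
# does not specify the investigated architectures).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),  # two classes: rock vs. background
).eval()

qconfig_mapping = get_default_qconfig_mapping("fbgemm")  # x86 INT8 backend
example_inputs = (torch.randn(1, 3, 128, 128),)

# Insert observers, calibrate on representative images, convert to INT8.
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
with torch.no_grad():
    for _ in range(8):  # stand-in for a real calibration set
        prepared(torch.rand(1, 3, 128, 128))
quantized = convert_fx(prepared)
```

Likewise, the robustness study can be pictured as sweeping synthetic degradations over clean inputs and re-measuring segmentation accuracy at each level. The sketch below assumes a simple optical-blur-plus-sensor-noise model; the actual degradation models used in the study are not specified in the abstract, and `simulate_acquisition` is an illustrative name.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_acquisition(image, blur_sigma=1.5, noise_sigma=0.02, rng=None):
    """Degrade a float image in [0, 1] of shape (H, W, C) with Gaussian
    optical blur followed by additive Gaussian sensor noise (assumed model)."""
    rng = np.random.default_rng(0) if rng is None else rng
    blurred = gaussian_filter(image, sigma=(blur_sigma, blur_sigma, 0))
    noisy = blurred + rng.normal(0.0, noise_sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Robustness profile: evaluate a model trained on clean images against
# progressively harsher simulated acquisition conditions.
clean = np.random.rand(128, 128, 3).astype(np.float32)
for sigma in (0.5, 1.0, 2.0, 4.0):
    degraded = simulate_acquisition(clean, blur_sigma=sigma)
    # ...run the segmentation model on `degraded` and record its accuracy
```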
