Abstract

Benchmarking deep learning algorithms before deploying them in hardware-constrained execution environments, such as imaging satellites, is pivotal in real-life applications. Although a thorough and consistent benchmarking procedure can allow us to estimate the expected operational abilities of the underlying deep model, this topic remains under-researched. This paper tackles this issue and presents an end-to-end benchmarking approach for quantifying the abilities of deep learning algorithms in virtually any kind of on-board space application. The experimental validation, performed over several state-of-the-art deep models and benchmark datasets, showed that different deep learning techniques can be effectively benchmarked using the standardized approach, which delivers quantifiable performance measures and is highly configurable. We believe that such benchmarking is crucial in delivering ready-to-use on-board artificial intelligence in emerging space applications and should become a standard tool in the deployment chain.

Highlights

  • Computing hardware platforms in space missions are used mainly for data transfer, sending raw data to ground stations, while data processing is performed in centralized servers on Earth

  • AI-dedicated computing hardware platforms have become increasingly important in a variety of space applications, as they provide features that enable efficient on-board computation

  • This paper presents an approach for benchmarking state-of-the-art AI models across a range of space applications focused on computer vision tasks, such as image classification and object detection


Summary

Introduction

Computing hardware platforms in space missions are used mainly for data transfer, sending raw data (e.g., imagery and video streams) to ground stations, while data processing is performed in centralized servers on Earth. AI-dedicated computing hardware platforms have become increasingly important in a variety of space applications, as they provide features that enable efficient on-board computation. Deep learning inference can be executed very close to the data sources (e.g., imaging sensors) at the edge device. This may improve spacecraft autonomy, mitigate the costly requirement of transmitting large volumes of data to a remote processing site, accelerate decision-making, and increase reliability in communication-denied environments (i.e., in space). Each piece of computing hardware comes with inherent features and constraints that enhance or limit performance depending on the deep model and targeted applications. Deep learning models must therefore be adapted to make "optimal" use of the underlying hardware infrastructure.
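Before a model is adapted to a given hardware platform, its on-device behavior is typically quantified with a timing harness that reports latency and throughput under repeated inference. As a minimal sketch of that idea (the `benchmark` helper, the warm-up/run counts, and the stand-in model below are illustrative assumptions, not the paper's actual procedure):

```python
import time
import statistics

def benchmark(model, inputs, warmup=5, runs=50):
    """Measure per-inference latency of a callable `model`.

    `model` and `inputs` stand in for any deployed network and its
    pre-processed input batch. Returns mean and 95th-percentile
    latency in milliseconds, plus throughput in inferences/second.
    """
    # Warm-up iterations let caches, JIT compilers, and accelerator
    # pipelines settle before timing begins.
    for _ in range(warmup):
        model(inputs)

    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        model(inputs)
        latencies.append((time.perf_counter() - start) * 1e3)  # ms

    latencies.sort()
    mean_ms = statistics.fmean(latencies)
    return {
        "mean_ms": mean_ms,
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))],
        "throughput_ips": 1e3 / mean_ms,
    }

# Hypothetical "model": a trivial function simulating inference cost.
stats = benchmark(lambda x: sum(v * v for v in x), list(range(10_000)))
```

On real flight hardware the same harness would additionally record power draw and memory footprint, since those constraints often dominate deployment decisions.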

