Abstract

The application of machine learning to a task often necessitates the production of synthetic training data. Some tasks involve rare, but important, scenarios that may not yet have been observed; for others, real data are difficult to collect or annotate in large volumes. These difficulties are particularly acute in computer vision applications to scientific imagery, in which human annotation is complicated by noise, ambiguity, and interpretation. One such application is the detection of resident space objects (RSOs) in electro-optical images for space domain awareness (SDA). In many cases, the mislabeling of RSOs by an imperfect annotator (human or machine) can be detrimental to machine learning model performance, especially when the signal-to-noise ratio (SNR) is near or below human detection levels. In this work, we introduce SatSim, a modular synthetic data generation engine designed to procedurally generate representative, annotated synthetic electro-optical imagery of remote space scenes. SatSim enables rapid generation of synthetic data through Graphics Processing Unit (GPU) acceleration with TensorFlow. This paper discusses the use of SatSim to enhance machine learning approaches and reports the performance of models trained with real data, with synthetic data, and with real data augmented with synthetic RSOs. In addition, we explore using SatSim to evaluate current state-of-the-art RSO detection algorithms on new sensor types (such as all-sky and event-based sensors) and in rare but critically important scenarios (such as satellite breakups and collisions) for which limited real data are available.

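To illustrate the general idea of GPU-accelerated procedural rendering of remote space scenes, the sketch below builds a small synthetic frame with TensorFlow: point sources (stars and a faint, low-SNR RSO) are rendered as Gaussian point spread functions over a background level, with approximate shot and read noise added. This is a minimal, hypothetical example for intuition only; the function names, PSF model, and noise model are assumptions made here and are not SatSim's API.

```python
import tensorflow as tf


def render_point_source(height, width, row, col, flux, psf_sigma=2.0):
    """Render one point source as a 2-D Gaussian PSF with total counts `flux`."""
    ys = tf.range(height, dtype=tf.float32)
    xs = tf.range(width, dtype=tf.float32)
    yy, xx = tf.meshgrid(ys, xs, indexing="ij")
    r2 = (yy - row) ** 2 + (xx - col) ** 2
    psf = tf.exp(-r2 / (2.0 * psf_sigma ** 2))
    psf = psf / tf.reduce_sum(psf)   # normalize PSF to unit volume
    return flux * psf                # scale to the source's total counts


def render_frame(sources, height=512, width=512,
                 background=100.0, read_noise=5.0, seed=42):
    """Sum point sources over a flat background and add shot + read noise."""
    image = tf.fill((height, width), background)
    for row, col, flux in sources:
        image = image + render_point_source(height, width, row, col, flux)
    rng = tf.random.Generator.from_seed(seed)
    shot = rng.normal(image.shape) * tf.sqrt(image)  # Gaussian approx. of shot noise
    read = rng.normal(image.shape) * read_noise      # sensor read noise
    return image + shot + read


# Example scene: two bright stars and one faint RSO near the detection threshold.
frame = render_frame([
    (100.0, 120.0, 5000.0),   # star
    (300.0, 400.0, 8000.0),   # star
    (256.0, 256.0, 300.0),    # faint RSO (low SNR)
])
print(frame.shape)  # (512, 512)
```

Because every step is expressed as TensorFlow tensor operations, the same code runs unchanged on a GPU, which is the property that makes large-scale procedural generation of annotated frames practical.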