Abstract

Space situational awareness (SSA) is becoming increasingly challenging with the proliferation of resident space objects (RSOs), ranging from CubeSats to mega-constellations. Sensors within the United States Space Surveillance Network are tasked to repeatedly detect, characterize, and track these RSOs to retain custody and estimate their attitudes. Most of these sensors are ground-based, have a narrow field of view, and must slew at a finite rate from one RSO to another during observations. This results in a complex combinatorial optimization problem that poses a major obstacle to SSA sensor tasking. In this work, we successfully applied deep reinforcement learning (DRL) to overcome the curse of dimensionality and optimally task a ground-based sensor. We trained several DRL agents using proximal policy optimization and population-based training in a simulated SSA environment. The DRL agents outperformed myopic policies on both objective metrics: the RSOs' state uncertainties and the number of unique RSOs observed over a 90-min observation window. We also examined the agents' robustness to changes in RSO orbital regime, observation window length, observer location, and sensor properties. This robustness allows the DRL agents to be applied at arbitrary locations and to arbitrary scenarios.
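
To make the setup concrete, the following is a minimal, hypothetical sketch of the kind of sensor-tasking environment and PPO training loop the abstract describes. The environment class, its state dimensions, and the reward shaping are illustrative assumptions, not the authors' implementation; Gymnasium and Stable-Baselines3 are one possible toolchain (the paper does not specify one), and population-based training is omitted for brevity.

```python
# Illustrative sketch only -- not the paper's code. The dynamics, reward
# weights, and dimensions below are hypothetical placeholders.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SensorTaskingEnv(gym.Env):
    """Toy SSA sensor-tasking environment: pick which RSO to observe next."""

    def __init__(self, n_rso: int = 10, horizon: int = 90):
        super().__init__()
        self.n_rso = n_rso
        self.horizon = horizon  # e.g., 90 one-minute decision steps
        # Action: index of the RSO to slew to and observe next.
        self.action_space = spaces.Discrete(n_rso)
        # Observation: per-RSO uncertainty proxy plus a visited flag
        # (hypothetical stand-ins for covariance and tasking history).
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(2 * n_rso,), dtype=np.float32
        )

    def _obs(self):
        return np.concatenate([self.sigma, self.visited]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.sigma = self.np_random.uniform(1.0, 5.0, self.n_rso)
        self.visited = np.zeros(self.n_rso)
        return self._obs(), {}

    def step(self, action):
        # Reward mirrors the two objectives in the abstract: reduce state
        # uncertainty and observe previously unseen RSOs.
        reward = 0.5 * self.sigma[action] + (1.0 - self.visited[action])
        self.sigma[action] *= 0.5   # observation shrinks this RSO's uncertainty
        self.sigma += 0.05          # unobserved RSOs' uncertainty grows
        self.visited[action] = 1.0
        self.t += 1
        terminated = self.t >= self.horizon
        return self._obs(), float(reward), terminated, False, {}


if __name__ == "__main__":
    # Train a PPO agent on the toy environment via Stable-Baselines3.
    from stable_baselines3 import PPO

    model = PPO("MlpPolicy", SensorTaskingEnv(), verbose=1)
    model.learn(total_timesteps=50_000)
```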
