Abstract

Space situational awareness systems primarily focus on detecting and tracking space objects, providing crucial positional data. However, understanding the complex space domain requires characterising satellites, often involving estimation of bus and solar panel sizes. While inverse synthetic aperture radar allows satellite visualisation, developing deep learning models for substructure segmentation in inverse synthetic aperture radar images is challenging due to the high costs and hardware requirements of acquiring real data. The authors present a framework addressing the scarcity of inverse synthetic aperture radar data through synthetic training data. The authors' approach utilises a few-shot domain adaptation technique, leveraging thousands of rapidly simulated low-fidelity inverse synthetic aperture radar images and a small set of inverse synthetic aperture radar images from the target domain. The authors validate their framework by simulating a real-case scenario, fine-tuning a deep learning-based segmentation model using four inverse synthetic aperture radar images generated through the backprojection algorithm from simulated raw radar data (simulated at the analogue-to-digital converter level) as the target domain. The authors' results demonstrate the effectiveness of the proposed framework, significantly improving inverse synthetic aperture radar image segmentation across diverse domains. This enhancement enables accurate characterisation of satellite bus and solar panel sizes, as well as their orientation, even when the images are sourced from different domains.
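As a rough illustration only (not the authors' implementation), the sketch below shows what few-shot fine-tuning of a segmentation model on a handful of target-domain inverse synthetic aperture radar images could look like. The model choice (DeepLabv3), the number of classes, the image size, the random placeholder tensors standing in for the four backprojection-generated images, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch of few-shot domain adaptation for ISAR substructure
# segmentation. Assumption: the model's weights would already have been
# trained on thousands of low-fidelity simulated ISAR images (source domain);
# here only a small target-domain set is used for fine-tuning.
import torch
from torch import nn, optim
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 3  # assumed classes: background, bus, solar panel

# Segmentation network (illustrative choice, not the authors' architecture).
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)

# Placeholder tensors standing in for the four target-domain ISAR images
# (backprojection outputs) and their substructure masks.
images = torch.randn(4, 3, 256, 256)
masks = torch.randint(0, NUM_CLASSES, (4, 256, 256))

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)  # small LR for adaptation

model.train()
for epoch in range(20):  # a few epochs over the few-shot target set
    optimizer.zero_grad()
    logits = model(images)["out"]   # (N, C, H, W) per-pixel class scores
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
```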
