Abstract

Recent advances in satellite technology have made it possible to obtain video imagery from satellites. Nanosatellites are increasingly used for Earth-observing missions because they require a low budget and a short development time. Thus, there is real interest in using nanosatellites with a video payload camera, especially for disaster monitoring and fleet tracking. However, because video data requires substantial storage and incurs high communication costs, using nanosatellites for such missions is challenging. This paper proposes an effective onboard deep-learning-based video scene analysis method to reduce the high communication cost. The proposed method trains a CNN+LSTM-based model on the ground to identify mission-related scenery, such as flood-disaster scenery, in satellite videos, and then loads the model onboard the nanosatellite to perform scene analysis before the video data is sent to the ground. We experimented with the proposed method using an Nvidia Jetson TX2 as the onboard computer (OBC) and achieved 89% test accuracy. Additionally, our approach reduces the nanosatellite video data download cost by 30%, which allows the important mission video payload data to be sent to the ground over S-band communication. Therefore, we believe that our new approach can be effectively applied to obtaining large video data from a nanosatellite.
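The paper's CNN+LSTM architecture can be sketched as follows. This is a minimal illustrative PyTorch model, not the authors' implementation: the layer sizes, frame count, and class labels (e.g. flood vs. normal scenery) are assumptions. A small CNN extracts a feature vector from each video frame, an LSTM aggregates the per-frame features over time, and the final hidden state classifies the clip.

```python
import torch
import torch.nn as nn

class SceneClassifier(nn.Module):
    """Hypothetical CNN+LSTM video scene classifier (illustrative only).

    A per-frame CNN produces feature vectors, an LSTM consumes the
    frame sequence, and the last hidden state is classified into
    mission-related scene categories (e.g. flood vs. normal).
    """

    def __init__(self, num_classes=2, feat_dim=64, hidden_dim=128):
        super().__init__()
        # Small per-frame feature extractor (assumed architecture).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        # Run the CNN on all frames at once, then restore the time axis.
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        # Classify from the final LSTM hidden state.
        return self.head(h[-1])

model = SceneClassifier()
# Two dummy clips of 8 RGB frames at 64x64 resolution.
logits = model(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```

Onboard, a model like this would score each captured clip and only mission-relevant clips would be queued for downlink, which is how the communication-cost saving arises.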
