Abstract

The You Only Look Once (YOLO) series has been widely adopted across many domains. With the increasing prevalence of continuous satellite observation, the resulting video streams can be analyzed intelligently for applications such as traffic flow statistics and military operations. However, objects in satellite videos have a considerably lower signal-to-noise ratio, and their size is often only a fraction, from tens of percent down to one percent, of the size of objects captured by drones and other platforms. Consequently, the original YOLO algorithm performs inadequately when detecting tiny objects in satellite videos. We therefore propose an improved framework, named HB-YOLO. To strengthen feature extraction in the backbone, we replaced the standard convolutions with an improved HorNet, which enables higher-order spatial interactions. We replaced all Extended Efficient Layer Aggregation Networks (ELANs) with the BoTNet attention mechanism so that features are fully fused. In addition, anchors were re-adjusted, and image segmentation was integrated into the detection pipeline; the resulting detections are tracked with the BoT-SORT algorithm. Experimental results show that the original algorithm failed to learn on the satellite video dataset, whereas our approach improved both recall and precision: the F1-score and mean average precision increased to 0.58 and 0.53, respectively, and object-tracking performance was further enhanced by incorporating the image segmentation method.
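The image-segmentation step mentioned above can be illustrated with a minimal sketch: satellite frames are much larger than a detector's input size, so a common approach is to cut each frame into overlapping tiles, run detection per tile, and map boxes back using the tile offsets. The function below is a hypothetical helper; the tile size and overlap are assumptions for illustration, not values from the paper.

```python
import numpy as np

def tile_image(frame, tile=640, overlap=128):
    """Split a large frame into overlapping square tiles.

    Returns a list of ((x0, y0), patch) pairs, where (x0, y0) is the
    top-left offset of the patch in the original frame. Detections on
    a patch can be shifted by this offset to recover frame coordinates.
    tile=640 and overlap=128 are illustrative defaults, not the
    paper's settings.
    """
    h, w = frame.shape[:2]
    stride = tile - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            # Clamp so edge tiles stay inside the frame at full size.
            y0 = min(y, max(h - tile, 0))
            x0 = min(x, max(w - tile, 0))
            tiles.append(((x0, y0), frame[y0:y0 + tile, x0:x0 + tile]))
    return tiles
```

Overlap between neighboring tiles keeps tiny objects near tile borders from being split across patches; duplicate detections in the overlap regions are typically merged afterwards with non-maximum suppression.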
