With the growing need for real-time crowd monitoring in military surveillance, public safety, and event management, crowd counting from unmanned aerial vehicle (UAV) imagery has become an essential research topic. While conventional RGB-based methods have achieved notable success, their performance degrades severely in low-light environments due to poor visibility. Integrating thermal infrared (TIR) images can address this issue, but existing RGB-T crowd counting networks rely on multi-stream architectures that introduce computational redundancy and excessive parameters, rendering them impractical for UAVs with limited onboard resources. To overcome these challenges, this research presents a compact RGB-T framework built around two components: a Partial Information Interaction Convolution (PIIConv) module that selectively reduces redundant feature computation, and a Global Collaborative Fusion (GCFusion) module that strengthens multi-modal feature representation through spatial attention. Empirical results show that the proposed network attains competitive accuracy on the DroneRGBT dataset while significantly reducing floating-point operations (FLOPs) and improving inference speed across various computing platforms. The significance of this study lies in providing a computationally efficient RGB-T crowd counting framework that balances accuracy and resource efficiency, making it well suited for real-time UAV deployment.
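The abstract names the two modules but does not specify their internals. The PyTorch sketch below illustrates one plausible reading: PIIConv following the common partial-convolution pattern (convolving only a slice of the channels and passing the rest through, then mixing with a pointwise convolution), and GCFusion weighting the RGB and TIR feature maps with a learned spatial attention map. All class names, the partial ratio, and the kernel sizes are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn


class PIIConvSketch(nn.Module):
    """Hypothetical sketch of a partial-convolution block: a 3x3 conv is
    applied only to a fraction of the channels (reducing FLOPs), the
    remaining channels pass through untouched, and a 1x1 conv lets the
    two groups interact."""

    def __init__(self, channels: int, partial_ratio: float = 0.25):
        super().__init__()
        self.conv_ch = max(1, int(channels * partial_ratio))
        # Spatial conv over the "active" channel slice only
        self.partial_conv = nn.Conv2d(self.conv_ch, self.conv_ch, 3, padding=1)
        # Pointwise conv mixes processed and untouched channels
        self.interact = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        active, passive = torch.split(
            x, [self.conv_ch, x.shape[1] - self.conv_ch], dim=1
        )
        out = torch.cat([self.partial_conv(active), passive], dim=1)
        return self.interact(out)


class GCFusionSketch(nn.Module):
    """Hypothetical sketch of spatial-attention fusion: a single-channel
    attention map computed from both modalities gates a convex blend of
    the RGB and TIR features."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, tir_feat: torch.Tensor) -> torch.Tensor:
        a = self.attn(torch.cat([rgb_feat, tir_feat], dim=1))  # (B, 1, H, W)
        return a * rgb_feat + (1.0 - a) * tir_feat


if __name__ == "__main__":
    rgb = torch.randn(1, 64, 32, 32)
    tir = torch.randn(1, 64, 32, 32)
    fused = GCFusionSketch(64)(PIIConvSketch(64)(rgb), PIIConvSketch(64)(tir))
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

Under this reading, the FLOP savings come from restricting the expensive spatial convolution to a channel subset, while the fusion stage stays lightweight by producing only a one-channel attention map rather than a full per-channel weighting.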