Abstract

Recently, visible and infrared thermal (RGB-T) images have attracted wide attention for vehicle fusion detection in traffic monitoring because of their strong complementarity, and how to fully exploit RGB-T images for fusion detection has become an active research topic. However, infrared thermal datasets remain relatively scarce. Moreover, vehicle fusion detection must above all be accurate, fast, and flexible. To address these difficulties, we propose a concise and flexible vehicle fusion detection method for RGB-T images based on a sparse network and dynamic weight coefficient-based Dempster–Shafer (D–S) evidence theory. The method combines the detection results of the RGB-T images through a decision-level fusion strategy. In this work, we focus on vehicle detection in infrared thermal images and on the fusion strategy itself. For the former, we construct a network for vehicle detection in infrared thermal images with sparse parameters (weights) and high generalization ability. For the latter, we propose a fusion strategy based on dynamic weight coefficient-based D–S evidence theory to fuse the two detection results of the RGB-T images. In this strategy, we do not fuse the two detection results directly; instead, we first assess the detection accuracy of each modality. Finally, we evaluate the proposed method on the VIVID, VOT2019, and RGBT234 datasets. The results show that the proposed method achieves superior performance compared with several mainstream approaches.
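To make the decision-level fusion concrete, the sketch below shows weighted (discounted) D–S combination for a binary frame {vehicle, background}. It is a minimal illustration only: the conversion of detector confidences into basic probability assignments, the dynamic weight coefficients w_rgb and w_t, and the example values are assumptions, not the paper's exact formulation.

# Minimal sketch: weighted D-S fusion of RGB and thermal detection evidence.
# Assumes each detector's output is already a basic probability assignment (BPA)
# over {"vehicle", "background", "theta"}, where "theta" is the uncertainty mass.

def discount(bpa, w):
    """Shafer discounting: scale focal elements by weight w, move the rest to theta."""
    out = {k: w * v for k, v in bpa.items() if k != "theta"}
    out["theta"] = 1.0 - w + w * bpa.get("theta", 0.0)
    return out

def dempster_combine(m1, m2):
    """Dempster's rule of combination on the frame {vehicle, background}."""
    sets = {"vehicle": {"vehicle"}, "background": {"background"},
            "theta": {"vehicle", "background"}}
    fused = {"vehicle": 0.0, "background": 0.0, "theta": 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = sets[a] & sets[b]
            if not inter:                      # conflicting evidence
                conflict += ma * mb
            elif inter == sets["theta"]:       # both sources undecided
                fused["theta"] += ma * mb
            else:                              # singleton intersection
                fused[next(iter(inter))] += ma * mb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# Hypothetical example: RGB detector confident, thermal detector less certain.
m_rgb = {"vehicle": 0.7, "background": 0.1, "theta": 0.2}
m_t   = {"vehicle": 0.5, "background": 0.2, "theta": 0.3}
w_rgb, w_t = 0.9, 0.6   # placeholder dynamic weight coefficients
fused = dempster_combine(discount(m_rgb, w_rgb), discount(m_t, w_t))
print(fused)            # fused belief over vehicle / background / uncertainty

In this sketch, the dynamic weights act as reliability discounts applied before combination, so a less trustworthy modality contributes more of its mass to the uncertainty term rather than to a hard decision.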
