Abstract

Object detection and image fusion belong to different research fields, but both aim to obtain target information. To effectively capture target and scene information, image fusion and object detection often need to work together. However, most existing studies address them separately, without a unified framework; in other words, the two tasks of image fusion and object detection have not been completed simultaneously within a single network model. To address this problem, a multilevel (feature-level and pixel-level) fusion detection algorithm based on heterogeneous images is proposed, called the multi-level fusion detection network (MFDetection). It not only produces fused images but also achieves higher detection accuracy. To our knowledge, this is the first model that uses a single network to perform object detection and multilevel (pixel-level and feature-level) image fusion simultaneously. In this model, multi-scale feature maps of visible (VIS) and infrared (IR) images extracted by the feature extraction network are fused and then used for detection, which greatly improves detection accuracy, while the shared feature extraction network greatly reduces model complexity. In addition, MFDetection is compared and evaluated against multiple object detection methods. The experimental results show that the detection accuracy of MFDetection is significantly better than that of the comparative methods on multiple common datasets, and its mAP is more than 28 percent higher than that of state-of-the-art methods.
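The following is a minimal conceptual sketch, not the authors' implementation, of how a weight-shared backbone could feed both a feature-level fusion path for detection and a pixel-level fusion path that reconstructs a fused image. All module names, layer sizes, and the assumption of 3-channel IR input are illustrative; it assumes PyTorch.

```python
# Illustrative sketch (assumptions, not the MFDetection code): one shared
# backbone extracts multi-scale features from VIS and IR, per-scale features
# are fused at the feature level, and two heads reuse them -- a detection
# head and a pixel-level fusion head that outputs a fused image.
import torch
import torch.nn as nn


class SharedBackbone(nn.Module):
    """Weight-shared feature extractor applied to both VIS and IR images."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)   # 1/2-resolution feature map
        f2 = self.stage2(f1)  # 1/4-resolution feature map
        return f1, f2         # multi-scale features


class MultiLevelFusionDetector(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        self.backbone = SharedBackbone()                   # shared for VIS and IR
        self.fuse1 = nn.Conv2d(64, 32, 1)                  # feature-level fusion, scale 1
        self.fuse2 = nn.Conv2d(128, 64, 1)                 # feature-level fusion, scale 2
        self.det_head = nn.Conv2d(64, num_classes + 4, 1)  # toy detection head (class + box)
        self.img_head = nn.Sequential(                     # pixel-level fusion head
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, vis, ir):
        v1, v2 = self.backbone(vis)
        i1, i2 = self.backbone(ir)
        g1 = self.fuse1(torch.cat([v1, i1], dim=1))  # fused features, scale 1
        g2 = self.fuse2(torch.cat([v2, i2], dim=1))  # fused features, scale 2
        detections = self.det_head(g2)               # feature-level path -> detection
        fused_image = self.img_head(g1)              # pixel-level path -> fused image
        return detections, fused_image


if __name__ == "__main__":
    vis = torch.randn(1, 3, 256, 256)
    ir = torch.randn(1, 3, 256, 256)
    dets, fused = MultiLevelFusionDetector()(vis, ir)
    print(dets.shape, fused.shape)  # e.g. (1, 5, 64, 64) and (1, 3, 256, 256)
```

Sharing the backbone between the two modalities, as the abstract describes, means only one set of extractor weights is stored and trained, which is where the reduction in model complexity comes from; the fusion layers and heads shown here are placeholders for whatever fusion and detection modules the paper actually uses.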
