Abstract
Infrared image target detection has long been an active research topic, but little work addresses infrared target detection in the transportation domain. In this paper, we apply transfer learning to carry a deep-learning object detection framework from the visible domain to the infrared domain, and propose CMF Net, a target detection model based on multi-scale feature fusion. CMF Net combines two multi-scale feature extraction mechanisms with feature fusion, so that the final feature map output by the backbone network contains both low-level visual features, which aid target localization, and high-level semantic features, which aid target recognition, allowing it to adapt to targets of varying scales. Experiments confirm the advantages of CMF Net: its mAP on the test data of the FLIR infrared image dataset reaches about 71%, an increase of about 13% compared with Faster R-CNN, about 6% compared with YOLOv3, and about 17% compared with SSD.
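The abstract does not give implementation details of CMF Net's fusion scheme. As a rough illustration of the general idea of fusing low-level and high-level backbone features across scales, the sketch below shows an FPN-style top-down fusion in PyTorch. The backbone choice (ResNet-50), channel widths, and the single-channel-to-three-channel trick for infrared input are assumptions for illustration only, not CMF Net's actual design.

```python
# Minimal sketch of multi-scale feature fusion (FPN-style), NOT the actual
# CMF Net architecture: backbone, channel sizes, and fusion scheme are
# illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class MultiScaleFusion(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        # Reuse the stem and the four residual stages as feature extractors.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        in_channels = [256, 512, 1024, 2048]  # ResNet-50 stage output widths
        # 1x1 convs project each stage to a common channel width.
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, 1) for c in in_channels])
        # 3x3 convs smooth the fused maps.
        self.smooth = nn.ModuleList(
            [nn.Conv2d(out_channels, out_channels, 3, padding=1)
             for _ in in_channels])

    def forward(self, x):
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # collected from low level to high level
        # Top-down pathway: upsample higher-level (semantic) features and
        # add them to lower-level (spatially detailed) features.
        fused = [self.lateral[-1](feats[-1])]
        for i in range(len(feats) - 2, -1, -1):
            up = F.interpolate(fused[0], size=feats[i].shape[-2:],
                               mode="nearest")
            fused.insert(0, self.lateral[i](feats[i]) + up)
        return [s(f) for s, f in zip(self.smooth, fused)]


if __name__ == "__main__":
    model = MultiScaleFusion()
    # Single-channel infrared input replicated to 3 channels so an RGB-style
    # stem can be reused (a common transfer-learning trick; assumption here).
    ir = torch.randn(1, 1, 512, 640).repeat(1, 3, 1, 1)
    for p in model(ir):
        print(p.shape)  # fused maps at 4 scales, each with 256 channels
```

Each output map mixes fine spatial detail from shallow stages with semantic context from deep stages, which is the property the abstract attributes to CMF Net's backbone output.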