Abstract

The minimum resolvable temperature difference (MRTD) at which a four-rod target can be resolved is a critical parameter for assessing the overall performance of thermal imaging systems, and it is important for technological innovation in military and other fields. Recently, automatic, objective approaches based on deep learning have been proposed to replace the classical manual MRTD measurement approach, which is strongly affected by the experimenter's subjective psychological state and is limited in both accuracy and speed. However, the scale variability of four-rod targets and the low pixel resolution of infrared thermal cameras remain challenging problems for automatic MRTD measurement. We propose a multiscale deblurred feature extraction network (MDF-Net), a backbone based on the YOLOv5 neural network, to address these problems. We first introduce a global attention mechanism (GAM) attention module to strengthen the feature representation of the four-rod targets. Next, a RepVGG module is introduced to reduce image blur. Our experiments show that the proposed method achieves the desired effect and state-of-the-art detection results, improving the accuracy of four-rod target detection to 82.3% and thus enabling thermal imagers to see farther and respond faster and more accurately.
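Since the abstract only names the GAM attention and RepVGG modules without giving their configurations, the following PyTorch sketch illustrates what such building blocks typically look like. It is an illustrative reconstruction, not the authors' released code: the reduction ratio `r`, channel counts, and how the blocks are wired into the YOLOv5 backbone are assumptions.

```python
# Minimal sketch of a GAM attention module and a training-time RepVGG-style block.
import torch
import torch.nn as nn


class GAMAttention(nn.Module):
    """Global Attention Mechanism: channel attention followed by spatial attention."""

    def __init__(self, channels: int, r: int = 4):  # reduction ratio r is an assumption
        super().__init__()
        # Channel attention: an MLP that mixes information across channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
        )
        # Spatial attention: two 7x7 convolutions that squeeze and restore channels.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // r, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels // r),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Permute to (B, H, W, C) so the MLP operates on the channel dimension.
        attn = self.channel_mlp(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x = x * torch.sigmoid(attn)
        # Spatial attention over the channel-refined feature map.
        return x * torch.sigmoid(self.spatial(x))


class RepVGGBlock(nn.Module):
    """Training-time RepVGG block: 3x3, 1x1, and identity branches summed.

    At inference the three branches can be re-parameterized into a single 3x3 conv.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.branch3x3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.branch1x1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.identity = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.branch3x3(x) + self.branch1x1(x) + self.identity(x))


if __name__ == "__main__":
    # Smoke test on a hypothetical 64-channel, 40x40 feature map.
    feats = torch.randn(1, 64, 40, 40)
    out = RepVGGBlock(64)(GAMAttention(64)(feats))
    print(out.shape)  # torch.Size([1, 64, 40, 40])
```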
