Abstract

Image matching is the task of locating, in a search image, the region that corresponds to a template selected from a template image. As a fundamental task, it is a prerequisite step for many image-processing pipelines. Because visible and infrared images are formed by different imaging principles, the two modalities differ substantially, so matching visible to infrared images is considerably harder than matching visible images alone. Moreover, owing to the characteristics of infrared imaging, many targets in infrared images lack clear outlines, which makes it difficult to distinguish targets from the background during matching. To address these issues, we propose VINet, a network for visible-infrared image matching, and design AM-Net, a feature extraction network based on Inception-v3. To strengthen feature representation, we add a parameter-free attention mechanism to AM-Net, improving its expressive power without introducing new parameters. We also add DropBlock to AM-Net to achieve stronger regularization. To accurately separate the target from the background and localize the target, we integrate target-aware graph attention into VINet and train with the CIoU loss. Because visible-infrared image matching datasets are scarce, we relabel existing datasets to obtain new training and test sets. Experimental results show that our method achieves better matching performance than other state-of-the-art methods.
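The abstract names the CIoU loss but does not spell out the formulation used for training. For reference, the sketch below implements the standard Complete-IoU loss (Zheng et al.): IoU minus a center-distance penalty and an aspect-ratio penalty, returned as 1 - CIoU. The boxes are assumed to be axis-aligned in (x1, y1, x2, y2) format; the function name and the small epsilon constants are illustrative choices, not taken from the paper.

```python
import math

def ciou_loss(box_a, box_b, eps=1e-9):
    """Complete-IoU (CIoU) loss between two boxes given as (x1, y1, x2, y2).

    Illustrative sketch of the standard CIoU loss; not the authors' code.
    Returns 1 - CIoU, so a perfect overlap yields a loss of 0.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection over union.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)

    # Squared center distance, normalized by the squared diagonal
    # of the smallest box enclosing both inputs.
    rho2 = (((ax1 + ax2) - (bx1 + bx2)) ** 2
            + ((ay1 + ay2) - (by1 + by2)) ** 2) / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term and its trade-off weight.
    v = (4 / math.pi ** 2) * (
        math.atan((bx2 - bx1) / (by2 - by1 + eps))
        - math.atan((ax2 - ax1) / (ay2 - ay1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)

    return 1 - (iou - rho2 / c2 - alpha * v)

# Example: partially overlapping boxes give a loss strictly between 0 and 2.
print(ciou_loss((0, 0, 2, 2), (1, 1, 3, 3)))
```

Compared with a plain IoU loss, the two extra penalty terms keep gradients informative even when the predicted and ground-truth boxes do not overlap, which is one common reason CIoU is chosen for localization training.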
