Abstract

In this paper, we propose a target detection algorithm based on adversarial discriminative domain adaptation for infrared and visible image fusion, using an unsupervised learning method to reduce the differences between multimodal image information. First, we improve a fusion model based on the generative adversarial network, using a dual-discriminator generative adversarial network to generate high-quality infrared-visible fused images. We then combine the infrared and visible images into a triplet dataset and perform transfer learning with a triplet loss function. Finally, the fused images serve as input to the Faster R-CNN object detection algorithm, which we improve with a new non-maximum suppression algorithm that further raises detection accuracy. Experiments show that the method achieves mutual complementation of multimodal feature information, compensates for the lack of information in single-modal scenes, and delivers good detection results for both modalities (infrared and visible light).
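The triplet-based transfer learning step can be illustrated with a short sketch. This is a minimal illustration only, assuming a small hypothetical EmbeddingNet that maps single-channel patches from either modality into a shared feature space; the paper's actual network, margin, and sampling scheme are not reproduced on this page. PyTorch's built-in TripletMarginLoss implements the standard triplet objective.

```python
import torch
import torch.nn as nn

# Illustrative sketch only (not the paper's exact model): an embedding
# network maps image patches from either modality into a shared space.
class EmbeddingNet(nn.Module):  # hypothetical name, for illustration
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        # L2-normalise so distances are comparable across modalities
        return nn.functional.normalize(self.backbone(x), dim=1)

net = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=1.0)  # margin is an assumption

# anchor: a patch from one scene; positive: the same scene in the other
# modality; negative: a different scene (the triplet dataset above)
anchor, positive, negative = (torch.randn(8, 1, 64, 64) for _ in range(3))
loss = triplet_loss(net(anchor), net(positive), net(negative))
loss.backward()
```

Minimising this loss pulls cross-modal views of the same scene together and pushes different scenes apart, which is the sense in which a triplet objective reduces the modal gap.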

Highlights

  • With the rapid development of deep learning, target detection in computer vision has made great progress

  • In response to the above problems, this paper starts from the perspective of adversarial discriminative domain adaptation [4], uses an unsupervised learning method to reduce the modal difference between bimodal images, and proposes a modal-information fusion detection algorithm based on a generative adversarial network

  • In the improved generative adversarial network, the generator is designed with local detail features and global semantic features to extract source-image details and semantic information, and a perceptual loss is added to the discriminator to keep the data distribution of the fused image consistent with the source images and improve fusion accuracy. The fused features enter the region-of-interest pooling network for coarse classification, the generated candidate boxes are mapped onto the feature map, and target classification and localization are completed through the fully connected layer (see the pooling sketch after this list)
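The pooling step in the last highlight can be sketched with torchvision's stock roi_align, which maps candidate boxes onto a feature map and pools each to a fixed size, as in Faster R-CNN. The tensors and the assumed 400x400 input image below are toy values for illustration; the paper's exact pooling settings are not given on this page.

```python
import torch
from torchvision.ops import roi_align

features = torch.randn(1, 256, 50, 50)        # backbone feature map (toy)
# candidate boxes in (x1, y1, x2, y2) image coordinates, all on image 0
boxes = [torch.tensor([[40., 40., 200., 160.],
                       [10., 80., 120., 300.]])]
pooled = roi_align(features, boxes, output_size=(7, 7),
                   spatial_scale=50 / 400)     # feature-map / image ratio
print(pooled.shape)                            # torch.Size([2, 256, 7, 7])
```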

Summary

Introduction

With the rapid development of deep learning, target detection in computer vision has made great progress. With the successful application of deep convolutional neural networks to target detection tasks, scholars have produced many excellent results in multimodal research. The author uses a convolutional neural network to fuse the two modalities' information and discusses the impact of different fusion stages on the target detection results [1].

In response to the above problems, this paper starts from the perspective of adversarial discriminative domain adaptation [4], uses an unsupervised learning method to reduce the modal difference between bimodal images, and proposes a modal-information fusion detection algorithm based on a generative adversarial network. In the improved generative adversarial network, the generator is designed with local detail features and global semantic features to extract source-image details and semantic information, and a perceptual loss is added to the discriminator to keep the data distribution of the fused image consistent with the source images and improve fusion accuracy. The fused features enter the region-of-interest pooling network for coarse classification, the generated candidate boxes are mapped onto the feature map, and target classification and localization are completed through the fully connected layer.
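The perceptual loss mentioned above can be sketched briefly. The following is a minimal sketch under common assumptions, not the paper's exact formulation: a frozen, pretrained VGG16 supplies the feature space (a typical choice for perceptual losses), while the comparison layer and the L2 distance are assumptions, and input normalisation is omitted for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Sketch: compare fused and source images in the feature space of a
# frozen, pretrained VGG16 (layer choice and weighting are assumptions).
class PerceptualLoss(nn.Module):
    def __init__(self, layer=16):  # features[:16] ends at relu3_3
        super().__init__()
        self.features = vgg16(weights="DEFAULT").features[:layer].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)  # the VGG extractor stays fixed

    def forward(self, fused, source):
        return nn.functional.mse_loss(self.features(fused),
                                      self.features(source))

# Usage: penalise the fused image for drifting from each source image in
# feature space, alongside the usual adversarial terms.
ploss = PerceptualLoss()
fused = torch.rand(2, 3, 224, 224)  # toy tensors in place of real batches
visible, infrared = torch.rand_like(fused), torch.rand_like(fused)
loss = ploss(fused, visible) + ploss(fused, infrared)
```

Because the loss is computed in feature space rather than pixel space, it encourages the fused image to match the source images' semantic content and texture statistics, which is the stated goal of keeping their data distributions consistent.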

Algorithm Structure
Target Detection Model
Conclusion
