Abstract

Multimodal image fusion is an important area of research with a wide range of applications in computer vision. This research proposes a modification to convolutional layers that fuses two different image modalities. A novel architecture is introduced that uses adaptive fusion mechanisms to learn the optimal weighting of each modality at every convolutional layer. The proposed method is evaluated on a publicly available dataset, and the experimental results show that it outperforms state-of-the-art methods across several evaluation metrics.
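
The abstract does not specify how the adaptive fusion mechanism is parameterized, so the following is only a minimal sketch under one plausible assumption: each convolutional layer processes the two modalities separately and combines them with learned, softmax-normalized weights. The class name `AdaptiveFusionConv` and the gating scheme are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of per-layer adaptive fusion of two image modalities.
# Assumption: fusion weights are learned logits normalized with a softmax;
# the paper's actual mechanism may differ.
import torch
import torch.nn as nn


class AdaptiveFusionConv(nn.Module):
    """Convolve two modalities and fuse them with learned per-layer weights."""

    def __init__(self, in_channels: int, out_channels: int) -> None:
        super().__init__()
        # Separate convolutions for each modality (e.g. visible and infrared).
        self.conv_a = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.conv_b = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        # Unnormalized fusion logits, one per modality, trained with the network.
        self.fusion_logits = nn.Parameter(torch.zeros(2))

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        feat_a = self.conv_a(x_a)
        feat_b = self.conv_b(x_b)
        # Softmax keeps the modality weights positive and summing to one,
        # so the layer learns how much to rely on each modality.
        w = torch.softmax(self.fusion_logits, dim=0)
        return w[0] * feat_a + w[1] * feat_b


if __name__ == "__main__":
    layer = AdaptiveFusionConv(in_channels=3, out_channels=16)
    rgb = torch.randn(1, 3, 64, 64)      # e.g. visible-light image
    thermal = torch.randn(1, 3, 64, 64)  # e.g. infrared image
    fused = layer(rgb, thermal)
    print(fused.shape)  # torch.Size([1, 16, 64, 64])
```

Stacking several such layers would give the network a separate fusion weight at each depth, which is one way to realize "learning the optimal weighting of different modalities at each convolutional layer."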
