Abstract
In extreme environments where visible-light imaging is limited, infrared imaging is often used as an auxiliary modality. However, infrared images lack detailed semantic information and have low contrast, which makes them ill-suited for direct human observation and for many practical tasks. Overcoming the significant differences between the two modalities and translating infrared videos into visible videos therefore helps make better use of infrared imagery. To this end, we propose EADS, a one-sided, end-to-end infrared-to-visible video translation framework that uses our edge-assisted generation and dual similarity loss to preserve scene structure to the greatest extent possible and to translate infrared videos into realistic, detailed, and temporally and spatially coherent visible-light videos. Experiments show that the translated videos can be used in downstream tasks such as object detection and image fusion.
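The abstract names a "dual similarity loss" for preserving scene structure but does not define it here. Below is a minimal, hypothetical sketch (in PyTorch) of what such an objective could look like: one term measures feature-level similarity between the source infrared frame and the translated frame, and another measures edge-level similarity. The function names, Sobel-based edge term, and weights are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: the paper's actual dual similarity loss is not given
# in this abstract; all names and weights below are assumptions.
import torch
import torch.nn.functional as F


def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Approximate edge magnitude with Sobel filters (img: N x 1 x H x W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def dual_similarity_loss(ir_feat: torch.Tensor, vis_feat: torch.Tensor,
                         ir_frame: torch.Tensor, vis_frame: torch.Tensor,
                         w_feat: float = 1.0, w_edge: float = 1.0) -> torch.Tensor:
    """Hypothetical combination of feature-level and edge-level similarity terms."""
    # Feature term: keep the translated frame's deep features close to the source's.
    feat_term = 1.0 - F.cosine_similarity(
        ir_feat.flatten(1), vis_feat.flatten(1), dim=1).mean()
    # Edge term: preserve the source's edge structure in the translated frame.
    vis_gray = vis_frame.mean(dim=1, keepdim=True)  # reduce RGB to one channel
    edge_term = F.l1_loss(sobel_edges(vis_gray), sobel_edges(ir_frame))
    return w_feat * feat_term + w_edge * edge_term
```

In this sketch, `ir_feat` and `vis_feat` would be feature maps from any shared encoder, and `ir_frame` / `vis_frame` are the single-channel infrared input and the translated RGB output; how the actual EADS framework extracts features and balances the two terms is not specified in the abstract.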