Abstract
Recently, learning-based multi-exposure fusion (MEF) methods have achieved significant improvements. However, these methods mainly focus on static scenes and are prone to generating ghosting artifacts when tackling a more common scenario, i.e., input images that contain motion, due to the lack of a benchmark dataset and solution for dynamic scenes. In this paper, we fill this gap by creating an MEF dataset of dynamic scenes, which contains multi-exposure image sequences and their corresponding high-quality reference images. To construct such a dataset, we propose a 'static-for-dynamic' strategy to obtain multi-exposure sequences with motion and their corresponding reference images. To the best of our knowledge, this is the first MEF dataset of dynamic scenes. Correspondingly, we propose a deep dynamic MEF (DDMEF) framework to reconstruct a ghost-free, high-quality image from only two differently exposed images of a dynamic scene. DDMEF consists of two steps: pre-enhancement-based alignment and privilege-information-guided fusion. The former pre-enhances the input images before alignment, which helps to address misalignments caused by the significant exposure difference. The latter introduces a privilege distillation scheme with an information attention transfer loss, which effectively improves the deghosting ability of the fusion network. Extensive qualitative and quantitative experimental results show that the proposed method outperforms state-of-the-art dynamic MEF methods. The source code and dataset are released at https://github.com/Tx000/Deep_dynamicMEF.