Abstract
With the rapid development of image processing technology, images have become increasingly easy to manipulate, posing a threat to personal and societal security. Recent methods fuse RGB and noise features to uncover tampering traces; however, they overlook the characteristics of features at different levels, leading to insufficient feature fusion. To address this problem, this paper proposes a double-stream multilevel feature fusion network (DMFF-Net). Unlike traditional feature fusion approaches, DMFF-Net adopts a graded fusion strategy: it classifies features into primary, intermediate, and advanced levels and introduces a Primary Feature Fusion Module (PFFM) and an Advanced Feature Fusion Module (AFFM) to achieve superior fusion results. Additionally, a multisupervision strategy decodes the fused features into level-specific masks, namely boundary, regular, and refined masks. DMFF-Net is validated on the publicly available CASIA, Columbia, COVERAGE, and NIST16 datasets, as well as the real-life manipulated image dataset IMD20, achieving AUCs of 84.7%, 99.6%, 86.6%, 87.4%, and 82.8%, respectively. Extensive experiments show that DMFF-Net outperforms state-of-the-art methods in image manipulation localization accuracy and exhibits improved robustness.
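To make the overall architecture described above more concrete, the following is a minimal sketch of the double-stream, level-wise fusion idea with multisupervision heads. The module names (PFFM, AFFM) and the three mask types come from the abstract, but every internal detail here (channel widths, the plain convolutional blocks standing in for the fusion modules, the noise stream implemented as ordinary convolutions, and the omitted upsampling in the decoder) is an illustrative assumption, not the authors' actual design.

```python
# Illustrative sketch only: all layer choices are assumptions, not DMFF-Net's real design.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """3x3 conv + BN + ReLU, used as a generic building block."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class DoubleStreamMultilevelNet(nn.Module):
    """Toy stand-in for DMFF-Net: an RGB stream and a noise stream, each
    yielding primary / intermediate / advanced features that are fused
    level by level and decoded into boundary, regular, and refined masks."""
    def __init__(self):
        super().__init__()
        # Two parallel encoders. A real noise stream would typically start from
        # noise residuals (e.g., high-pass filtering); plain convs are used here.
        self.rgb_enc = nn.ModuleList(
            [ConvBlock(3, 32, 2), ConvBlock(32, 64, 2), ConvBlock(64, 128, 2)])
        self.noise_enc = nn.ModuleList(
            [ConvBlock(3, 32, 2), ConvBlock(32, 64, 2), ConvBlock(64, 128, 2)])
        # Level-wise fusion blocks: placeholders for PFFM (primary level)
        # and AFFM (advanced level).
        self.fuse = nn.ModuleList(
            [ConvBlock(64, 32), ConvBlock(128, 64), ConvBlock(256, 128)])
        # Multisupervision heads: one mask prediction per feature level.
        self.boundary_head = nn.Conv2d(32, 1, 1)
        self.regular_head = nn.Conv2d(64, 1, 1)
        self.refined_head = nn.Conv2d(128, 1, 1)

    def forward(self, x):
        r, n, fused = x, x, []
        for rgb_layer, noise_layer, fuse_layer in zip(self.rgb_enc, self.noise_enc, self.fuse):
            r, n = rgb_layer(r), noise_layer(n)
            fused.append(fuse_layer(torch.cat([r, n], dim=1)))  # fuse streams at this level
        # Decode each fused level into its mask (upsampling omitted for brevity).
        return (self.boundary_head(fused[0]),
                self.regular_head(fused[1]),
                self.refined_head(fused[2]))


if __name__ == "__main__":
    model = DoubleStreamMultilevelNet()
    masks = model(torch.randn(1, 3, 256, 256))
    print([m.shape for m in masks])  # three masks at successively coarser resolutions
```

The key design point the sketch illustrates is that fusion happens separately at each feature level rather than once at the end, and each fused level receives its own supervision signal (boundary, regular, or refined mask).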