Abstract
As random noise in high-frequency data can interfere with feature learning in deep networks, low-pass filtering or wavelet transforms have been integrated with deep networks to exclude the high-frequency component of the input image. However, useful image details such as contours and textures are also lost in this process. In this paper, we propose the Dual-branch interactive Cross-frequency attention Network (DiCaN), which separately processes the low-frequency and high-frequency components of the input image so that useful information is extracted from the high-frequency data and included in deep learning. DiCaN first decomposes the input image into low-frequency and high-frequency components using wavelet decomposition, then applies two parallel residual-style branches to extract features from the two components. We further design an interactive cross-frequency attention mechanism that highlights useful information in the high-frequency data and interactively fuses it with the features in the low-frequency branch. The features learned by our framework are applied to both image classification and object detection, evaluated on the ImageNet-1K and COCO datasets. The results show that DiCaN achieves better classification performance than various ResNet variants. Both one-stage and two-stage detectors with a DiCaN backbone also achieve better detection performance than those with a ResNet backbone. The code of DiCaN will be released.
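The wavelet decomposition step described above can be sketched as follows. This is a minimal illustration assuming a single-level 2D Haar transform (the abstract does not specify the wavelet family or the deep-learning framework), implemented directly in NumPy: the approximation sub-band `ll` would feed the low-frequency branch, while the three detail sub-bands would feed the high-frequency branch.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar wavelet decomposition (illustrative sketch).

    Returns the low-frequency approximation LL and the three
    high-frequency detail sub-bands (LH, HL, HH).
    """
    # Pairwise averages (low-pass) and differences (high-pass) along rows.
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    # Repeat along columns to obtain the four sub-bands.
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-frequency approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal details
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical details
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal details
    return ll, (lh, hl, hh)

# Hypothetical input: a single-channel 224x224 image.
image = np.random.rand(224, 224).astype(np.float64)
ll, (lh, hl, hh) = haar_dwt2(image)

# The low-frequency branch would consume `ll` (112x112); the
# high-frequency branch could stack the detail sub-bands as channels.
high_freq = np.stack([lh, hl, hh], axis=0)  # shape (3, 112, 112)
```

Each sub-band has half the spatial resolution of the input, and the decomposition is invertible, so no information is discarded before the two branches process their respective components.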