Abstract

With the help of convolutional neural networks (CNNs), deep learning-based methods have achieved remarkable performance on the face super-resolution (FSR) task. Despite their success, most existing methods neglect the non-local correlations of face images, leaving much room for improvement. In this paper, we introduce a novel end-to-end trainable attention-driven graph neural network (AD-GNN) for more discriminative feature extraction and feature relation modeling. This is achieved by two major components. The first component is a cross-scale dynamic graph (CDG) block. The CDG block considers cross-scale relationships of patches in distant areas and employs two dynamic graphs to construct enhanced features. The second component is a series of channel attention and spatial dynamic graph (CASDG) blocks. A CASDG block consists of a channel-wise attention unit and a spatial-aware dynamic graph (SDG) unit. The SDG unit extracts informative features by exploring spatial non-local self-similarity among patches using dynamic graph convolution. With these two components, facial details can be effectively reconstructed using information supplied by similar but spatially distant patches together with the structural information of faces. Extensive experiments on two public benchmarks demonstrate the superiority of AD-GNN over state-of-the-art FSR methods.
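To make the CASDG idea more concrete, the sketch below pairs a standard squeeze-and-excitation style channel attention unit with a simple spatial dynamic graph aggregation, in which every spatial location is treated as a graph node, a k-nearest-neighbour graph is rebuilt from feature similarity at each forward pass, and each node fuses the mean of its neighbours' features through a residual 1x1 convolution. This is only an illustrative assumption of how such a block could look: the class names (ChannelAttention, SpatialDynamicGraph), the hyper-parameters (reduction=16, k=8), the mean aggregation, and the residual fusion are all hypothetical choices and not taken from the paper.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel-wise attention (standard formulation,
    used here as a stand-in for the channel attention unit)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-weight each channel by a global descriptor of the feature map.
        return x * self.fc(self.pool(x))


class SpatialDynamicGraph(nn.Module):
    """Hypothetical spatial-aware dynamic graph aggregation: each spatial
    location is a node, the k-NN graph is recomputed from feature distances
    on every forward pass, and neighbour features are averaged and fused back."""

    def __init__(self, channels: int, k: int = 8):
        super().__init__()
        self.k = k
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        nodes = x.flatten(2).transpose(1, 2)                      # (B, HW, C)
        dist = torch.cdist(nodes, nodes)                          # pairwise feature distances
        idx = dist.topk(self.k + 1, largest=False).indices[:, :, 1:]  # k nearest, skip self
        gathered = torch.gather(
            nodes.unsqueeze(1).expand(-1, h * w, -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, c))              # (B, HW, k, C)
        agg = gathered.mean(dim=2)                                # aggregate neighbour features
        out = torch.cat([nodes, agg], dim=-1)                     # node + neighbourhood context
        out = out.transpose(1, 2).reshape(b, 2 * c, h, w)
        return x + self.proj(out)                                 # residual fusion


# Usage sketch: chain the two units as one CASDG-like block.
if __name__ == "__main__":
    block = nn.Sequential(ChannelAttention(64), SpatialDynamicGraph(64))
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

The dense pairwise distance computation is quadratic in the number of spatial locations, so a practical implementation would typically operate on downsampled feature maps or patch-level nodes rather than every pixel.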
