Abstract

3D face reconstruction from single-view images plays an important role in biometrics, yet it remains a long-standing and challenging problem in the wild. Traditional 3DMM-based methods regress model parameters directly, which can leave the network with insufficiently discriminative feature representations. In this paper, we propose a replay attention and data augmentation network (RADAN) for 3D dense alignment and face reconstruction. Unlike conventional attention mechanisms, our replay attention module increases the network's sensitivity to informative features by adaptively recalibrating the weight response of the attention, which reinforces the distinguishability of the learned feature representation. In this way, the network further improves the accuracy of face reconstruction and dense alignment in unconstrained environments. Moreover, to improve the generalization of the model and its ability to capture local details, we present a data augmentation strategy that preprocesses training samples by cropping and pasting image regions, generating images with richer local detail and partially occluded faces. Furthermore, we also apply replay attention to a 3D object reconstruction task to verify the generality of the mechanism. Extensive experiments on widely evaluated datasets demonstrate that our approach achieves competitive performance compared to state-of-the-art methods. Code is available at https://github.com/zhouzhiyuan1/RADANet.
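To make the recalibration idea concrete, the following is a minimal PyTorch sketch of an attention block whose channel weights are recomputed ("replayed") over the already-recalibrated response. The module name, layer sizes, and the number of replay passes are illustrative assumptions, not the authors' exact design; see the linked repository for the actual implementation.

```python
# Hypothetical sketch of a "replay" attention block: squeeze-and-excitation-style
# channel weighting, applied repeatedly so the weight response is recalibrated.
import torch
import torch.nn as nn

class ReplayAttentionSketch(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, replays: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global context per channel
        self.fc = nn.Sequential(                      # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.replays = replays                        # assumed number of replay passes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        out = x
        for _ in range(self.replays):
            w = self.fc(self.pool(out).view(b, c))    # recompute weights on recalibrated features
            out = out * w.view(b, c, 1, 1)            # reweight the channel response
        return out

# Usage: recalibrate a feature map from a CNN backbone.
feat = torch.randn(2, 64, 56, 56)
print(ReplayAttentionSketch(64)(feat).shape)          # torch.Size([2, 64, 56, 56])
```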
