Abstract
In recent years, patch-based face hallucination algorithms have attracted considerable interest due to their effectiveness. These approaches produce a high-resolution (HR) face image from the corresponding low-resolution (LR) input by learning a reconstruction model from a given training image set. The critical problem in these algorithms is establishing the underlying relationship between LR and HR patch pairs. Most previous methods represent each input LR patch as a linear combination of the training patches in the LR space and then use the same combination weights to reconstruct the target HR patch. However, this assumes that identical combination weights are shared across resolution spaces, an assumption that is difficult to satisfy because of the one-to-many mapping between LR and HR patches. In this paper, we instead directly train a series of adaptive kernel regression mappings that predict the lost high-frequency information from the LR patch, which avoids this difficult problem altogether. During training, we first establish a local optimization function for each LR/HR training pair according to the geometric structure of neighboring patches. The local objective serves two purposes: 1) to ensure reconstruction consistency between each LR patch and the corresponding HR patch, and 2) to preserve the intrinsic geometry between each HR training patch and its original neighbors after reconstruction. The local optimizations are then incorporated into a global optimization for computing the optimal kernel regression function. To better approximate the target HR patch, we further propose a recursive structure that compensates for the residual reconstruction error in the high-frequency details through a series of regression mappings. The proposed method is fast and effective in producing HR face images. Experimental results show that it achieves superior performance with reasonable computational time compared with state-of-the-art methods.
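To make the core idea concrete, the sketch below illustrates, under stated assumptions, a cascade of kernel regression mappings that predict high-frequency detail from LR patches and then recursively regress on the residual error, as the abstract describes. This is not the authors' implementation: the patch extraction, neighborhood-based local optimization, and geometry-preserving term are omitted, and the function names (e.g. `train_recursive_mappings`, `hallucinate`) and the use of scikit-learn's `KernelRidge` are illustrative assumptions.

```python
# Minimal sketch of recursive kernel regression for patch-based face hallucination.
# Assumes lr_patches and hr_patches are aligned (n_patches, n_features) arrays,
# with hr_patches holding the high-frequency detail to be predicted.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def train_recursive_mappings(lr_patches, hr_patches, n_stages=3, alpha=1e-3, gamma=0.5):
    """Fit a cascade of kernel regressors; each stage predicts the
    high-frequency residual left unexplained by the previous stages."""
    mappings = []
    prediction = np.zeros_like(hr_patches)
    residual = hr_patches.copy()              # stage 0 targets the full HR detail
    for _ in range(n_stages):
        reg = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)
        reg.fit(lr_patches, residual)          # map LR patch -> remaining detail
        prediction += reg.predict(lr_patches)
        residual = hr_patches - prediction     # residual reconstruction error
        mappings.append(reg)
    return mappings

def hallucinate(lr_patches, mappings):
    """Apply the learned cascade to new LR patches and sum the stage outputs."""
    return sum(m.predict(lr_patches) for m in mappings)
```

In this simplified form, each stage corrects the errors of the ones before it, which mirrors the recursive residual-compensation structure described in the abstract; the paper's local/global optimization would replace the plain kernel ridge fit used here.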