Abstract

Face hallucination methods based on the low-resolution (LR) and high-resolution (HR) dictionary-pair scheme infer HR patches by directly reusing the coding coefficients obtained by representing LR patches over the LR dictionary. This scheme assumes that the LR and HR patch manifolds share a highly similar local geometric structure. However, recent preliminary studies argue that this manifold assumption does not hold well, so face hallucination performance inevitably suffers from the inconsistency of coding coefficients between LR and HR patches. In this paper, we are the first to observe that the coding coefficients of LR patches are more closely related to the latent coefficients of their HR counterparts when the magnifying factor is small. Building on this finding, we propose a stepwise reconstruction scheme that minimizes the risk of inconsistency in the solution space. In particular, the scheme divides the face hallucination process into multiple cascaded, incremental training-synthesis steps, each of which uses a smaller magnifying factor and the corresponding intermediate-resolution (IR) dictionaries, rather than learning over the LR and HR dictionaries alone. Moreover, to keep the sparse representation (SR) sufficiently sparse while favoring locality, we introduce a weighted l1/l2 mixed-norm minimization SR method and formulate it together with the stepwise scheme in a unified framework. Experiments on a commonly used face database demonstrate that our framework achieves state-of-the-art results.
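The sketch below is not the authors' implementation; it is a minimal illustration, under assumed placeholder dictionaries and parameters, of the two ideas the abstract describes: (1) a cascade of small-magnification steps that reuses coding coefficients across coupled lower/higher-resolution dictionaries via intermediate resolutions, and (2) a weighted l1/l2 mixed-norm sparse coding in which the l1 weights encode locality. The function names `weighted_l1_l2_coding` and `stepwise_hallucinate`, the ISTA solver, and the toy dictionary sizes are all hypothetical choices, not taken from the paper.

```python
# Hypothetical sketch of the stepwise, weighted-l1/l2 face hallucination idea.
# Dictionaries here are random placeholders; in practice they would be coupled
# LR/IR/HR patch dictionaries learned from training faces.
import numpy as np


def weighted_l1_l2_coding(y, D, lam1=0.1, lam2=0.01, n_iter=200):
    """Approximately solve
        min_a 0.5*||y - D a||^2 + lam1 * sum_i w_i*|a_i| + 0.5*lam2*||a||^2
    with ISTA, where w_i = ||y - d_i|| so that atoms closer to the input
    patch are penalized less (locality-favoring sparsity)."""
    # Locality weights: distance from the input patch to each dictionary atom.
    w = np.linalg.norm(D - y[:, None], axis=0)
    w = w / (w.max() + 1e-12)

    # Step size from the Lipschitz constant of the smooth part of the objective.
    L = np.linalg.norm(D, 2) ** 2 + lam2
    step = 1.0 / L

    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y) + lam2 * a
        z = a - step * grad
        # Per-coefficient soft-thresholding with locality-weighted thresholds.
        thr = step * lam1 * w
        a = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return a


def stepwise_hallucinate(y_lr, coupled_dicts):
    """Cascade of small-factor steps: code the current patch over the
    lower-resolution dictionary of each (D_low, D_high) pair, then reuse
    the coefficients on the higher-resolution dictionary."""
    x = y_lr
    for D_low, D_high in coupled_dicts:
        a = weighted_l1_l2_coding(x, D_low)
        x = D_high @ a  # coefficient reuse across the coupled pair
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy coupled dictionaries for two x2 steps (LR -> IR -> HR),
    # flattened patch dimensions 16 / 64 / 256, 100 atoms each.
    dicts = [(rng.standard_normal((16, 100)), rng.standard_normal((64, 100))),
             (rng.standard_normal((64, 100)), rng.standard_normal((256, 100)))]
    hr_patch = stepwise_hallucinate(rng.standard_normal(16), dicts)
    print(hr_patch.shape)  # (256,)
```

Splitting one large magnification into two smaller ones is the point of the cascade: each coding step only has to bridge a small resolution gap, which is where (per the abstract's observation) LR and HR coefficients remain most consistent.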
