Hallucinating a photo-realistic frontal face image from a low-resolution (LR) non-frontal face image benefits a range of face-related applications. However, previous efforts either super-resolve high-resolution (HR) face images from nearly frontal LR counterparts or frontalize non-frontal HR faces. For real-world face images captured in unconstrained environments, these challenges must be addressed jointly. In this paper, we develop a novel Cross-view Information Interaction and Feedback Network (CVIFNet), which handles non-frontal LR face image super-resolution (SR) and frontalization simultaneously in a unified framework and lets the two tasks interact to further improve each other's performance. Specifically, CVIFNet consists of two feedback sub-networks, one for frontal and one for profile face images. Since reliable correspondences between frontal and non-frontal face images are crucial and contribute to face hallucination in complementary ways, we design a cross-view information interaction module (CVIM) that aggregates the HR representations of the different views produced by the SR and frontalization processes to generate finer face hallucination results. In addition, because 3D rendered facial priors contain rich hierarchical features, ranging from low-level information (e.g., sharp edges and illumination) to perception-level information (e.g., identity), we design an identity-preserving consistency loss based on these priors, which ensures that the high-frequency details of the frontal hallucination result remain consistent with those of the profile face. Extensive experiments demonstrate the effectiveness and superiority of CVIFNet.
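The abstract does not give CVIM's exact formulation; as a rough conceptual illustration only, the idea of aggregating the HR feature representations of the two views could be sketched as channel-wise concatenation followed by a learned 1x1-convolution-style mixing. All names, shapes, and the mixing scheme below are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def cross_view_aggregate(feat_frontal, feat_profile, w):
    """Hypothetical sketch of cross-view feature aggregation.

    feat_frontal, feat_profile: (C, H, W) HR feature maps from the
        frontalization and SR branches (assumed shapes).
    w: (C, 2C) mixing weights, analogous to a 1x1 convolution that
        fuses the concatenated channels back to C channels.
    """
    # Stack the two views along the channel axis: (2C, H, W)
    stacked = np.concatenate([feat_frontal, feat_profile], axis=0)
    c2, h, wd = stacked.shape
    # Mix channels per spatial location, as a 1x1 conv would
    fused = w @ stacked.reshape(c2, -1)  # (C, H*W)
    return fused.reshape(-1, h, wd)     # (C, H, W)

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
frontal = rng.standard_normal((C, H, W))
profile = rng.standard_normal((C, H, W))
w = rng.standard_normal((C, 2 * C))
out = cross_view_aggregate(frontal, profile, w)
print(out.shape)  # (4, 8, 8)
```

In a real network the weights `w` would be learned jointly with both branches; here a random matrix stands in purely to show the shape bookkeeping of fusing two views into one HR representation.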