Phase retrieval has attracted researchers for many years owing to its wide range of applications. It aims to recover an unknown signal from phase-free measurements. Classical alternating projection algorithms have the significant advantages of simplicity and few tuning parameters. However, they suffer from non-convexity and often get stuck in local minima in the presence of noise. In this work, we develop an efficient hybrid model-based and data-driven approach to solve the phase retrieval problem with deep priors. To effectively utilize inherent image priors, we propose a deep non-iterative (unfolded) network based on the classic hybrid input-output (HIO) method, referred to as HIONet, which adaptively learns inherent priors from the ground-truth data distribution. Specifically, we replace the projection operator with a trainable deep network, so that learning the parameterized function in a supervised manner is equivalent to learning prior knowledge from the ground-truth data distribution. In turn, the deep priors learned during training guide the unfolded network toward the optimal solution of the phase retrieval problem. In our pipeline, deep priors are integrated with the physics-based image formation algorithm, so the proposed HIONet benefits from the representational capability of deep networks as well as the interpretability and versatility of traditional well-established algorithms. Moreover, inspired by the idea that compounding and aggregating diverse representations enables more accurate inference, an enhanced version with cross-block feature fusion, referred to as HIONet+, is designed to further improve reconstruction. Extensive experimental results on noisy phase-free measurements show that the developed methods outperform competing methods in quantitative metrics such as PSNR and SSIM, as well as in visual quality, at all noise levels.
In addition, experiments on non-oversampled sparse phase retrieval consistently demonstrate that our methods outperform the compared approaches.
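For concreteness, the classic HIO baseline that HIONet unrolls can be sketched as follows. This is a minimal NumPy illustration of Fienup's hybrid input-output iteration with a support constraint; the function names, the feedback parameter `beta=0.9`, and the random initialization are illustrative choices, not details taken from the paper. HIONet replaces the hard spatial-domain projection in `hio_step` with a trainable network:

```python
import numpy as np

def hio_step(x, magnitudes, support, beta=0.9):
    """One hybrid input-output iteration (toy sketch)."""
    # Fourier-domain projection: keep the current phases,
    # enforce the measured magnitudes.
    F = np.fft.fft2(x)
    F = magnitudes * np.exp(1j * np.angle(F))
    x_prime = np.real(np.fft.ifft2(F))
    # Spatial-domain update: accept x_prime inside the support,
    # apply the HIO feedback rule elsewhere. HIONet swaps this
    # hand-crafted projection for a learned operator.
    return np.where(support, x_prime, x - beta * x_prime)

def hio(magnitudes, support, n_iter=300, beta=0.9, seed=0):
    """Run HIO from a random start (illustrative driver)."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitudes.shape)
    for _ in range(n_iter):
        x = hio_step(x, magnitudes, support, beta)
    return x
```

Unrolling a fixed number of such steps, each with its own learned projection, yields the feed-forward (non-iterative) network described above.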