Hyperspectral imaging provides a finer characterization of material properties than conventional imaging systems. In practice, however, an optical system operating at video rates can only acquire data with high spatial resolution but low spectral resolution, or vice versa. As a result, the problem of fusing low-resolution hyperspectral (LRHS) and high-resolution multispectral (HRMS) images has attracted great attention. However, most fusion approaches depend purely on hand-crafted regularizers or data-driven priors, leading to tricky parameter selection or poor interpretability, respectively. In this work, a subspace-based deep prior regularization (SDPR) is proposed to tackle these problems, taking both hand-crafted regularizers and data-driven priors into account. Specifically, we leverage the spectral correlation of the images and transfer them from the original space to a subspace domain, within which a modified U-net-based deep prior learning network (SDPL-net) is designed for the fusion task. Moreover, instead of taking the output of SDPL-net directly as the result, we feed the output back into a model-based optimization. Under such prior regularization, the recovered high-resolution hyperspectral (HRHS) image remains highly consistent with its inherent structure and hence tends to exhibit enhanced reliability and accuracy. Experimental results on simulated and real data reveal that the proposed method outperforms other state-of-the-art (SOTA) methods in both quantitative metrics and qualitative comparisons.
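The subspace step exploits the fact that hyperspectral pixels are highly correlated across bands, so the cube can be represented by a low-rank spectral basis. The sketch below illustrates this idea with a truncated-SVD spectral subspace; the function name, shapes, and rank are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def spectral_subspace(hsi, k):
    """Project a (H, W, B) hyperspectral cube onto a rank-k spectral subspace.

    Illustrative sketch: returns an orthonormal basis E of shape (B, k) and
    coefficient maps of shape (H, W, k) such that hsi ~= coeffs @ E.T along
    the spectral axis. Rank k and shapes are assumptions for demonstration.
    """
    h, w, b = hsi.shape
    X = hsi.reshape(-1, b)            # pixels as rows, bands as columns
    # Eigenvectors of the (B, B) spectral Gram matrix give an orthonormal
    # spectral basis; cheap because the band count B is small.
    U, _, _ = np.linalg.svd(X.T @ X)
    E = U[:, :k]                      # top-k spectral basis vectors
    coeffs = (X @ E).reshape(h, w, k) # subspace coefficient maps
    return E, coeffs

# Toy usage: a cube with spectral rank 3 is represented exactly in a
# rank-3 subspace (relative reconstruction error near machine precision).
rng = np.random.default_rng(0)
cube = rng.standard_normal((16, 16, 3)) @ rng.standard_normal((3, 31))
E, coeffs = spectral_subspace(cube, k=3)
recon = coeffs.reshape(-1, 3) @ E.T
rel_err = np.linalg.norm(recon - cube.reshape(-1, 31)) / np.linalg.norm(cube)
```

In a fusion pipeline of this kind, the learned network then only has to estimate the low-dimensional coefficient maps rather than the full-band HRHS cube, which reduces the degrees of freedom the prior must cover.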