Lensless imaging, a novel computational imaging technique, has attracted great attention for its simplicity, compactness, and flexibility. It recovers an object's complex amplitude information by analyzing and processing the object's recorded diffraction pattern. However, traditional algorithms such as the Gerchberg–Saxton (G–S) algorithm tend to exhibit significant errors in complex amplitude retrieval, particularly for edge information, and additional constraints must be imposed on top of the amplitude constraints to improve accuracy. Recently, deep learning has shown promising results in optical imaging, but it typically requires large amounts of training data. To address these issues, a dual-input physics-driven network (DPNN) is proposed for lensless imaging. DPNN takes two diffraction patterns recorded at different distances as inputs and reconstructs the object information in an unsupervised manner by embedding the physical imaging model in the loss. DPNN adopts a U-Net 3+ architecture with a mean absolute error (MAE) loss to better capture diffraction features. It achieves highly accurate reconstruction without requiring extensive training data and is robust to background noise. Across different diffraction intervals, noise levels, and imaging models, DPNN outperforms conventional methods in peak signal-to-noise ratio and structural similarity, accurately reconstructing phase or amplitude information.
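The G–S baseline mentioned above can be sketched as follows. This is a minimal two-plane formulation with Fourier-transform propagation; the function name, iteration count, and constraint details are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gerchberg_saxton(measured_amp, source_amp, n_iter=100):
    """Classic two-plane G-S iteration: propagate back and forth with FFTs,
    enforcing the known amplitude in each plane while keeping the phase."""
    phase = np.zeros_like(source_amp)
    for _ in range(n_iter):
        # Forward propagation to the measurement plane.
        far = np.fft.fft2(source_amp * np.exp(1j * phase))
        # Amplitude constraint: replace with the measured diffraction amplitude.
        far = measured_amp * np.exp(1j * np.angle(far))
        # Back propagation to the object plane; keep only the retrieved phase.
        near = np.fft.ifft2(far)
        phase = np.angle(near)
    return phase
```

Because the only constraints are the two amplitudes, iterations of this kind can stagnate and blur sharp features, which is consistent with the edge errors noted above.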
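The unsupervised, physics-driven objective described above can be sketched in NumPy, assuming angular-spectrum propagation as the imaging model: the network's predicted object field is numerically propagated to the two recording distances and compared with the two measured diffraction amplitudes under an MAE loss. All names, the wavelength, and the geometry here are illustrative, not the paper's implementation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, z):
    """Propagate a complex field over distance z with the angular spectrum method."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pixel_size)
    fy = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function exp(i*z*kz); evanescent components are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * z * kz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def physics_loss(pred_field, measured1, measured2, wavelength, dx, z1, z2):
    """MAE between the two measured diffraction amplitudes and those simulated
    from the predicted object field -- no ground-truth object is needed."""
    sim1 = np.abs(angular_spectrum_propagate(pred_field, wavelength, dx, z1))
    sim2 = np.abs(angular_spectrum_propagate(pred_field, wavelength, dx, z2))
    return np.mean(np.abs(sim1 - measured1)) + np.mean(np.abs(sim2 - measured2))
```

Using two recording distances constrains the reconstruction more tightly than a single diffraction pattern, which is what allows the network to be trained without labeled data.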