Abstract

Distributed acoustic sensing (DAS) presents both challenges and opportunities for seismological research and data management. This study explores wavefield reconstruction using deep learning methods for data compression and wavefield separation. We test several architectures that treat DAS data as two-dimensional arrays, including implicit neural representation (INR) models and the SHallow REcurrent Decoder (SHRED) model. The INR models achieve better data compression but do not generalize over space and time, a major practical limitation. SHRED, in contrast, generalizes over space and time for a single optical fiber, reconstructing the full wavefield from data on only 20% of the (spatially decimated) channels. Although it performs well in reconstructing long-wavelength features, the shallow recurrent decoder does not recover transient earthquake wavefields at shorter wavelengths, which limits its usability for seismic data transmission. Nevertheless, we leverage the reconstruction of the ocean-wave wavefield to separate it from the seismic wavefield, improving seismological use cases such as earthquake detection and Earth imaging. In summary, as a lightweight deep learning model, SHRED is well suited for wavefield separation and lossy compression of DAS data.
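For readers unfamiliar with the SHRED approach, the listing below is a minimal Python (PyTorch) sketch of the general idea rather than the authors' exact configuration: a recurrent encoder ingests a trailing time window of measurements from the retained subset of fiber channels, and a shallow fully connected decoder maps the final hidden state to all channels at the current time step. The layer sizes, lag length, and channel counts are illustrative assumptions.

import torch
import torch.nn as nn

class SHRED(nn.Module):
    """Shallow recurrent decoder: sparse-channel time series -> full-channel snapshot.
    Illustrative sketch only; hyperparameters are placeholder assumptions."""
    def __init__(self, n_sensors, n_channels, hidden_size=64):
        super().__init__()
        # Recurrent encoder over a trailing window of measurements
        # from the retained (e.g., 20%) fiber channels.
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        # Shallow fully connected decoder mapping the final hidden state
        # to all fiber channels at the current time step.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, n_channels),
        )

    def forward(self, x):                 # x: (batch, lags, n_sensors)
        _, (h, _) = self.lstm(x)          # h: (num_layers, batch, hidden_size)
        return self.decoder(h[-1])        # (batch, n_channels)

# Usage sketch: reconstruct 1,000 DAS channels from 200 retained channels (20%),
# given a 50-sample trailing window per retained channel.
model = SHRED(n_sensors=200, n_channels=1000)
window = torch.randn(8, 50, 200)          # batch of 8 measurement windows
snapshot = model(window)                  # shape: (8, 1000)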