In non-line-of-sight (NLOS) imaging, the spatial information of hidden targets is reconstructed from the time-of-flight (TOF) of multiply bounced signal photons. The need for NLOS imagers to perform extensive scanning in the transverse spatial dimensions constrains the imaging speed and reconstruction quality and limits their application to static scenes. Utilizing photon TOF histograms with picosecond temporal resolution, we develop compressive non-line-of-sight imaging enabled by deep learning. Two-dimensional images ($32\times32$ pixels) of the NLOS targets are reconstructed with superior quality via a convolutional neural network (CNN), using significantly downscaled data ($8\times8$ scanning points), i.e., a downsampling ratio of $6.25\%$ compared to traditional methods. The CNN is trained end to end purely on simulated data yet remains robust when reconstructing images from experimental data. Our results suggest that deep learning is effective for reducing the number of scanning points and the total capture time, a step toward scanningless NLOS imaging and videography.
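For illustration, a minimal sketch of the kind of reconstruction network the abstract describes: a CNN that maps the downsampled measurements (an $8\times8$ grid of scanning points, each carrying a photon TOF histogram) to a $32\times32$ image of the hidden target. The number of time bins, the layer widths, and the use of transposed convolutions for the $4\times$ spatial upsampling are assumptions for this sketch, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class CompressiveNLOSNet(nn.Module):
    """Sketch: TOF histograms at 8x8 scan points -> 32x32 target image.

    All architectural choices below (time_bins=512, channel widths,
    transposed-convolution upsampling) are illustrative assumptions.
    """

    def __init__(self, time_bins: int = 512):
        super().__init__()
        # Treat the TOF histogram bins at each scan point as input channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(time_bins, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Upsample the 8x8 feature map to 32x32 via two 2x transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # single-channel reflectance image
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_bins, 8, 8) histograms -> (batch, 1, 32, 32) image
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    net = CompressiveNLOSNet(time_bins=512)
    histograms = torch.rand(4, 512, 8, 8)  # placeholder for simulated downsampled TOF data
    image = net(histograms)
    print(image.shape)  # torch.Size([4, 1, 32, 32])
```

In this sketch the network would be trained end to end on simulated histogram/image pairs (e.g., with a pixel-wise loss), mirroring the abstract's simulation-only training strategy.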