On rainy days, the unpredictable shape and distribution of rain streaks can blur and distort the images captured by RGB image-based measurement tools. Thanks to the wavelet transform's ability to provide both spatial- and frequency-domain information about an image, as well as its multidirectional and multiscale nature, it is widely used in traditional image enhancement methods. In image deraining, the distribution of rain streaks is related not only to spatial-domain features but also closely to frequency-domain features. However, deep learning-based rain removal models rely mainly on the spatial features of the image, and RGB data can hardly distinguish rain marks from image details, which leads to the loss of crucial image information during rain removal. To address this limitation, we develop a lightweight single-image rain removal model called the deep wavelet transform network (DWTN). This method separates image details from rain streaks in rainy images and thus removes rain marks more effectively. The proposed DWTN makes three main contributions. First, DWTN takes the feature components produced by the wavelet transform as the input to the model and assigns a separate frequency-aware enhancement block (FAEB) to each component. These blocks focus on the specific frequency features that benefit the rain removal task. Second, we introduce a frequency feature fusion block (FFFB) that fuses the different wavelet components through a channel attention mechanism to reduce noise and enhance the image background while attenuating rain streaks. Finally, we design a spatial feature enhancement block (SFEB), which uses a spatial attention mechanism to calibrate the spatial positions of features and further improve rain removal performance. We evaluate DWTN using PSNR and SSIM on four synthetic datasets and NIQE and BRISQUE on two real datasets. The results on these six datasets and four performance metrics show that the proposed DWTN outperforms existing methods.
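To make the described pipeline concrete, the following is a minimal PyTorch sketch of how a wavelet-domain deraining network of this kind could be organized. The block names FAEB, FFFB, and SFEB follow the abstract, but everything inside them, the single-level Haar wavelet, the convolution widths, the SE-style channel attention, and the pooled-statistics spatial attention, is an illustrative assumption rather than the authors' exact design.

```python
import torch
import torch.nn as nn


def haar_dwt(x):
    """Single-level 2D Haar DWT; returns (LL, LH, HL, HH), each at half resolution."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (-a - b + c + d) / 2
    hl = (-a + b - c + d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh


def haar_idwt(ll, lh, hl, hh):
    """Exact inverse of haar_dwt: reassembles the full-resolution image."""
    a = (ll - lh - hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll + lh + hl + hh) / 2
    out = torch.zeros(ll.size(0), ll.size(1), ll.size(2) * 2, ll.size(3) * 2,
                      device=ll.device, dtype=ll.dtype)
    out[:, :, 0::2, 0::2] = a
    out[:, :, 0::2, 1::2] = b
    out[:, :, 1::2, 0::2] = c
    out[:, :, 1::2, 1::2] = d
    return out


class FAEB(nn.Module):
    """Frequency-aware enhancement block: a small conv stack per wavelet sub-band (assumed structure)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class FFFB(nn.Module):
    """Frequency feature fusion block: concatenates sub-band features and reweights channels (SE-style attention, assumed)."""
    def __init__(self, ch=32):
        super().__init__()
        self.fuse = nn.Conv2d(4 * ch, ch, 1)
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, feats):
        f = self.fuse(torch.cat(feats, dim=1))
        return f * self.att(f)


class SFEB(nn.Module):
    """Spatial feature enhancement block: spatial attention from pooled channel statistics (assumed structure)."""
    def __init__(self, ch=32):
        super().__init__()
        self.att = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        self.out = nn.Conv2d(ch, 12, 3, padding=1)  # predicts 4 sub-bands x 3 channels

    def forward(self, f):
        mask = self.att(torch.cat([f.mean(1, keepdim=True),
                                   f.amax(1, keepdim=True)], dim=1))
        return self.out(f * mask)


class DWTN(nn.Module):
    """End-to-end sketch: DWT -> per-band FAEB -> FFFB -> SFEB -> inverse DWT."""
    def __init__(self, ch=32):
        super().__init__()
        self.faebs = nn.ModuleList([FAEB(ch) for _ in range(4)])
        self.fffb = FFFB(ch)
        self.sfeb = SFEB(ch)

    def forward(self, rainy):
        bands = haar_dwt(rainy)                                 # LL, LH, HL, HH sub-bands
        feats = [blk(b) for blk, b in zip(self.faebs, bands)]   # one FAEB per sub-band
        fused = self.fffb(feats)                                # channel-attentive fusion
        ll, lh, hl, hh = torch.chunk(self.sfeb(fused), 4, dim=1)
        return haar_idwt(ll, lh, hl, hh)                        # derained RGB image


# Usage: derained = DWTN()(torch.rand(1, 3, 128, 128))
```

The sketch only illustrates the data flow implied by the abstract: the network operates on wavelet components rather than raw RGB pixels, processes each frequency band independently before fusion, and applies channel and spatial attention before reconstructing the derained image with the inverse transform.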