Recently, deep convolutional neural networks have been applied to image compressive sensing (CS) to improve reconstruction quality while reducing computational cost. Existing deep learning-based CS methods fall into two classes: those that sample the image at a single scale and those that sample it across multiple scales. However, these methods treat the image's low-frequency and high-frequency components equally, which hinders high reconstruction quality. This paper proposes an adaptive multi-scale image CS network in the wavelet domain, called AMS-Net, which fully exploits the different importance of the low-frequency and high-frequency components. First, the discrete wavelet transform decomposes an image into four sub-bands: the low-low (LL), low-high (LH), high-low (HL), and high-high (HH) sub-bands. Since the LL sub-band contributes most to the final reconstruction quality, AMS-Net allocates it a larger sampling ratio and the other three sub-bands a smaller one. Because different blocks within each sub-band have different sparsity, the sampling ratio is further allocated block by block within the four sub-bands. A dual-channel scalable sampling model is then developed to adaptively sample the LL sub-band and the other three sub-bands at arbitrary sampling ratios. Finally, by unfolding the iterative reconstruction process of the traditional multi-scale block CS algorithm, we construct a multi-stage reconstruction model that exploits multi-scale features to further improve reconstruction quality. Experimental results demonstrate that the proposed model outperforms both traditional and state-of-the-art deep learning-based methods.
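The two ideas at the core of the abstract, wavelet decomposition into LL/LH/HL/HH sub-bands and block-wise allocation of the sampling budget by sparsity, can be sketched as follows. This is a minimal illustration, not the paper's method: it uses an unnormalized single-level Haar transform, and the energy-proportional block allocation rule (`block_sampling_ratios`) is a hypothetical stand-in, since the abstract does not specify AMS-Net's exact allocation scheme.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar-style wavelet decomposition (averaging variant).

    Splits an image with even height and width into the four sub-bands
    named in the abstract: LL (coarse approximation), LH, HL, HH (details).
    """
    a = (x[0::2] + x[1::2]) / 2.0          # vertical average
    d = (x[0::2] - x[1::2]) / 2.0          # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-low
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0   # low-high
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0   # high-low
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0   # high-high
    return LL, LH, HL, HH

def block_sampling_ratios(subband, block=4, budget=0.1):
    """Hypothetical block-wise allocation of a sub-band's sampling budget.

    Splits the average sampling ratio `budget` across non-overlapping
    `block`-by-`block` blocks in proportion to each block's energy, as a
    rough proxy for its (non-)sparsity. Assumes dimensions divide evenly.
    """
    h, w = subband.shape
    blocks = subband.reshape(h // block, block, w // block, block)
    energy = (blocks ** 2).sum(axis=(1, 3))  # one energy value per block
    total = energy.sum()
    if total == 0:
        return np.full(energy.shape, budget)  # flat image: uniform ratios
    # Scale so the mean block ratio equals the overall budget.
    return budget * energy.size * energy / total
```

In this sketch, a larger overall `budget` would be passed for the LL sub-band than for LH/HL/HH, mirroring the abstract's claim that the low-frequency component deserves a larger sampling ratio.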