In the past few years, deep learning has achieved remarkable progress in image compression. Remote sensing image compression networks aim to maximize the similarity between the input and reconstructed images, effectively reducing the storage and bandwidth requirements of high-resolution remote sensing imagery. As the network's effective receptive field (ERF) expands, it captures more feature information across the image, thereby reducing spatial redundancy and improving compression efficiency. However, most existing learned image compression (LIC) methods are CNN- or Transformer-based and struggle to balance a global ERF against computational complexity. To alleviate this issue, we propose a learned remote sensing image compression network built on a visual state space model, named VMIC, which achieves a better trade-off between computational complexity and performance. Specifically, instead of stacking small convolution kernels or heavy self-attention mechanisms, we employ a 2D-bidirectional selective scan mechanism: each element of the feature map aggregates information from multiple spatial positions, establishing a global effective receptive field with linear computational complexity. We further extend this mechanism to an omni-selective scan that captures global spatial correlations within our Channel and Global Context Entropy Model (CGCM), enabling the integration of spatial and channel priors to minimize redundancy across slices. Experimental results demonstrate that the proposed method achieves a superior trade-off between rate-distortion performance and complexity. Compared with both traditional codecs and learned image compression algorithms, our model achieves BD-rates of −4.48% and −9.80% against the state-of-the-art VTM on the AID and NWPU VHR-10 datasets, respectively, as well as −6.73% and −7.93% on the panchromatic and multispectral images of the WorldView-3 remote sensing dataset.
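To make the scanning pattern concrete, below is a minimal, illustrative PyTorch sketch (not the authors' released code) of how a 2D-bidirectional selective scan over a feature map can be organized. The function `selective_scan_1d` is a placeholder fixed-decay linear recurrence standing in for the input-dependent S6 selective-scan kernel, and all names here (`bidirectional_scan_2d`, `decay`) are assumptions introduced for illustration.

```python
# Illustrative sketch of a 2D-bidirectional selective scan (assumed names,
# simplified kernel). Each pixel aggregates context from four traversal
# directions, so every position sees the whole map at O(H*W) cost.
import torch


def selective_scan_1d(x: torch.Tensor, decay: float = 0.9) -> torch.Tensor:
    """Causal linear recurrence h_t = decay * h_{t-1} + x_t over the last dim.

    Placeholder for the input-dependent S6 selective scan; runs in O(L).
    x: (B, C, L) sequence of features.
    """
    B, C, L = x.shape
    h = torch.zeros(B, C, device=x.device, dtype=x.dtype)
    out = torch.empty_like(x)
    for t in range(L):
        h = decay * h + x[..., t]
        out[..., t] = h
    return out


def bidirectional_scan_2d(feat: torch.Tensor) -> torch.Tensor:
    """Scan a (B, C, H, W) feature map along four directions and merge.

    Row-major and column-major traversals, each forward and backward,
    give every element a global effective receptive field.
    """
    B, C, H, W = feat.shape
    row = feat.flatten(2)                  # row-major, forward: (B, C, H*W)
    row_rev = row.flip(-1)                 # row-major, backward
    col = feat.transpose(2, 3).flatten(2)  # column-major, forward: (B, C, W*H)
    col_rev = col.flip(-1)                 # column-major, backward

    y_row = selective_scan_1d(row).view(B, C, H, W)
    y_row_rev = selective_scan_1d(row_rev).flip(-1).view(B, C, H, W)
    y_col = selective_scan_1d(col).view(B, C, W, H).transpose(2, 3)
    y_col_rev = selective_scan_1d(col_rev).flip(-1).view(B, C, W, H).transpose(2, 3)

    # Merge the four directional scans (a learned fusion would go here).
    return y_row + y_row_rev + y_col + y_col_rev


if __name__ == "__main__":
    x = torch.randn(1, 4, 8, 8)
    y = bidirectional_scan_2d(x)
    print(y.shape)  # torch.Size([1, 4, 8, 8])
```

In this sketch, the additive merge of the four directional outputs is a simplification; the key point is that the recurrence touches each element once per direction, which is what yields a global receptive field at linear, rather than quadratic, cost in the number of pixels.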