In computer vision, removing the effects of adverse weather such as rain, snow, and fog from images is a key research challenge. Existing studies focus primarily on restoration for a single weather type, while methods that handle multiple combined weather conditions remain relatively scarce. Moreover, current mainstream restoration networks, mostly built on Transformer and CNN architectures, struggle to balance a global receptive field against computational efficiency, which limits their performance in practical applications. This study proposes ACMamba, an end-to-end lightweight network based on selective state space models (SSMs), aimed at restoring images degraded by multiple weather conditions with a single set of parameters. Specifically, we design a novel Visual State Space Module (VSSM) and a Spatially Aware Feed-Forward Network (SAFN), which combine the local feature extraction of convolutions with the long-range dependency modeling of selective SSMs. This combination substantially improves computational efficiency while preserving a global receptive field, enabling the Mamba architecture to be applied effectively to multi-weather image restoration. Comprehensive experiments on multiple benchmark datasets show that the proposed approach significantly outperforms existing methods on both single-weather and multi-weather tasks, demonstrating its efficient long-range modeling capability for multi-weather image restoration.
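The abstract does not detail the internals of VSSM or SAFN. Purely as an illustration of the general pattern it describes, a depthwise convolution for local features feeding a selective SSM for long-range dependencies, the sketch below shows a minimal PyTorch block. All class names (`SimpleSelectiveSSM`, `ConvSSMBlock`), the naive sequential scan, and every hyperparameter are our assumptions, not ACMamba's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSelectiveSSM(nn.Module):
    """Toy selective SSM with input-dependent step size, B, and C (hypothetical)."""
    def __init__(self, dim, state_dim=16):
        super().__init__()
        # Fixed negative-real diagonal dynamics per channel, log-parameterized
        self.A_log = nn.Parameter(
            torch.log(torch.arange(1, state_dim + 1, dtype=torch.float32)).repeat(dim, 1)
        )
        self.D = nn.Parameter(torch.ones(dim))   # direct skip-path scale
        self.to_delta = nn.Linear(dim, dim)      # input-dependent step size
        self.to_B = nn.Linear(dim, state_dim)    # input-dependent input matrix
        self.to_C = nn.Linear(dim, state_dim)    # input-dependent output matrix
        self.state_dim = state_dim

    def forward(self, x):                        # x: (batch, length, dim)
        b, l, d = x.shape
        delta = F.softplus(self.to_delta(x))     # (b, l, d), positive step sizes
        B, C = self.to_B(x), self.to_C(x)        # (b, l, n) each
        A = -torch.exp(self.A_log)               # (d, n), stable dynamics
        h = x.new_zeros(b, d, self.state_dim)    # hidden state
        ys = []
        for t in range(l):                       # sequential scan (no fused kernel)
            dt = delta[:, t].unsqueeze(-1)       # (b, d, 1)
            h = torch.exp(dt * A) * h + dt * B[:, t].unsqueeze(1) * x[:, t].unsqueeze(-1)
            ys.append((h * C[:, t].unsqueeze(1)).sum(-1) + self.D * x[:, t])
        return torch.stack(ys, dim=1)            # (b, l, d)

class ConvSSMBlock(nn.Module):
    """Local depthwise conv plus global selective SSM, fused residually (hypothetical)."""
    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # local features
        self.norm = nn.LayerNorm(dim)
        self.ssm = SimpleSelectiveSSM(dim)

    def forward(self, x):                        # x: (batch, dim, H, W)
        b, c, h, w = x.shape
        local = self.dwconv(x)                   # convolutional locality
        seq = local.flatten(2).transpose(1, 2)   # flatten to (b, H*W, c) scan order
        out = self.ssm(self.norm(seq))           # long-range dependency modeling
        return x + out.transpose(1, 2).reshape(b, c, h, w)

# Quick shape check
block = ConvSSMBlock(32)
print(block(torch.randn(2, 32, 16, 16)).shape)   # torch.Size([2, 32, 16, 16])
```

The sequential Python loop keeps the recurrence explicit for readability; a practical Mamba-style implementation would replace it with a parallel or hardware-aware scan, which is where the efficiency claim over quadratic-attention Transformers comes from.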