Single image deraining and dehazing are essential tasks for recovering high-quality, noise-free images from rainy or hazy inputs. Many advanced multi-stage networks suffer from an imbalance in contextual information across stages, which increases system complexity. To address these challenges, we propose a simplified method inspired by the U-Net structure, the "Single-Stage V-Shaped Network" (S2VSNet), capable of handling both deraining and dehazing. A key innovation in our approach is a Feature Fusion Module (FFM), which shares information across multiple scales and hierarchical layers within the encoder-decoder structure. As the network progresses to deeper layers, the FFM gradually integrates insights from higher levels, preserving spatial details while balancing contextual feature maps. This integration enhances the network's image restoration capability, producing noise-free, high-quality outputs. To maintain efficiency and reduce system complexity, we replaced or removed several non-essential non-linear activation functions, opting instead for simple multiplication operations. Additionally, we introduce a "Multi-Head Attention Integrated Module" (MHAIM) as an intermediary layer between encoder and decoder levels. This module addresses the limited receptive field of traditional Convolutional Neural Networks (CNNs), allowing the network to capture more comprehensive feature-map information. We conducted extensive deraining and dehazing experiments on a wide range of synthetic and real-world datasets. To further validate the robustness of our network, we deployed S2VSNet on a low-end edge device, where it derains an image in 2.46 seconds.
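The abstract does not specify how the removed activations are replaced by multiplications, but a common realization of this idea in efficient restoration networks (e.g., the SimpleGate of NAFNet) splits a feature map into two halves along the channel axis and multiplies them elementwise, so the gating itself supplies the non-linearity. The sketch below illustrates that pattern; the function name `simple_gate` and the use of NumPy are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

def simple_gate(x: np.ndarray, axis: int = 1) -> np.ndarray:
    """Illustrative gating operation: split a feature map into two
    halves along the channel axis and multiply them elementwise.

    The product of the two halves is non-linear in the input, so it
    can stand in for an activation function (e.g., ReLU or GELU)
    while using only a multiplication. This is a sketch of the
    general technique, not S2VSNet's actual implementation.
    """
    a, b = np.split(x, 2, axis=axis)  # halve the channel dimension
    return a * b                      # elementwise gating


# Example: a (batch=1, channels=4, height=1, width=1) feature map
# is gated down to 2 channels.
x = np.arange(4, dtype=np.float64).reshape(1, 4, 1, 1)
y = simple_gate(x)  # channels: [0, 1] * [2, 3] -> [0, 3]
```

Note that the output has half the channels of the input, so in practice the preceding convolution typically produces twice the desired channel count.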