Abstract
Light propagating through turbulent media encounters refractive-index fluctuations that distort and blur images captured by imaging equipment; mitigating these turbulence effects is therefore of paramount importance. Deep convolutional neural networks incorporating attention mechanisms have demonstrated remarkable success in dynamic video restoration. In most networks, however, the attention mechanism captures only simple contextual features and fails to adequately exploit multi-level image features. This paper introduces IDSSI, an effective model that exploits semantic and spatio-temporal information for turbulence mitigation. A novel two-branch feature-extraction structure is proposed that extracts multi-scale enhanced features with semantic information under limited supervision. The perceived semantic features are then fused into global features, enhancing multi-scale perception for turbulence repair. Furthermore, a new spatial–temporal feature-learning strategy is proposed: temporal information is extracted and modulated by obtaining edge cues, which are concatenated and merged with spatial features. This strategy serves as an efficient alternative to 3D convolution. Experimental results on relevant datasets demonstrate that the proposed model outperforms current state-of-the-art turbulence-mitigation methods.
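The factorized spatial–temporal idea described above can be sketched as a toy illustration. This is not the authors' IDSSI architecture: the edge cues here are simple frame differences, the "spatial branch" is an identity placeholder for a learned 2D feature extractor, and all function names are hypothetical. It only shows how per-frame spatial features and temporal edge cues can be concatenated along a channel axis instead of applying a 3D convolution over the whole clip:

```python
import numpy as np

def edge_cues(frames):
    """Temporal edge cues via absolute frame-to-frame differences.
    frames: (T, H, W) grayscale video clip."""
    diffs = np.abs(np.diff(frames, axis=0))        # (T-1, H, W)
    # Repeat the last difference so the temporal length matches the clip.
    return np.concatenate([diffs, diffs[-1:]], axis=0)

def fuse_spatial_temporal(frames):
    """Concatenate per-frame spatial features with temporal edge cues
    along a new channel axis -- a cheap stand-in for 3D convolution."""
    spatial = frames                               # identity "spatial branch" (placeholder)
    temporal = edge_cues(frames)                   # "temporal branch"
    return np.stack([spatial, temporal], axis=1)   # (T, 2, H, W)

clip = np.random.rand(8, 16, 16)                   # T=8 frames of 16x16 pixels
fused = fuse_spatial_temporal(clip)
print(fused.shape)                                 # (8, 2, 16, 16)
```

A real model would replace both branches with learned convolutions; the point is that two cheap 2D passes plus a channel concatenation cover the clip, rather than one expensive 3D convolution.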