Abstract

As light propagates through a turbulent medium, fluctuations in the medium perturb the refractive index, so the image reaching the imaging device is distorted and blurred; mitigating this turbulence effect is therefore important. Deep convolutional neural networks with attention mechanisms have achieved remarkable success in dynamic video restoration. However, in most networks the attention mechanism captures only simple contextual features and does not adequately mine the multi-level features in an image. This paper presents IDSSI, an effective model that exploits semantic information and spatial-temporal information for turbulence mitigation. First, we introduce a simple two-branch feature-extraction structure that extracts multi-scale enhanced features with semantic information under limited supervision, and fuses these perceived semantic features into the global features to strengthen multi-scale perception for turbulence restoration. In addition, we propose a new spatial-temporal feature-learning strategy that extracts and modulates temporal information by obtaining edge cues, splices and merges it with spatial features, and effectively replaces 3D convolution. Experiments on the relevant datasets show that the model outperforms current state-of-the-art turbulence restoration methods.
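The abstract gives no implementation details, but the core idea of the spatial-temporal strategy (temporal edge cues fused with per-frame spatial features instead of a full 3D convolution) can be illustrated with a minimal NumPy sketch. All names and shapes below are hypothetical assumptions for illustration, not the authors' actual code.

```python
import numpy as np

def temporal_edge_cues(frames):
    """Approximate temporal edge cues as absolute differences between
    consecutive frames. `frames` is a (T, H, W) grayscale clip; the
    first cue map is zero because it has no preceding frame."""
    cues = np.zeros_like(frames)
    cues[1:] = np.abs(np.diff(frames, axis=0))
    return cues

def fuse_spatial_temporal(frames):
    """Concatenate each frame with its temporal cue along a channel
    axis, yielding (T, 2, H, W). A 2D convolution over these fused
    channels can then stand in for a 3D convolution over time."""
    cues = temporal_edge_cues(frames)
    # channel 0 = raw spatial frame, channel 1 = temporal edge cue
    return np.stack([frames, cues], axis=1)

clip = np.random.rand(4, 8, 8)  # hypothetical 4-frame 8x8 clip
fused = fuse_spatial_temporal(clip)
print(fused.shape)  # (4, 2, 8, 8)
```

The design point this sketches is the factorization: temporal dynamics are captured cheaply as difference-based cues and handed to ordinary 2D spatial processing, avoiding the parameter and compute cost of 3D convolution.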

