Haze in a scene can degrade 360° photo/video quality and the immersive 360° virtual-reality (VR) experience. To date, single-image dehazing methods have focused only on planar images. In this work, we propose a novel neural network pipeline for single omnidirectional image dehazing. To build the pipeline, we construct the first hazy omnidirectional image dataset, which contains both synthetic and real-world samples. We then propose a new stripe-sensitive convolution (SSConv) to handle the distortion caused by equirectangular projection. SSConv calibrates distortion in two steps: 1) extracting features using different rectangular filters, and 2) learning to select the optimal features by weighting the feature stripes (a series of rows in the feature maps). Using SSConv, we design an end-to-end network that jointly learns haze removal and depth estimation from a single omnidirectional image. The estimated depth map serves as an intermediate representation, providing global context and geometric information to the dehazing module. Extensive experiments on challenging synthetic and real-world omnidirectional image datasets demonstrate the effectiveness of SSConv, and our network attains superior dehazing performance. Experiments on practical applications further show that our method significantly improves 3-D object detection and 3-D layout estimation on hazy omnidirectional images.
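The two-step calibration idea behind SSConv can be illustrated with a minimal NumPy sketch. The specific filter shapes, the stripe height, and the softmax-of-stripe-means weighting below are hypothetical stand-ins for the paper's learned components; they only show the structure of "extract with rectangular filters, then select per stripe":

```python
import numpy as np

def rect_filter(feat, kh, kw):
    """Mean filter with a kh x kw rectangular window (zero padding).
    Stands in for a learned rectangular convolution branch."""
    H, W = feat.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(feat, ((ph, ph), (pw, pw)))
    out = np.zeros_like(feat)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + kh, j:j + kw].mean()
    return out

def ssconv_sketch(feat, stripe_h=4):
    """Sketch of stripe-sensitive feature selection:
    1) extract features with differently shaped rectangular filters;
    2) weight each horizontal stripe of each branch and combine.
    The softmax over per-stripe mean activations is a hypothetical
    substitute for the learned stripe weighting in the paper."""
    branches = [
        rect_filter(feat, 1, 5),  # wide, flat filter (suits polar rows)
        rect_filter(feat, 5, 1),  # tall, narrow filter
        rect_filter(feat, 3, 3),  # square filter (suits equatorial rows)
    ]
    H, _ = feat.shape
    out = np.zeros_like(feat)
    for top in range(0, H, stripe_h):
        stripe = slice(top, min(top + stripe_h, H))
        # Per-stripe branch scores -> softmax weights summing to 1.
        scores = np.array([b[stripe].mean() for b in branches])
        w = np.exp(scores - scores.max())
        w /= w.sum()
        for wk, b in zip(w, branches):
            out[stripe] += wk * b[stripe]
    return out
```

Because the weights are computed per stripe (per band of rows), rows near the poles of the equirectangular image can favor differently shaped filters than rows near the equator, which is the intuition behind distortion calibration.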