Computational defogging using machine learning holds significant promise; however, progress is hindered by the scarcity of large-scale datasets of real-world paired images with sufficiently dense fog. To address this limitation, we developed a binocular imaging system and introduce Stereofog, an open-source dataset comprising 10,067 paired clear and foggy images, over 70% of which were captured under dense fog conditions. Using this dataset, we trained a pix2pix image-to-image (I2I) translation model and achieved a complex wavelet structural similarity index (CW-SSIM) exceeding 0.7 and a peak signal-to-noise ratio (PSNR) above 17 dB under dense fog conditions (characterized by a Laplacian variance v_L < 10). In contrast, models trained on synthetic data, or on real-world images augmented with synthetic fog, performed substantially worse. Our comprehensive performance analysis highlights the model's limitations, such as limited dataset diversity and hallucinations, challenges that are pervasive in machine-learning-based approaches, and we propose several strategies for future improvements. Our findings underscore the promise of machine-learning techniques for computational defogging across diverse fog conditions. This work contributes to the field a robust, open-source dataset that we anticipate will catalyze advancements in both algorithm development and data acquisition methodologies.
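The abstract characterizes fog density by the variance of the image Laplacian (v_L < 10 for dense fog) and reports reconstruction quality as PSNR. As a minimal illustrative sketch, and not the authors' released code, the snippet below shows how these two quantities could be computed with OpenCV and NumPy; the file names and the use of OpenCV are assumptions for demonstration only.

```python
# Illustrative sketch: fog-density proxy (variance of the Laplacian) and PSNR.
# Library choice (OpenCV/NumPy) and file names are assumptions, not the paper's code.
import cv2
import numpy as np


def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian; low values indicate little detail (dense fog)."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())


def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equal-sized images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)


if __name__ == "__main__":
    # Hypothetical file names for a foggy input, its clear ground truth,
    # and a defogged model output.
    foggy = cv2.imread("foggy.png", cv2.IMREAD_GRAYSCALE)
    clear = cv2.imread("clear.png", cv2.IMREAD_GRAYSCALE)
    defogged = cv2.imread("defogged.png", cv2.IMREAD_GRAYSCALE)

    v_L = laplacian_variance(foggy)
    print(f"Laplacian variance v_L = {v_L:.2f} (dense fog if v_L < 10)")
    print(f"PSNR(defogged, clear) = {psnr(clear, defogged):.2f} dB")
```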