From an analysis of the priors used in state-of-the-art algorithms for single image defogging, a new prior is proposed to achieve improved atmospheric veil removal. Our hypothesis rests on a physical model in which fog appears denser near the horizon than close to the camera. This leads to stronger restoration where the fog is deeper, producing a more natural rendering. For this purpose, the Naka–Rushton function is used to modulate the atmospheric veil, following empirical observations on synthetic foggy images. The parameters of this function are set from features of the input image. The method also prevents over-restoration, thus keeping the sky free of artifacts and noise. The algorithm generalizes to different kinds of fog, airborne particles, and illumination conditions. The proposed method is extended to nighttime and underwater images by computing the atmospheric veil on each color channel. Qualitative and quantitative evaluations show the benefit of the proposed algorithm. The quantitative evaluation demonstrates the efficiency of the algorithm on four databases with different types of fog, confirming the broad generalization achieved by the proposed algorithm, in contrast to most currently available deep learning techniques.
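For reference, the Naka–Rushton function mentioned above is, in its standard form,

\[
R(v) = \frac{v^{n}}{v^{n} + \sigma^{n}},
\]

where, in this context, $v$ would be the estimated atmospheric veil intensity, $\sigma$ the semi-saturation constant, and $n$ the steepness exponent. This is the classical formulation from vision science; the abstract states only that the parameters are set from features of the input image, so the mapping of image features onto $\sigma$ and $n$, and the variable naming used here, are illustrative assumptions rather than the paper's exact formulation.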