Turbulence-degraded image frames suffer from both turbulent geometric deformations and space–time-varying blur. Restoring atmospheric-turbulence-degraded images is of great importance in applications such as remote sensing, surveillance, traffic control, and astronomy. Traditional supervised learning trains on large numbers of simulated distorted images and therefore generalizes poorly to real degraded images. To address this problem, a novel blind restoration network is presented that takes only a single turbulence-degraded image as input and is aimed mainly at reconstructing real atmospheric-turbulence-distorted images. The proposed method requires no pre-training: a single real degraded image is sufficient to produce a high-quality result. To further improve the self-supervised restoration, Regularization by Denoising (RED) is introduced into the network, and the final output is obtained by averaging the predictions of multiple iterations of the trained model. Experiments on real-world turbulence-degraded data compare the proposed method against four reported methods under four no-reference quality indicators; on three of them, Average Gradient, NIQE, and BRISQUE, the proposed method achieves state-of-the-art results. The method is thus effective in alleviating distortion and blur, restoring image details, and enhancing visual quality. Furthermore, it generalizes to some extent beyond turbulence and restores motion-blurred images well.
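The RED-regularized, iterate-averaged reconstruction described above can be sketched as follows. This is a minimal toy illustration, not the paper's method: a box-filter denoiser stands in for the network, and the step size, regularization weight, and burn-in schedule are illustrative assumptions. It shows the RED gradient term λ(x − f(x)) and the final averaging over late iterates.

```python
import numpy as np

def box_denoise(x, k=3):
    # Toy box-filter denoiser standing in for the learned denoiser
    # (the paper uses a network; this substitution is an assumption).
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def red_restore(y, lam=0.5, step=0.2, iters=50, burn_in=25):
    # Gradient descent on 0.5||x - y||^2 + (lam/2) x^T (x - f(x)).
    # RED gives the simple gradient lam * (x - f(x)) for the prior term.
    # The final estimate averages the iterates after a burn-in,
    # mirroring "average the predictions of multiple iterations".
    x = y.copy()
    kept = []
    for t in range(iters):
        grad = (x - y) + lam * (x - box_denoise(x))
        x = x - step * grad
        if t >= burn_in:
            kept.append(x.copy())
    return np.mean(kept, axis=0)
```

Averaging late iterates rather than taking the last one damps the oscillation of individual predictions, which is the same rationale the abstract gives for averaging multiple iterations of the trained model.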