One way to improve data quality is to impute missing values; machine learning, generative models, and statistical methods can all address missing data problems. Normally distributed data improves the interpretability and reliability of model parameters, whereas non-normally distributed data can lead to biased outcomes, skewed predictions, and inefficient training. In this paper, we propose a fundamental enhancement of the widely used diffusion-model imputation procedure: we inject naturally generated noise (Perlin noise) at every diffusion stage, and we adapt the noise scheduler and loss function to this Perlin noise. Because Perlin noise has sharply peaked, near-normal amplitude characteristics, non-normally distributed data contaminated with it more closely resembles a Gaussian distribution, so the noisy inputs entering the deep network of the imputation diffusion model become more normal. We evaluate the proposed approach by simulating missing data under three mechanisms: Missing Completely at Random (MCAR), Missing Not at Random (MNAR), and Missing at Random (MAR), each with missing rates from 20% to 80%. Compared with other deep-learning imputation methods, the proposed method and its improvements lower RMSE by up to 10% when imputing non-normally distributed data.
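To make the core idea concrete, the sketch below shows a 1-D Perlin (gradient) noise generator and a DDPM-style forward diffusion step that injects standardized Perlin noise in place of i.i.d. Gaussian noise. This is a minimal illustration under assumed choices (1-D lattice noise, the sampling positions in `pos`, and a generic `alpha_bar` schedule value); the paper's actual scheduler and loss adaptations are not reproduced here.

```python
import numpy as np

def fade(t):
    """Perlin's quintic smoothstep: 6t^5 - 15t^4 + 10t^3."""
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

def perlin_1d(x, rng):
    """1-D Perlin (gradient) noise sampled at positions x >= 0."""
    n = int(np.floor(x.max())) + 2
    grads = rng.uniform(-1.0, 1.0, size=n)    # random gradient per lattice point
    x0 = np.floor(x).astype(int)
    t = x - x0                                # fractional offset within the cell
    g0 = grads[x0] * t                        # contribution of left lattice point
    g1 = grads[x0 + 1] * (t - 1.0)            # contribution of right lattice point
    return g0 + fade(t) * (g1 - g0)           # smooth interpolation between the two

def forward_diffuse_perlin(x_clean, alpha_bar, rng):
    """One DDPM-style forward step, with Perlin noise replacing Gaussian noise.

    x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps
    (alpha_bar is the cumulative noise-schedule product at the chosen step;
    its value here is an assumption, not the paper's schedule.)
    """
    pos = np.linspace(0.0, 8.0, x_clean.size)       # assumed sampling positions
    eps = perlin_1d(pos, rng)
    eps = (eps - eps.mean()) / (eps.std() + 1e-8)   # standardize: zero mean, unit var
    return np.sqrt(alpha_bar) * x_clean + np.sqrt(1.0 - alpha_bar) * eps
```

For example, `forward_diffuse_perlin(x, 0.5, rng)` applied to skewed data (e.g. exponential samples) yields a noisy version whose distribution is pulled toward a Gaussian shape, which is the effect the imputation network exploits during denoising.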