In the rapidly evolving domain of image generation, sufficient data is crucial for effective model training, yet obtaining a large dataset is often challenging. Medical imaging, industrial monitoring, and self-driving cars are among the applications that require high-fidelity image generation from limited or even single data points. This paper proposes a novel approach for increasing the diversity of images generated from a single input image by combining a Denoising Diffusion Probabilistic Model (DDPM) with the ConvNeXt-V2 architecture. The technique addresses limited data availability by training on single images drawn from the BSD and Places365 datasets, significantly extending the model's capability under different conditioning settings. Image quality is further enhanced by incorporating Global Response Normalization (GRN) and Sigmoid-Weighted Linear Units (SiLU) into the DDPM. In-depth analyses and comparisons with existing state-of-the-art (SOTA) models demonstrate the model's effectiveness. Achievements include a Pixel Diversity score of 0.87±0.1, an LPIPS Diversity score of 0.42±0.03, and an SIFID (patch distribution) score of 0.046±0.02, along with notable NIQE and RECO scores. These findings indicate the model's exceptional ability to generate a wide range of high-quality images, marking a significant advance over existing state-of-the-art models in image generation.
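For concreteness, the two components named above can be sketched in a few lines of NumPy. This is a minimal illustration of SiLU and of Global Response Normalization as introduced with ConvNeXt-V2 (per-channel spatial L2 norm, divisively normalized across channels, with a residual connection); the exact placement of these layers inside the paper's DDPM and the choice of `gamma`/`beta` initialization are assumptions, not the authors' stated implementation.

```python
import numpy as np

def silu(x):
    # Sigmoid-Weighted Linear Unit: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def grn(x, gamma=1.0, beta=0.0, eps=1e-6):
    # Global Response Normalization (ConvNeXt-V2 style) for an
    # input of shape (N, H, W, C). `gamma`/`beta` would normally
    # be learned per-channel parameters; scalars here for brevity.
    gx = np.sqrt((x ** 2).sum(axis=(1, 2), keepdims=True))  # (N,1,1,C) spatial L2 norm
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)       # normalize across channels
    return gamma * (x * nx) + beta + x                      # scale, shift, residual

x = np.random.randn(2, 8, 8, 16)
y = grn(silu(x))
print(y.shape)  # (2, 8, 8, 16)
```

Note that with `gamma = 0` and `beta = 0` the layer reduces to the identity, which is why the residual formulation is a safe drop-in inside existing blocks.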