Generative models, such as diffusion models, have advanced significantly in recent years, enabling the synthesis of high-quality, realistic data across various domains. Here, the adaptation and training of a diffusion model on super-resolution microscopy images are explored. It is shown that the generated images resemble experimental images and that the generation process does not memorize existing images from the training set to any large degree. To demonstrate the usefulness of the generative model for data augmentation, a deep learning-based single-image super-resolution (SISR) method trained on generated high-resolution data is compared against the same method trained on experimental images alone or on images produced by mathematical modeling. Starting from only a few experimental images, the approach improves both the reconstruction quality and the spatial resolution of the reconstructed images, showcasing the potential of diffusion-model image generation for overcoming the limitations that accompany the collection and annotation of microscopy images. Finally, the pipeline is made publicly available, runnable online, and user-friendly, enabling researchers to generate their own synthetic microscopy data. This work demonstrates the potential contribution of generative diffusion models to microscopy tasks and paves the way for their future application in this field.
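For readers unfamiliar with diffusion models, the forward (noising) process that underlies training can be illustrated with a minimal DDPM-style sketch. This is not the paper's implementation: the function names, the linear noise schedule, and the toy image are illustrative assumptions chosen only to show how a clean training image is progressively corrupted toward pure noise, which the model then learns to reverse.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)  # Gaussian noise, same shape as the image
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

# Hypothetical usage: a random 64x64 patch standing in for a normalized
# high-resolution microscopy image.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
betas = np.linspace(1e-4, 0.02, 1000)  # standard linear schedule (assumption)

# Early in the schedule the sample still resembles the image;
# at the final step it is close to pure Gaussian noise.
noisy_early, _ = forward_diffuse(image, 10, betas, rng)
noisy_late, _ = forward_diffuse(image, 999, betas, rng)
```

During training, a network is optimized to predict the added noise `eps` from `x_t` and `t`; sampling then runs this corruption in reverse to synthesize new images such as the augmentation data described above.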