Abstract

Despite the rapid adoption of deep learning in additive manufacturing (AM), significant quality assurance challenges persist, compounded by the limited availability of samples for complex AM-fabricated builds. This study therefore advances an emerging diffusion generative model, the denoising diffusion implicit model (DDIM), for layer-wise image augmentation and monitoring in AM. The proposed models integrate two newly proposed kernel-based distance metrics into the DDIM for effective layer-wise AM image augmentation: a modified kernel inception distance (m-KID) and an integration of m-KID with the inception score (IS), termed KID-IS. These integrations show strong potential for maintaining both similarity and consistency in AM layer-wise image augmentation while simultaneously exploring possible unobserved process variations. In the case study, six cases based on metal-based and polymer-based fused filament fabrication (FFF) were examined. The results indicate that both the proposed DDIM/m-KID and DDIM/KID-IS models outperform the four benchmark methods: the popular denoising diffusion probabilistic model (DDPM) and three generative adversarial network (GAN) models. Across all cases, DDIM/KID-IS emerges as the best-performing model. A reality detection test via a convolutional autoencoder (CAE) with predefined thresholds further confirms the fidelity of the generated images to the real images. Moreover, the proposed models demonstrate the capability to generate potential AM layer-wise variations that are not observed in the real images.
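
For reference, the sketch below shows the standard kernel inception distance (an unbiased polynomial-kernel MMD estimate over Inception features) and the standard inception score, the two quantities on which the proposed m-KID and KID-IS metrics build. The specific modifications that define m-KID and the m-KID/IS integration are not described in the abstract, so this is only a minimal illustration of the underlying baseline metrics; the function names, kernel parameters, and feature dimensions are assumptions, not the authors' implementation.

```python
import numpy as np

def polynomial_kernel(x, y, degree=3, gamma=None, coef0=1.0):
    """Polynomial kernel k(x, y) = (gamma * x.y^T + coef0)^degree, as used in standard KID."""
    if gamma is None:
        gamma = 1.0 / x.shape[1]
    return (gamma * x @ y.T + coef0) ** degree

def kernel_inception_distance(feats_real, feats_gen):
    """Unbiased MMD^2 estimate between real and generated Inception feature sets (standard KID)."""
    k_rr = polynomial_kernel(feats_real, feats_real)
    k_gg = polynomial_kernel(feats_gen, feats_gen)
    k_rg = polynomial_kernel(feats_real, feats_gen)
    m, n = feats_real.shape[0], feats_gen.shape[0]
    # Exclude the diagonal terms so the within-set sums give an unbiased estimator.
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_gg = (k_gg.sum() - np.trace(k_gg)) / (n * (n - 1))
    term_rg = k_rg.mean()
    return term_rr + term_gg - 2.0 * term_rg

def inception_score(probs, eps=1e-12):
    """Standard IS: exp of the mean KL divergence between p(y|x) and the marginal p(y)."""
    marginal = probs.mean(axis=0, keepdims=True)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

if __name__ == "__main__":
    # Placeholder arrays standing in for Inception pool features and class probabilities.
    rng = np.random.default_rng(0)
    feats_real = rng.normal(size=(200, 2048))
    feats_gen = rng.normal(size=(200, 2048))
    probs = rng.dirichlet(np.ones(1000), size=200)
    print("KID:", kernel_inception_distance(feats_real, feats_gen))
    print("IS :", inception_score(probs))
```

In practice the features and class probabilities would come from a pretrained Inception network applied to the real and DDIM-generated layer-wise AM images; the random arrays above only demonstrate the metric computations themselves.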
