Abstract
Metal additive manufacturing (AM) processes entail complex physical phenomena, leading to intricate process-structure-property (PSP) relationships. Because PSP relationships are closely linked to the thermal fields in metal AM, the appropriate selection and real-time adjustment of process parameters can be highly beneficial. The infrared images obtained from the laser powder bed fusion (LPBF) process contain crucial information, such as the size and shape of the melt pool, which provides valuable insight into the thermal history, structure, and properties of the printed part. Consequently, establishing a diverse set of thermal images is essential for comprehensively exploring PSP relationships. To unravel the underlying mechanisms in AM and identify real-time optimal parameters, a wide variety of experimental approaches as well as physics-based models have been investigated. However, conducting experimental studies or using physics-based models to attain accurate thermal history data is highly costly and time-consuming. Generative adversarial networks (GANs), known for their capability to generate synthetic images, present a promising solution to this challenge of data availability. The current study proposes a conditional GAN capable of visualizing metal AM temporal thermal data with process domain knowledge fused into the image generation procedure. Targeted temporal thermal field images of an LPBF process are generated by treating the laser power, scan speed, and laser spot size as the fused domain knowledge. The goal is to capture the relationship between the process parameters and the thermal images and, eventually, to generate new images for unseen combinations of process parameters sampled from the design space. A customized data loader is designed to merge experimental thermal images with the corresponding domain knowledge. The generator architecture combines a set of process parameters, i.e., the laser power, scan speed, laser spot size, and time step, with a noise vector drawn from a latent space to produce a synthetic image. The discriminator, in turn, scrutinizes a collection of both experimental and synthetic images, along with their associated domain knowledge, to discern their authenticity. The hyperparameters of both the generator and discriminator networks are tuned, and the methods used to stabilize the training process are discussed. Finally, the performance of the model is evaluated, and future research directions are highlighted.