Generative models, particularly diffusion-based approaches, have recently gained significant attention for their ability to create realistic outputs. Despite their potential, their application in manufacturing remains largely unexplored. This work addresses that gap with a framework that generates machined surface images guided by multiple sensor inputs. The proposed model fuses information from sensors with varying sampling rates via multimodal embedding and employs a latent diffusion model to translate the fused sensor embedding into an image embedding, which is then decoded into a machined surface image. The framework is validated on real-world time-series data (force, torque, acceleration, and sound) collected from several industrial processes, including carbon-fiber-reinforced plastic drilling. The results demonstrate the model's ability to predict defects from the generated machined surface images. By enabling sensor-guided visual inspection, defect detection, process monitoring, and predictive maintenance, the proposed approach can potentially revolutionize prognostics and health management (PHM) in smart manufacturing.
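To make the pipeline shape concrete (per-sensor encoders for signals of differing sampling rates, multimodal fusion into one embedding, and a conditioned reverse-diffusion loop over an image latent), here is a minimal toy sketch in plain Python. The random-projection encoders, mean fusion, and nudge-toward-condition update are all illustrative placeholders, not the authors' actual architecture.

```python
import random

def encode(signal, dim=4, seed=0):
    # Toy per-sensor encoder: a fixed random projection maps a raw time
    # series (any length, so any sampling rate) into a shared embedding space.
    rng = random.Random(seed)
    weights = [[rng.uniform(-1, 1) for _ in signal] for _ in range(dim)]
    return [sum(w * x for w, x in zip(row, signal)) for row in weights]

def fuse(embeddings):
    # Multimodal fusion: elementwise mean of the per-sensor embeddings
    # (a stand-in for a learned fusion module).
    n = len(embeddings)
    return [sum(e[i] for e in embeddings) / n for i in range(len(embeddings[0]))]

def generate_latent(cond, dim=4, steps=10, rate=0.3, seed=1):
    # Toy "reverse diffusion": start from Gaussian noise and iteratively
    # nudge the image latent toward the fused sensor condition; a real
    # latent diffusion model would run a learned denoiser at each step.
    rng = random.Random(seed)
    latent = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    for _ in range(steps):
        latent = [z + rate * (c - z) for z, c in zip(latent, cond)]
    return latent

# Sensors sampled at different rates yield windows of different lengths.
force = [0.1, 0.4, 0.9, 1.2]             # e.g. 4 samples per window
sound = [0.0, 0.2, 0.1, 0.3, 0.5, 0.4]   # e.g. 6 samples per window
cond = fuse([encode(force, seed=0), encode(sound, seed=1)])
latent = generate_latent(cond)  # would then be decoded into an image
```

In the paper's framework this latent would be passed to an image decoder to produce the machined surface image; in this sketch it simply converges toward the conditioning vector, standing in for sensor-guided generation.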