The acquisition of large amounts of high-dimensional data is becoming prevalent in additive manufacturing, arising from sensor integration in such processes. Although supervised machine learning has gained popularity for predicting process quality from such data, annotation can be time-consuming and labor-intensive. Active learning is a sub-field of machine learning concerned with maximizing model performance using the least amount of annotated data. The focus of this study is twofold. Firstly, a novel active learning method is introduced, called adaptive weighted uncertainty sampling (AWUS), which balances uncertainty sampling with random sampling based on the model change between active learning iterations. In this way, AWUS adapts from exploration of the instance space to exploitation of model knowledge over the course of the active learning process. Secondly, a novel feature extraction and classification method is proposed for directed-energy-deposition additive manufacturing processes, which predicts image and process quality from the visibility of the melt pool, using the visual signature of occluding features in the image. AWUS is compared against 6 state-of-the-art query strategies on 28 datasets from the OpenML database and 8 in-situ machine vision recordings of directed-energy-deposition processes, using 4 different classifiers. The results show that AWUS outperforms the state of the art across datasets, classifiers, and active learning batch sizes. Furthermore, applying AWUS reduces the number of required annotations by 20–70% in 90% of our experiments compared to random sampling.
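The abstract does not specify how AWUS weighs uncertainty against random sampling, so the following is only a minimal sketch of the general idea under stated assumptions: the model change between iterations is approximated by the mean absolute shift in predicted class probabilities on the unlabeled pool, and that change sets the fraction of each batch drawn at random versus by least-confidence uncertainty sampling. The function names (`awus_style_query`, `model_change`) and the weighting rule are hypothetical illustrations, not the authors' exact algorithm.

```python
import numpy as np

def model_change(prev_proba, curr_proba):
    """Assumed proxy for 'model change': mean absolute difference of the
    predicted class probabilities on the unlabeled pool between two
    consecutive active learning iterations."""
    return float(np.mean(np.abs(curr_proba - prev_proba)))

def awus_style_query(curr_proba, prev_proba, batch_size, rng):
    """Sketch of an adaptively weighted uncertainty/random batch query.

    A fraction of the batch proportional to the observed model change is
    drawn at random (exploration); the rest is picked by least-confidence
    uncertainty sampling (exploitation of model knowledge)."""
    n_pool = curr_proba.shape[0]
    change = min(1.0, model_change(prev_proba, curr_proba))  # clipped to [0, 1]
    n_random = int(round(change * batch_size))
    n_uncertain = batch_size - n_random

    # Least-confidence uncertainty: 1 minus the highest predicted probability.
    uncertainty = 1.0 - curr_proba.max(axis=1)
    uncertain_idx = np.argsort(-uncertainty)[:n_uncertain]

    # Fill the remainder of the batch with uniformly random pool instances.
    remaining = np.setdiff1d(np.arange(n_pool), uncertain_idx)
    random_idx = rng.choice(remaining, size=n_random, replace=False)
    return np.concatenate([uncertain_idx, random_idx])

if __name__ == "__main__":
    # Stand-in probability estimates for a 100-instance pool, 3 classes.
    rng = np.random.default_rng(0)
    prev = rng.dirichlet(np.ones(3), size=100)
    curr = rng.dirichlet(np.ones(3), size=100)
    print(awus_style_query(curr, prev, batch_size=10, rng=rng))
```

A large model change is interpreted here as a sign that the decision boundary is still unstable, so more of the budget goes to random exploration; as the model stabilizes, the batch shifts toward uncertainty-driven exploitation, mirroring the exploration-to-exploitation behavior the abstract attributes to AWUS.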