Abstract

Deep neural networks (DNNs) exhibit state-of-the-art performance in many fields, including microstructure recognition, where large datasets are used in training. However, a DNN trained by conventional methods on a small dataset commonly performs worse than traditional machine learning methods such as shallow neural networks and support vector machines. This inherent limitation has prevented the wide adoption of DNNs in materials research, because collecting and assembling large datasets in material science is a challenge. In this study, we attempted to predict solidification defects by DNN regression with a small dataset of 487 data points. We found that a pre-trained and fine-tuned DNN generalizes better than a shallow neural network, a support vector machine, and a DNN trained by conventional methods. The trained DNN transforms scattered experimental data points into a highly accurate map over the high-dimensional space of chemistry and processing parameters. Although a DNN with a large dataset is the optimal solution, a DNN with a small dataset and pre-training can be a reasonable choice when large datasets are unavailable in materials research.

Highlights

  • To improve generalization and reduce the risk of overfitting, the regularized performance function msereg was used both in training the shallow neural network (SNN), which is initialized with random weights, and in fine-tuning the deep neural network (DNN) after pre-training, which initializes the DNN with optimized values (see the sketch after this list)

  • Pre-training initializes the DNN with optimized weight and bias values that are close to the global optimum, which helps the subsequent fine-tuning step bypass the traps of local optima

  • A fully connected DNN with three or more hidden layers outperforms the shallow neural network and the support vector machine, achieving higher prediction accuracy and better generalization
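As a concrete illustration of the first highlight, the sketch below implements a regularized performance function in the style of MATLAB's msereg, which blends the mean squared error with the mean of the squared network weights and biases; penalizing large weights favors a smoother fitted mapping and reduces overfitting on a small dataset. This is a minimal sketch, and the function name, weight shapes, and gamma value are illustrative, not taken from the paper's code.

```python
import numpy as np

def msereg(errors, weights, gamma=0.9):
    """Regularized performance function in the style of MATLAB's msereg:
    perf = gamma * MSE + (1 - gamma) * MSW,
    where MSW is the mean of the squared network weights (and biases).
    The weight penalty discourages overly complex mappings, which matters
    when only a few hundred training points are available."""
    mse = np.mean(np.square(errors))  # data-fit term
    msw = np.mean(np.square(np.concatenate([w.ravel() for w in weights])))
    return gamma * mse + (1.0 - gamma) * msw

# Illustrative usage with hypothetical residuals and layer weights:
errors = np.array([0.10, -0.20, 0.05])          # target - prediction
weights = [np.random.randn(8, 4), np.random.randn(4, 1)]
print(msereg(errors, weights, gamma=0.9))
```

During training, this performance value would replace the plain mean squared error as the quantity the optimizer minimizes.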


Summary

Deep neural network

Neural networks have been widely applied in material science, e.g. predicting phase transformation behaviors [1,2,3] and material properties [4,5,6,7,8,9,10,11,12,13,14,15], and optimizing processing parameters [16,17]. Since the neural network was first proposed, researchers in the artificial intelligence (AI) field have been exploring the possibility of training deep neural networks (DNNs) with many hidden layers, like the human neural system [21,22]. The success of the previous generation of neural networks was limited to SNNs with one or two hidden layers, because training a DNN is not easy and the resulting accuracy is usually worse than that of an SNN [17]. The proposal of the restricted Boltzmann machine (RBM) and the deep belief network (DBN) by Hinton in 2006 made it possible for the first time to train DNNs containing three or more hidden layers through so-called greedy layer-wise pre-training [23,25]. Deep learning has since become the mainstream of machine learning (ML).
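The greedy layer-wise pre-training mentioned above can be sketched as follows: each hidden layer is trained as an RBM on the activations of the layer below, and the learned weights then initialize the DNN before supervised fine-tuning. This is a minimal NumPy sketch assuming inputs scaled to [0, 1] and one-step contrastive divergence (CD-1); the layer sizes, learning rate, and dataset are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.05, epochs=20):
    """Train one RBM with 1-step contrastive divergence (CD-1)
    and return its weights and hidden-unit biases."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_h = np.zeros(n_hidden)   # hidden biases
    b_v = np.zeros(n_visible)  # visible biases
    for _ in range(epochs):
        # Positive phase: hidden probabilities and samples from the data.
        p_h = sigmoid(data @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # Negative phase: one reconstruction step.
        p_v = sigmoid(h @ W.T + b_v)
        p_h2 = sigmoid(p_v @ W + b_h)
        # Contrastive-divergence parameter updates.
        W += lr * (data.T @ p_h - p_v.T @ p_h2) / len(data)
        b_h += lr * (p_h - p_h2).mean(axis=0)
        b_v += lr * (data - p_v).mean(axis=0)
    return W, b_h

def pretrain_stack(data, layer_sizes):
    """Greedy layer-wise pre-training: train an RBM per hidden layer,
    feeding each layer's activations to the next RBM. The resulting
    weights initialize the DNN before supervised fine-tuning."""
    params, x = [], data
    for n_hidden in layer_sizes:
        W, b_h = train_rbm(x, n_hidden)
        params.append((W, b_h))
        x = sigmoid(x @ W + b_h)  # activations become the next input
    return params

# Hypothetical small dataset: 487 samples, 10 scaled input features.
X = rng.random((487, 10))
init = pretrain_stack(X, layer_sizes=[32, 16, 8])  # 3 hidden layers
```

In practice, the stacked weights would be copied into the regression network, which is then fine-tuned on the labeled data with a regularized performance function such as the one sketched above.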

Challenges of studying material defects and assembling big datasets
Potential for applying DNN with small datasets
Training algorithm and Bayesian regularization
DNN pre-training
Workflow
Dataset assembling
Dataset pre-processing
Dataset division
Validation
Findings
Conclusions
