Abstract

High-entropy alloys (HEAs) offer a new approach to the design of superior metallic materials, wherein alloys are based on multiple principal elements rather than just one. Deep Neural Networks (DNNs), machine learning tools widely used for prediction tasks, are transforming fields from speech recognition to computational medicine. In this study, we extend DNN applications to the phase prediction of high-entropy alloys. Using the built-in capabilities of TensorFlow and Keras, we train DNNs with different numbers of layers and neurons, achieving 90% prediction accuracy. The DNN prediction model is examined in detail on different datasets to verify its robustness. Because synthesizing and characterizing HEAs is costly and time-consuming, reliable phase prediction is important for designing alloy compositions. Through this study, we show that trained DNNs are a viable tool for predicting the phases of high-entropy alloys.
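As an illustration of the kind of model described above, the sketch below builds a small fully connected phase classifier with Keras. The feature count, layer widths, number of phase classes, and training settings are assumptions for demonstration, not the authors' exact configuration.

```python
# Hypothetical sketch of a DNN phase classifier in TensorFlow/Keras.
# Feature count, layer sizes, and number of phase classes are assumed values.
import tensorflow as tf
from tensorflow import keras

n_features = 10   # e.g., composition-derived descriptors (assumed)
n_phases = 3      # e.g., FCC, BCC, multi-phase labels (assumed)

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(n_phases, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training and evaluation would follow the usual Keras pattern:
# model.fit(X_train, y_train, epochs=100, validation_split=0.2)
# model.evaluate(X_test, y_test)
```

Varying the number of Dense layers and their widths, as the study does, only changes the list passed to keras.Sequential; the compile and fit steps stay the same.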

Highlights

  • Machine learning (ML), a branch of artificial intelligence (AI), is based on the concept that computers can use data and learn to identify patterns and make predictions

  • We extend Deep Neural Network (DNN) applications to the phase prediction of high-entropy alloys

  • We show trained DNNs to be a viable tool for predicting the phases of high-entropy alloys, where 90% phase prediction accuracy was achieved in this work


Summary

Introduction

Machine learning (ML), a branch of artificial intelligence (AI), is based on the concept that computers (machines) can use data and learn to identify patterns and make predictions. These pattern-identification and prediction capabilities require minimal explicit programming. The computing power of ML comes from the use of Artificial Neural Networks (ANNs), which are computing systems composed of interconnected processing elements called neurons that map an array of input variables (features) to one or more outputs (labels). The data are passed to the hidden layer, and the output of each neuron in the hidden layer is calculated as h_j^net = H(h_j), where h_j is the weighted sum of the inputs reaching the j-th hidden neuron, H is the hidden-layer activation function, and h_j^net is the neuron's output.

The output layer is then calculated in the same way: each output neuron applies its activation function to the weighted sum of the n_j hidden-neuron outputs, producing the network's prediction.
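As a minimal sketch of the forward pass described above (not the authors' code), the two layer computations can be written in plain NumPy; the layer sizes, activation function, and variable names below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    # Example activation function H; the paper's actual choice is not specified here.
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(x, W_hidden, b_hidden, W_out, b_out):
    """Forward pass of a one-hidden-layer network.

    x        : input feature vector (e.g., alloy descriptors)
    W_hidden : weights mapping inputs to hidden neurons
    b_hidden : hidden-layer biases
    W_out    : weights mapping hidden outputs to output neurons
    b_out    : output-layer biases
    """
    h = W_hidden @ x + b_hidden      # net input h_j to each hidden neuron
    h_net = sigmoid(h)               # hidden output h_j^net = H(h_j)
    o = W_out @ h_net + b_out        # weighted sum of the n_j hidden outputs
    return sigmoid(o)                # output-layer activation -> prediction

# Illustrative shapes: 5 input features, 8 hidden neurons, 3 phase classes
rng = np.random.default_rng(0)
x = rng.random(5)
y = forward_pass(x,
                 rng.standard_normal((8, 5)), np.zeros(8),
                 rng.standard_normal((3, 8)), np.zeros(3))
print(y.shape)  # (3,)
```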
Results and Discussion
Findings
Conclusion

