Abstract

Scanning electron microscopy (SEM) is one of the most common approaches to the characterization of synthesized 2D materials such as graphene. SEM images contain detailed information about crystalline properties, domain size, and nucleation density, but they are typically analyzed through a laborious, serial process that relies on the trained eye of synthesis experts. In this work, we demonstrate an image segmentation neural network that automatically distinguishes between pixels in SEM images that correspond to regions where graphene is and is not present. We utilize the U-Net architecture to learn on a training data set of more than 90 pre-labeled images, coupled with moderate image augmentation. Comparing the performance of models trained on a smaller, high-fidelity data set to those trained on larger, low-fidelity data sets, we find that higher quality is more valuable than higher quantity for achieving good performance. When neural network hyperparameters such as batch size and learning rate are properly tuned, the learned model shows a classification accuracy of over 90% and an F1 score over 80%. The neural network trained on SEM images of graphene shows reasonable performance when directly applied to other 2D materials, suggesting its possible use in transfer learning. Detailed analysis of the inner workings of the model reveals that the domain edges are most critical for making classifications when segmenting the image. We also show the use of a post-processing technique to estimate the graphene domain size from segmented masks. This demonstration shows the potential for SEM image segmentation at scale using deep learning approaches and gives insights into best practices for improving model performance.
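The post-processing step mentioned above, estimating domain size from a segmented mask, can be sketched with connected-component analysis. The snippet below is a minimal illustration, not the authors' implementation: it labels 4-connected domains in a binary mask with a flood fill and reports each domain's equivalent circular diameter. The `pixel_size_um` calibration value is a hypothetical parameter for the sake of the example.

```python
import math
from collections import deque

def domain_sizes(mask, pixel_size_um=1.0):
    """Estimate domain sizes from a binary segmentation mask.

    mask: 2D list of 0/1 values, where 1 marks graphene pixels.
    pixel_size_um: physical edge length of one pixel (assumed calibration).
    Returns the equivalent circular diameters (in um) of each
    4-connected domain, sorted ascending.
    """
    h, w = len(mask), len(mask[0])
    visited = [[False] * w for _ in range(h)]
    diameters = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not visited[i][j]:
                # Breadth-first flood fill over one connected domain
                area_px = 0
                queue = deque([(i, j)])
                visited[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    area_px += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not visited[ny][nx]):
                            visited[ny][nx] = True
                            queue.append((ny, nx))
                # Diameter of a circle with the same physical area
                area_um2 = area_px * pixel_size_um ** 2
                diameters.append(math.sqrt(4.0 * area_um2 / math.pi))
    return sorted(diameters)

# Toy mask with two square "domains": 3x3 and 2x2 pixels
mask = [[0] * 10 for _ in range(10)]
for r in range(1, 4):
    for c in range(1, 4):
        mask[r][c] = 1
for r in range(6, 8):
    for c in range(6, 8):
        mask[r][c] = 1
diams = domain_sizes(mask, pixel_size_um=0.5)
```

In practice one would also report a domain-size distribution or mean rather than per-domain values, since nucleation density studies typically summarize many domains per image.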
