Abstract

Atom segmentation and localization, noise reduction, and deblurring of atomic-resolution scanning transmission electron microscopy (STEM) images with high precision and robustness are challenging tasks. Although several conventional algorithms, such as thresholding, edge detection, and clustering, can achieve reasonable performance in some predefined scenarios, they tend to fail when interferences from the background are strong and unpredictable. In particular, for atomic-resolution STEM images, there is so far no well-established algorithm robust enough to segment or detect all atomic columns when a recorded image contains large thickness variations. Herein, we report the development of a training library and a deep learning method that can perform robust and precise atom segmentation, localization, denoising, and super-resolution processing of experimental images. Despite being trained only on simulated images, the deep-learning model self-adapts to experimental STEM images and shows outstanding performance in atom detection and localization under challenging contrast conditions, with a localization precision that consistently outperforms the state-of-the-art two-dimensional Gaussian fit method. Taking a step further, we have deployed our deep-learning models in a free, open-source desktop app with a graphical user interface. We have also built a TEM ImageNet project website for easy browsing and downloading of the training data.
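
The two-dimensional Gaussian fit mentioned above is the conventional baseline for sub-pixel atomic-column localization. As a point of reference only, the following is a minimal sketch of how such a fit is commonly performed with SciPy on a small patch cropped around one column; the parameterization, initial guesses, and patch convention are illustrative assumptions, not the comparison pipeline used in the paper.

```python
# Illustrative 2D Gaussian fit for one atomic column (assumed baseline, not the
# paper's exact implementation): fit an elliptical Gaussian to a cropped patch
# and take the fitted centre as the sub-pixel column position.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amp, x0, y0, sx, sy, offset):
    """Elliptical 2D Gaussian evaluated on (x, y) grids, returned flattened."""
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                       + (y - y0) ** 2 / (2 * sy ** 2))) + offset
    return g.ravel()

def fit_atom_position(patch):
    """Return the (row, col) sub-pixel centre of the column in `patch`."""
    ny, nx = patch.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # initial guess: peak amplitude, centred position, ~2 px width, background offset
    p0 = [patch.max() - patch.min(), nx / 2, ny / 2, 2.0, 2.0, patch.min()]
    popt, _ = curve_fit(gaussian_2d, (x, y), patch.ravel(), p0=p0)
    return popt[2], popt[1]  # (y0, x0) in pixel units
```

A learned localizer is typically benchmarked against this kind of fit by comparing the deviation of predicted column centres from reference positions, which is the sense in which the precision comparison above should be read.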

Highlights

  • Atom segmentation and localization, noise reduction, and deblurring of atomic-resolution scanning transmission electron microscopy (STEM) images with high precision and robustness are challenging tasks

  • In non-ideal ADF-STEM images that are contaminated by noise and distortions, the atomic arrangement might still be recognizable by experienced electron microscopists, but some low-contrast atomic details might not be detectable by inexperienced operators

  • Based on our well-labeled TEM ImageNet library, we show that our encoder–decoder-type deep learning models achieve superior performance in atomic column localization, segmentation, noise reduction, and deblurring/super-resolution processing of experimental ADF-STEM images of crystal structures that were not included in the training library
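
As a concrete illustration of the idea behind a simulation-based training library, the sketch below generates one hypothetical image/label pair: atomic columns rendered as Gaussian peaks, corrupted by shot and readout noise, with the clean peak map kept as the target. The column model, dose, and noise terms are assumptions for illustration and are not the actual recipe used to build TEM ImageNet.

```python
# Hypothetical simulated training pair (noisy input, clean target); the peak
# model, dose, and noise terms are illustrative, not the TEM ImageNet recipe.
import numpy as np

rng = np.random.default_rng(0)

def simulate_pair(size=128, n_atoms=40, sigma=2.5, dose=200.0):
    """Return (noisy_image, clean_label) for one simulated ADF-STEM-like frame."""
    yy, xx = np.mgrid[0:size, 0:size]
    clean = np.zeros((size, size))
    for _ in range(n_atoms):
        cx, cy = rng.uniform(5, size - 5, size=2)   # random column position
        amp = rng.uniform(0.3, 1.0)                 # mimics thickness/Z-contrast variation
        clean += amp * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    noisy = rng.poisson(clean * dose) / dose        # shot noise at a chosen dose
    noisy = noisy + rng.normal(0.0, 0.02, clean.shape)  # detector/readout noise
    return noisy.astype(np.float32), clean.astype(np.float32)
```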


Summary

Introduction

Noise reduction and deblurring of atomic-resolution scanning transmission electron microscopy (STEM) images with high precision and robustness are challenging tasks. It is highly desirable to develop a robust method to detect and localize atoms/atomic columns and restore the atomic-scale information in non-ideal ADF-STEM images. Such methods, if available, can greatly reduce misinterpretation, bias, and human errors. We report the development of a training library and a deep learning method that can perform robust and precise atom segmentation, localization, denoising, and deblurring/super-resolution processing of experimental images. Based on our well-labeled TEM ImageNet library, we show that our encoder–decoder-type deep learning models achieve superior performance in atomic column localization, segmentation, noise reduction, and deblurring/super-resolution processing of experimental ADF-STEM images of crystal structures that were not included in the training library. The training data sets and labels are available for download, searching, and browsing at the project website.
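
For readers unfamiliar with encoder–decoder networks, the following is a minimal U-Net-style sketch in PyTorch that maps a noisy single-channel STEM image to a same-size output map (e.g., a denoised image or an atom-segmentation map). The depth, channel counts, and output head are assumptions for illustration; the paper's actual architecture, losses, and training settings are not reproduced here.

```python
# Minimal encoder-decoder (U-Net-style) sketch; hyperparameters are illustrative
# assumptions, not the paper's exact model.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, preserving spatial size."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class AtomEncoderDecoder(nn.Module):
    """Noisy STEM image (N, 1, H, W) -> same-size map (denoised image or atom mask)."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # single-channel output map

    def forward(self, x):
        e1 = self.enc1(x)                   # full-resolution features
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # upsample + skip
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # upsample + skip
        return self.head(d1)
```

Such a network would be trained on simulated (input, target) pairs and then applied directly to experimental images, consistent with the train-on-simulation, infer-on-experiment workflow described above.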


