Abstract
In this study, we present an approach that enables high-throughput characterization of transition-metal dichalcogenides (TMDs) across layer numbers, including mono-, bi-, tri-, four-layer, and multilayer samples, using a generative deep learning-based image-to-image translation method. Graphical features, including contrast, color, shape, flake size, and their distributions, were extracted using color-based segmentation of optical images together with Raman and photoluminescence spectra of chemical vapor deposition-grown and mechanically exfoliated TMDs. Labeled images for identifying and characterizing TMDs were generated with the pix2pix conditional generative adversarial network (cGAN), trained on only a limited data set. Furthermore, the model successfully characterized TMD heterostructures, demonstrating adaptability across diverse material compositions.

Impact Statement
Deep learning has been used to identify and characterize transition-metal dichalcogenides (TMDs). Although studies leveraging convolutional neural networks have shown promise in analyzing the optical, physical, and electronic properties of TMDs, they require extensive data sets and generalize poorly when data are limited. This work introduces a generative deep learning (DL)-based image-to-image translation method for high-throughput TMD characterization. Employing a DL-based pix2pix cGAN, our method provides insights into the graphical features, layer numbers, and distributions of TMDs, even with limited data sets. Notably, we demonstrate the scalability of the model through successful characterization of different heterostructures, showcasing its adaptability across diverse material compositions.
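The color-based segmentation step mentioned above, in which optical contrast against the substrate distinguishes layer numbers, can be sketched roughly as follows. This is a minimal illustration with an assumed grayscale representation and made-up contrast thresholds and reflectance values; it is not the paper's actual pipeline or its calibrated parameters.

```python
import numpy as np

def segment_by_contrast(image, background, thresholds):
    """Assign each pixel a layer label by optical-contrast binning.

    image, background: float arrays in [0, 1] (grayscale optical intensity).
    thresholds: ascending contrast cutoffs separating substrate/mono-/bi-/...
    Returns an integer label map (0 = substrate, 1 = monolayer, 2 = bilayer, ...).
    """
    # Optical contrast of each pixel relative to the bare substrate.
    contrast = (background - image) / np.clip(background, 1e-6, None)
    # Bin contrast values into layer classes (hypothetical cutoffs).
    return np.digitize(contrast, thresholds)

# Illustrative synthetic image: substrate reflectance 0.9, a weak-contrast
# "monolayer" patch at 0.82, and a stronger-contrast "bilayer" patch at 0.74.
img = np.full((8, 8), 0.9)
img[2:4, 2:4] = 0.82   # contrast ~0.089 -> bin 1 (monolayer)
img[5:7, 5:7] = 0.74   # contrast ~0.178 -> bin 2 (bilayer)
labels = segment_by_contrast(img, np.full_like(img, 0.9), thresholds=[0.05, 0.13])
print(int(labels[0, 0]), int(labels[2, 2]), int(labels[5, 5]))  # 0 1 2
```

In practice such thresholds would be calibrated per material and substrate (and per color channel), and the resulting label maps are the kind of annotated images the pix2pix cGAN learns to produce from raw optical micrographs.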