Abstract

Deep neural networks (DNNs) are reshaping many fields thanks to their ability to learn end-to-end directly from data, delivering outstanding performance, especially in image processing. However, their training requires large image datasets, and image preparation can be expensive and time-consuming. If the underlying behavior can be formulated as spatially dependent, we propose that every pixel of every image can be treated as a distinct data source, yielding a truly large pixel-based dataset with millions of elements (‘position-dependent input’), even when obtained from only a few images. The approach is applied to helium focused ion beam nanofabrication, where the cross-section of the helium-damaged region essentially resembles a lightbulb, while closer inspection reveals (i) an outer defective region in direct contact with the bulk substrate and (ii) an inner amorphous region filled with helium bubbles of gradually increasing size. Interestingly, depending on the beam energy and dose, the amorphous phase may swell upwards to form a protruding mesa, a feature that has so far resisted modeling. Through dedicated experiments on both Si and SiC substrates, together with careful image analysis (segmentation), we describe the transformations that take place in the defective and amorphous regions with increasing energy and dose, and we use the outlined position-dependent input together with a simple DNN with 4 hidden layers of 16 neurons each to describe all damage features realistically, inherently demonstrating generative behavior. To our knowledge, this is the first model to predict swelling satisfactorily. Generalization is surprisingly smooth and accurate.
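
To make the ‘position-dependent input’ idea concrete, the sketch below builds a network with the stated dimensions (4 hidden layers, 16 neurons each) and treats every pixel as one training sample. This is a minimal illustration, not the authors' implementation: the choice of PyTorch, the per-pixel feature set (pixel coordinates, beam energy, dose), and the output classes are all assumptions made here for clarity.

    # Minimal sketch of a position-dependent-input DNN (assumed PyTorch).
    # Layer sizes follow the abstract (4 hidden layers x 16 neurons); the
    # input features and output classes below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class PositionDependentNet(nn.Module):
        """Maps a per-pixel feature vector to a damage-phase prediction."""
        def __init__(self, n_features=4, n_classes=3, width=16, depth=4):
            super().__init__()
            layers, d = [], n_features
            for _ in range(depth):               # 4 hidden layers
                layers += [nn.Linear(d, width), nn.ReLU()]
                d = width
            layers.append(nn.Linear(d, n_classes))  # e.g. bulk / defective / amorphous
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return self.net(x)

    # Every pixel of every segmented cross-section becomes one sample, so a
    # few H x W images already yield millions of training rows.
    model = PositionDependentNet()
    pixels = torch.rand(1024, 4)  # dummy (x, y, energy, dose) rows per pixel
    logits = model(pixels)        # per-pixel class scores

Trained with a standard per-pixel classification loss, such a model can then be queried at arbitrary (position, energy, dose) combinations, including ones never imaged, which is what gives the approach the generative character noted above.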
