Abstract

Intense short-wavelength pulses from free-electron lasers and high-harmonic-generation sources enable diffractive imaging of individual nanosized objects with a single x-ray laser shot. The enormous data sets with up to several million diffraction patterns present a severe problem for data analysis because of the high dimensionality of imaging data. Feature recognition and selection is a crucial step to reduce the dimensionality. Usually, custom-made algorithms are developed at considerable effort to approximate the particular features connected to an individual specimen, but because they face different experimental conditions, these approaches do not generalize well. On the other hand, deep neural networks are the principal instrument for today's revolution in automated image recognition, a development that has not been adapted to its full potential for data analysis in science. We recently published [Langbehn et al., Phys. Rev. Lett. 121, 255301 (2018)] the application of a deep neural network as a feature extractor for wide-angle diffraction images of helium nanodroplets. Here we present the setup, our modifications, and the training process of the deep neural network for diffraction image classification and its systematic benchmarking. We find that deep neural networks significantly outperform previous attempts at sorting and classifying complex diffraction patterns and are a significant improvement for the much-needed assistance during postprocessing of large amounts of experimental coherent diffraction imaging data.

Highlights

  • Coherent diffraction imaging (CDI) experiments of single particles in free flight have proven to be a significant asset in the pursuit of understanding the structural composition of nanoscaled matter [1,2,3,4,5,6]

  • We give a general introduction to the capabilities of neural networks and provide results on the first domain adaptation of neural networks for the use case of diffraction images as input data

  • The main additions of this paper are (i) an activation function that incorporates the intrinsic logarithmic intensity scaling of diffraction images, (ii) an evaluation of the impact of different training set sizes on the performance of a trained network, and (iii) the use of the pointwise cross-correlation function to improve robustness against very noisy data
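The first of these additions, a logarithmic activation function, can be sketched briefly. Diffraction intensities span many orders of magnitude, so an activation that grows logarithmically compresses that dynamic range, much like plotting a diffraction pattern on a log scale. The signed-log1p form below is an illustrative assumption, not necessarily the exact form defined in the paper:

```python
import math

def log_activation(x: float) -> float:
    """Illustrative logarithmic activation (hypothetical form).

    Compresses the large dynamic range of diffraction intensities:
    near zero it behaves almost linearly, while large magnitudes
    grow only logarithmically. The odd symmetry keeps the sign of
    the input, so negative pre-activations are handled as well.
    """
    return math.copysign(math.log1p(abs(x)), x)

# Small inputs pass through almost unchanged; large ones are compressed:
print(log_activation(0.1))       # close to 0.1
print(log_activation(10_000.0))  # reduced to roughly 9.2
```

Because the function is monotonic and differentiable everywhere, it can serve as a drop-in nonlinearity during backpropagation, which is what makes this kind of intensity rescaling attractive inside a network rather than as a preprocessing step.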


Summary

INTRODUCTION

Coherent diffraction imaging (CDI) experiments of single particles in free flight have proven to be a significant asset in the pursuit of understanding the structural composition of nanoscaled matter [1,2,3,4,5,6]. Neural networks, on the other hand, have already been applied to a broad range of physics-related problems, including predicting topological ground states [30], distinguishing different topological phases of topological band insulators [31], enhancing the signal-to-noise ratio at hadron colliders [32], differentiating between so-called known-physics background and new-physics signals at the Large Hadron Collider [33], and solving the Schrödinger equation [34,35]. Their ability to classify images has been utilized in cryoelectron microscopy [36], medical imaging [37], and even for hit finding in serial x-ray crystallography [38]. We give a summary of the principal results and unique propositions of this paper and conclude with an outlook on further modifications as well as future directions.

THE DATA
WHAT IS A DEEP NEURAL NETWORK
  • Affine transformations
  • Activation functions
  • The forward pass
  • The backward correction
  • Training setup
  • Evaluate the predictions
  • Evaluating a deep neural network
BASELINE PERFORMANCE OF NEURAL NETWORKS WITH CDI DATA
ADAPTING NEURAL NETWORKS FOR CDI DATA
  • The logarithmic activation function
  • Size of the training set
  • Using two-point cross-correlation maps to be more robust to noise
WHAT THE NEURAL NETWORK SAW
SUMMARY AND OUTLOOK
Pooling
Findings
Batch normalization
