Abstract

Single-shot imaging with femtosecond X-ray lasers is a powerful measurement technique that can achieve both high spatial and temporal resolution. Its accuracy, however, has been severely limited by the difficulty of applying conventional noise-reduction processing. This study validates deep-learning-based noise-reduction techniques, with autoencoders serving as the learning model. Focusing on the diffraction patterns of nanoparticles, we simulated a large dataset by treating the nanoparticles as assemblies of many independent atoms. Three neural-network architectures were investigated: a neural network, a convolutional neural network and a U-net, with the U-net showing superior performance in noise reduction and in reproducing subphoton signals. We also extended the models so that they can be applied to diffraction patterns from particle shapes different from those in the simulated data. We then applied the U-net model to a coherent diffractive imaging study in which a nanoparticle in a microfluidic device was exposed to a single X-ray free-electron laser pulse. After noise reduction, the reconstructed nanoparticle image improved significantly even though the nanoparticle shape differed from that in the training data, highlighting the importance of transfer learning.
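
To illustrate the kind of model the abstract describes, the following is a minimal sketch (not the authors' code) of a U-net-style denoising autoencoder trained on simulated diffraction patterns corrupted by photon-counting noise. The layer widths, the 64x64 single-channel input size, the random stand-in data, and the Poisson noise model are all illustrative assumptions, not details taken from the paper.

```python
# Minimal U-net-style denoising autoencoder sketch (assumptions noted above).
import torch
import torch.nn as nn

class SmallUNet(nn.Module):
    def __init__(self, base=16):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
            )
        self.enc1 = block(1, base)            # encoder, full resolution
        self.pool = nn.MaxPool2d(2)           # downsample by 2
        self.enc2 = block(base, base * 2)     # encoder, half resolution
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)  # upsample
        self.dec1 = block(base * 2, base)     # decoder, takes skip connection
        self.out = nn.Conv2d(base, 1, 1)      # map back to one channel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # U-net skip
        return self.out(d1)

# Toy training loop: the noisy input is Poisson photon counts drawn from a
# "clean" low-intensity pattern; the target is the clean pattern itself.
model = SmallUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    clean = torch.rand(8, 1, 64, 64) * 0.5   # stand-in for simulated patterns
    noisy = torch.poisson(clean)             # photon-counting (shot) noise
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
```

In this sketch the skip connection (the `torch.cat` of encoder and decoder features) is what distinguishes the U-net from the plain convolutional autoencoder, letting the decoder reuse high-resolution detail lost in downsampling.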
