Machine learning has proven useful across many industries, presenting an opportunity to improve radiation detection performance with data-driven algorithms. Improved detector resolution can aid in the detection, identification, and quantification of radionuclides. In this work, a novel data-driven, unsupervised learning approach is developed to improve detector spectral characteristics by learning, and subsequently rejecting, poorly performing regions of the pixelated detector. Features are engineered by fitting each characteristic photopeak to a Doniach lineshape with a linear background model. Principal component analysis is then used to learn a lower-dimensional latent-space representation of each photopeak, in which the pixels are clustered and subsequently ranked by the distance of each cluster mean to an optimal point. Pixels within the worst cluster(s) are rejected, improving the full-width at half-maximum (FWHM) by 10% to 15% (relative to the bulk detector) at 50% net efficiency when applied to training data obtained from measurements of a 100 μCi ¹⁵⁴Eu source using an H3D M400i pixelated cadmium zinc telluride detector. These results compare well with, but do not outperform, a benchmark greedy algorithm that accumulates pixels in order of increasing FWHM. In future work, this approach can be extended to include the detector's energy and angular response. Finally, the model is applied to previously unseen natural and enriched uranium spectra relevant to nuclear safeguards applications.
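The sketch below illustrates the pipeline the abstract describes, on synthetic data: fit a Doniach lineshape plus linear background to each pixel's photopeak, embed the fitted parameters with PCA, cluster pixels in the latent space, rank clusters by distance to an optimal point, and reject the worst cluster. The lineshape follows the standard Doniach form; the pixel count, cluster count, energy window, and the choice of "optimal point" (the latent position of the narrowest-peak pixel) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the pixel-rejection pipeline (assumptions noted inline).
import numpy as np
from scipy.optimize import curve_fit
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def doniach_linear(x, amp, center, sigma, gamma, slope, intercept):
    """Doniach lineshape plus a linear background model."""
    t = (x - center) / sigma
    arg = np.pi * gamma / 2.0 + (1.0 - gamma) * np.arctan(t)
    peak = amp * np.cos(arg) / (
        sigma ** (1.0 - gamma) * (1.0 + t**2) ** ((1.0 - gamma) / 2.0)
    )
    return peak + slope * x + intercept

rng = np.random.default_rng(0)
energies = np.linspace(110.0, 135.0, 200)   # keV window around a 123 keV line
n_pixels = 128                              # illustrative pixel count

# Feature engineering: fit each pixel's spectrum, keep the fitted parameters.
p0 = [400.0, 123.0, 1.0, 0.05, 0.0, 50.0]
lo = [0.0, 118.0, 0.1, 0.0, -10.0, 0.0]
hi = [1e6, 128.0, 5.0, 0.9, 10.0, 1e4]
features = []
for _ in range(n_pixels):
    true_sigma = rng.uniform(0.6, 1.6)      # synthetic resolution spread
    model = doniach_linear(energies, 500.0, 123.1, true_sigma, 0.05, -0.5, 80.0)
    counts = rng.poisson(np.clip(model, 0, None)).astype(float)
    popt, _ = curve_fit(doniach_linear, energies, counts, p0=p0, bounds=(lo, hi))
    features.append(popt)
features = np.asarray(features)

# PCA: learn a lower-dimensional latent-space representation of the peak fits.
latent = PCA(n_components=2).fit_transform(features)

# Cluster pixels in the latent space and rank clusters by the distance of
# each cluster mean to an assumed "optimal point" (narrowest-peak pixel).
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(latent)
optimal = latent[np.argmin(features[:, 2])]     # column 2 is fitted sigma
dists = np.linalg.norm(km.cluster_centers_ - optimal, axis=1)
ranking = np.argsort(dists)                     # best ... worst cluster

# Reject pixels in the worst-ranked cluster before summing spectra.
keep = km.labels_ != ranking[-1]
print(f"kept {keep.sum()}/{n_pixels} pixels; rejected cluster {ranking[-1]}")

# Greedy benchmark: accumulate pixels in order of increasing fitted width,
# trading net efficiency for FWHM.
greedy_order = np.argsort(features[:, 2])
```

In practice the rejection threshold (how many of the worst clusters to drop) sets the trade-off between resolution and net efficiency, which is how the abstract's 10% to 15% FWHM improvement at 50% net efficiency would be read off.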