Coincidence time resolution (CTR) is currently a key metric for assessing the performance of time-of-flight positron emission tomography (TOF-PET) detectors. In this study, we aim to improve the CTR of PET detectors by exploiting frequency-domain features. To this end, we introduce a novel approach that combines the short-time Fourier transform (STFT) with a residual neural network (ResNet) to extract features from digitized waveforms acquired from PET detectors. The proposed method is compared with the conventional constant fraction discriminator (CFD) and a convolutional neural network (CNN) on real data measured with a pair of lutetium yttrium oxyorthosilicate (LYSO)-SiPM units. It achieves an average CTR of 187 ps, which is 30.4% better than the CFD and 4.5% better than the CNN, and it reduces the average bias by 17.9% relative to the CNN. Class activation mapping (CAM) is also introduced to analyze the importance of different frequency components in time-of-flight (TOF) estimation and the network's attention to each time point; this not only underscores the utility of the STFT for accurate timing of waveforms with different rise times but also provides insight into why CNN-based methods may outperform traditional timing methods. The influences of the loss function and the data organization are also studied. The CTR improvement achieved by the proposed method will contribute to improving the signal-to-noise ratio (SNR) of PET images.
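To make the pipeline concrete, the sketch below shows one plausible way a digitized detector waveform could be converted to an STFT magnitude map and passed to a small residual network that regresses an arrival time. It is an illustrative assumption-based sketch, not the authors' implementation: the waveform length, sampling rate, STFT parameters, network size, and all names (e.g. `STFTTimingNet`) are hypothetical, and PyTorch/SciPy are assumed.

```python
# Minimal sketch (not the authors' implementation): STFT feature extraction
# followed by a small ResNet-style regressor for waveform timing.
# Waveform length (64 samples), fs, nperseg, and channel counts are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft


def waveform_to_spectrogram(waveform, fs=5e9, nperseg=16, noverlap=12):
    """Convert a 1-D digitized waveform to an STFT magnitude map."""
    _, _, Z = stft(waveform, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.abs(Z).astype(np.float32)  # shape: (freq_bins, time_frames)


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.relu(x + y)  # identity skip connection


class STFTTimingNet(nn.Module):
    """Regress a single arrival-time value from an STFT magnitude map."""
    def __init__(self, channels=16):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU())
        self.blocks = nn.Sequential(
            ResidualBlock(channels), ResidualBlock(channels))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1))

    def forward(self, x):  # x: (batch, 1, freq_bins, time_frames)
        return self.head(self.blocks(self.stem(x)))


# Example: one synthetic 64-sample waveform -> spectrogram -> time estimate.
wave = np.random.randn(64).astype(np.float32)
spec = torch.from_numpy(waveform_to_spectrogram(wave))[None, None]  # (1,1,F,T)
t_hat = STFTTimingNet()(spec)
print(t_hat.shape)  # torch.Size([1, 1])
```

In a CTR study, the estimated times from two opposing detector units would be differenced and the regressor trained (e.g. with an L1 or L2 loss) against a reference time difference; the choice of loss and data organization are among the factors examined in the paper.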