Multiplexed positron emission tomography (mPET) imaging allows simultaneous observation of physiological and pathological information from multiple tracers in a single PET scan. Although supervised deep learning has demonstrated superior performance in mPET image separation compared to purely model-based methods, acquiring large amounts of paired single-tracer and multi-tracer data for training poses a practical challenge and requires extended scan durations for patients. In addition, the generalisation ability of the supervised learning framework is a concern, as the patient being scanned and their tracer kinetics may fall outside the training distribution. In this work, we propose a self-supervised learning framework based on the deep image prior (DIP) for mPET image separation using only a single dataset. In particular, we integrate the multi-tracer compartmental model into the DIP framework to estimate the parametric maps of each tracer from the measured dynamic dual-tracer activity images. The separated dynamic single-tracer activity images can then be recovered from the estimated tracer-specific parametric maps. In the proposed method, the dynamic dual-tracer activity images serve as the training label, and the static dual-tracer image (reconstructed from the same patient data over the full acquisition, from start to end) serves as the network input. The performance of the proposed method was evaluated on a simulated brain phantom for dynamic dual-tracer [18F]FDG+[11C]MET activity image separation and parametric map estimation. The results demonstrate that the proposed method outperforms both the conventional voxel-wise multi-tracer compartmental modelling method (vMTCM) and the two-step DIP-Dn+vMTCM method (in which the dynamic dual-tracer activity images are first denoised with a U-net within the DIP framework and then separated by vMTCM), yielding lower bias and standard deviation in the separated single-tracer images and in the estimated parametric maps of each tracer, at both the voxel and ROI levels.
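The sketch below illustrates the kind of self-supervised training loop this description implies: a network maps the static dual-tracer image to tracer-specific parametric maps, a kinetic forward model converts those maps into a predicted dynamic dual-tracer signal, and the loss is computed against the measured dynamic dual-tracer activity images. It is a minimal illustration, not the authors' implementation: a small CNN stands in for the U-net, a simplified one-tissue compartment model per tracer stands in for the full multi-tracer compartmental model, and all tensor sizes, input functions, frame times, and hyperparameters are assumed for the example.

```python
# Minimal sketch of DIP-based dual-tracer separation (PyTorch), under the
# assumptions stated above. The one-tissue kinetic model, toy input functions,
# image size, and network architecture are illustrative placeholders.
import torch
import torch.nn as nn

T = 24                                   # number of dynamic frames (assumed)
H = W = 64                               # image size (assumed)
t = torch.linspace(0.5, 60.0, T)         # frame mid-times in minutes (assumed)
Cp_fdg = torch.exp(-0.1 * t)             # toy arterial input functions (assumed)
Cp_met = torch.exp(-0.2 * t)

def one_tissue_tac(K1, k2, Cp, t):
    """Time-activity curves C(t) = K1 * [exp(-k2 t) convolved with Cp(t)],
    approximated by a discrete convolution on the frame grid.
    K1, k2: [N] voxel-wise parameters; returns [N, T]."""
    dt = t[1] - t[0]
    diff = t.view(1, -1, 1) - t.view(1, 1, -1)                 # [1, T, T], t_i - t_j
    kernel = torch.exp(-k2.unsqueeze(-1).unsqueeze(-1) * diff) * (diff >= 0)
    return K1.unsqueeze(-1) * (kernel @ Cp) * dt               # [N, T]

class DIPNet(nn.Module):
    """Small CNN mapping the static dual-tracer image to four parametric maps
    (K1 and k2 for each tracer); a U-net would play this role in practice."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1), nn.Softplus(),  # keep kinetic params positive
        )
    def forward(self, x):
        return self.body(x)

# Network input: static dual-tracer image; training label: measured dynamic
# dual-tracer activity images (both assumed given, random here for the sketch).
static_img = torch.rand(1, 1, H, W)
dynamic_dual = torch.rand(1, T, H, W)

net = DIPNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(2000):
    params = net(static_img)                            # [1, 4, H, W]
    K1f, k2f, K1m, k2m = [p.reshape(-1) for p in params[0]]
    tac_fdg = one_tissue_tac(K1f, k2f, Cp_fdg, t)       # [H*W, T] single-tracer TACs
    tac_met = one_tissue_tac(K1m, k2m, Cp_met, t)
    pred_dual = (tac_fdg + tac_met).T.reshape(1, T, H, W)   # summed dual-tracer signal
    loss = torch.mean((pred_dual - dynamic_dual) ** 2)      # fit the measured dual-tracer data
    opt.zero_grad(); loss.backward(); opt.step()

# After training, tac_fdg and tac_met (reshaped to [T, H, W]) give the separated
# single-tracer dynamic images implied by the estimated parametric maps.
```

In this setup the dynamic dual-tracer data act as the only supervision, so no paired single-tracer training data are required; the separation is constrained entirely by the kinetic forward model and the implicit image prior of the network.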