Abstract

Dynamic acquisition of temporal images has been proposed for a more complete measurement of tracer kinetics. The dynamic approach applied here consisted of an initial blood-pool (cardiac) scan followed by 13 whole-body passes, from which whole-body Ki parametric maps were estimated using Patlak graphical analysis. Two difficulties that have kept this method from clinical adoption are Image-Derived Input Function (IDIF) estimation and the long acquisition time. To tackle these issues, a deep convolutional neural network (DCNN) model was developed to produce Ki maps from Standardized Uptake Value (SUV) images and dynamic passes. The method requires no input function and optimizes the acquisition procedure toward a smaller number of passes for dynamic whole-body PET. The network used here is a high-resolution residual architecture with 20 convolutional layers. PET/CT images from 19 adult patients who underwent 18F-FDG PET for staging or restaging of lung or abdominal lesions were used in the training phase to generate reference Ki-Patlak images. A nine-fold cross-validation scheme was used for training and testing of the algorithm. Input data were categorized into two groups, with and without SUV images, and one pass at a time was added to the input data set, starting from pass 13. For evaluation, the Mean Absolute Error (MAE), Mean Error (ME), Mean Relative Absolute Error (MRAE), Relative Error (RE%), Root Mean Squared Error (RMSE), Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM) were calculated voxel-wise within the body. Lesions indicated in the patients' clinical reports were located in the images, and their visibility in the reference and predicted images was assessed. The RE% for SUV plus passes 13 down to 9 as input data was 7.45 ± 0.94%. Visually, in cases where the Ki-Patlak images detected lesions that were not visible in the SUV images, the lesions remained detectable in the predicted images as well. Our results demonstrate that using only the last three passes as input to the deep learning model yields qualitatively and quantitatively acceptable images.
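
For context, Patlak graphical analysis estimates Ki as the slope of the late, linear portion of the plot of C_T(t)/C_p(t) against (integral of C_p from 0 to t)/C_p(t), where C_T is the tissue time-activity curve and C_p is the plasma input function (here, an IDIF). The sketch below illustrates a voxel-wise Patlak fit in NumPy; the function name, array shapes, and the choice of the linearity onset t* are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def patlak_ki(tac, cp, t, t_star_idx):
        """Voxel-wise Patlak graphical analysis (illustrative sketch).

        tac        : (n_voxels, n_frames) tissue time-activity curves
        cp         : (n_frames,) plasma input function, e.g. an IDIF;
                     assumed > 0 at every frame (late whole-body passes)
        t          : (n_frames,) frame mid-times
        t_star_idx : index of the first frame in the linear (post-t*) regime

        Returns a (n_voxels,) array of Ki estimates (the Patlak slope).
        """
        # Running trapezoidal integral of the input function, int_0^t Cp dtau
        int_cp = np.concatenate(
            [[0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))]
        )

        # Patlak coordinates: x = int Cp / Cp, y = C_T / Cp
        x = int_cp / cp                 # (n_frames,)
        y = tac / cp                    # (n_voxels, n_frames)

        # Ordinary least-squares slope over the late (linear) frames only
        xs, ys = x[t_star_idx:], y[:, t_star_idx:]
        xc = xs - xs.mean()
        yc = ys - ys.mean(axis=1, keepdims=True)
        return (xc * yc).sum(axis=1) / (xc ** 2).sum()

    # Hypothetical usage: 13 whole-body passes, linearity assumed from pass 4 on
    # ki_map = patlak_ki(tacs, idif, frame_times, t_star_idx=3).reshape(vol_shape)

The evaluation metrics listed in the abstract can be computed along similar lines; the exact masking and normalization used in the paper are not specified here, so the following is only an illustrative sketch (assuming scikit-image for PSNR and SSIM):

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate(pred, ref, body_mask):
        """Voxel-wise agreement between predicted and reference Ki volumes."""
        p, r = pred[body_mask], ref[body_mask]
        mae  = np.mean(np.abs(p - r))
        me   = np.mean(p - r)
        rmse = np.sqrt(np.mean((p - r) ** 2))
        nz   = r > 0                          # avoid division by zero in RE%
        re   = 100.0 * np.mean((p[nz] - r[nz]) / r[nz])
        # PSNR/SSIM computed over the full volumes rather than the mask
        rng  = ref.max() - ref.min()
        psnr = peak_signal_noise_ratio(ref, pred, data_range=rng)
        ssim = structural_similarity(ref, pred, data_range=rng)
        return {"MAE": mae, "ME": me, "RMSE": rmse,
                "RE%": re, "PSNR": psnr, "SSIM": ssim}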
