Abstract
Scatter compensation (SC) and attenuation correction (AC) are vital steps towards quantitative PET image analysis. Recently, deep learning algorithms have been applied in the image domain for SC and AC. Building a generalizable and reproducible deep learning model requires a large dataset to tune millions of model parameters, and because of the sensitivity of medical images and strict regulations, gathering such a dataset is the main challenge in medical applications. In this work, non-attenuation-corrected and CT-based attenuation-corrected ¹⁸F-FDG PET images of 300 patients were enrolled. The data comprised 50 patients from each of 6 different centers, with scanners, image acquisition, and reconstruction protocols varying across centers. We used a deep residual network as the main architecture; the network consists of twenty convolutional layers in which low-, medium-, and high-level feature extraction was performed by dilated kernels. For model evaluation, the voxel-wise mean error (ME), mean absolute error (MAE), relative error (RE%), absolute relative error (ARE%), and structural similarity index (SSIM) were calculated between the ground-truth CT-based attenuation/scatter-corrected and predicted PET images. We implemented a federated learning workflow with server-side aggregation, which comprised 3 steps: (1) the central global model was distributed to the different departments, (2) the model was trained in each center separately, and (3) the locally trained models were returned to the central server and aggregated into a new central global model. Steps 1-3 were repeated until the model was fully trained and converged. Quantitative analysis showed an ME of 0.05±0.1, MAE of 0.43±0.01, RE of 2.74±5.7%, ARE of 15.0±8.8%, and SSIM of 0.90±0.09 in the test set. In this study, we built a deep learning-based AC/SC model for PET images using data from 6 different centers without sharing the datasets. Federated learning algorithms provide the opportunity to build a model using multicenter datasets without sharing data.
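The abstract describes a twenty-convolution residual network with dilated kernels but does not give the exact layer configuration. The following PyTorch sketch shows one plausible layout; the channel width (64) and the dilation schedule (1, 2, 4) are assumptions, not the authors' specification.

```python
import torch.nn as nn

class DilatedResBlock(nn.Module):
    """Residual block built from two dilated 3x3 convolutions."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Skip connection preserves low-level detail of the NAC input.
        return self.relu(x + self.conv(x))

class DilatedResNet(nn.Module):
    """Hypothetical 20-convolution residual regressor mapping
    non-attenuation-corrected PET slices to AC/SC PET slices:
    1 head conv + 9 blocks x 2 convs + 1 tail conv = 20 layers."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.head = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        # Growing then shrinking dilation rates widen the receptive
        # field for low-, medium-, and high-level context (assumed).
        self.body = nn.Sequential(
            *[DilatedResBlock(channels, d)
              for d in (1, 1, 2, 2, 4, 4, 2, 2, 1)]
        )
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.tail(self.body(self.head(x)))
```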
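The reported figures are standard voxel-wise measures. A minimal NumPy/scikit-image sketch of their computation follows; the `eps` guard against zero-activity voxels in the relative errors is an assumption, since the abstract does not state how such voxels were handled.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> dict:
    """Voxel-wise metrics between predicted and CT-based AC/SC PET."""
    diff = pred - gt
    me = diff.mean()                                    # mean error
    mae = np.abs(diff).mean()                           # mean absolute error
    re = 100.0 * (diff / (gt + eps)).mean()             # relative error (%)
    are = 100.0 * (np.abs(diff) / (gt + eps)).mean()    # absolute relative error (%)
    ssim = structural_similarity(gt, pred,
                                 data_range=gt.max() - gt.min())
    return {"ME": me, "MAE": mae, "RE%": re, "ARE%": are, "SSIM": ssim}
```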
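A minimal sketch of one round of the three-step distribute/train/aggregate loop is shown below, assuming simple FedAvg-style weight averaging on the server (the abstract says only that local models are aggregated into the global model, not which aggregation rule is used). `centers` and `train_one_center` are hypothetical placeholders for the per-center datasets and local training routine.

```python
import copy
import torch

def federated_round(global_model, centers, train_one_center):
    """One round: (1) distribute the global model, (2) train locally
    at each center, (3) aggregate local weights on the server."""
    local_states = []
    for center_data in centers:
        # Step 1: each center receives a copy of the global model.
        local_model = copy.deepcopy(global_model)
        # Step 2: local training; patient data never leaves the center.
        train_one_center(local_model, center_data)
        local_states.append(local_model.state_dict())

    # Step 3: server averages the local weights (FedAvg-style, assumed)
    # into a new central global model.
    avg_state = copy.deepcopy(local_states[0])
    for key in avg_state:
        stacked = torch.stack([s[key].float() for s in local_states])
        avg_state[key] = stacked.mean(dim=0).to(local_states[0][key].dtype)
    global_model.load_state_dict(avg_state)
    return global_model

# Repeating federated_round until convergence corresponds to
# iterating steps 1-3 as described in the abstract.
```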