Abstract

Deployment of the Internet of Things (IoT) in medical imaging is critical for cross-modality tasks, such as PET image denoising. IoT helps to gather vast amounts of day-to-day medical big data, which is essential for training robust and well-optimized deep learning models. In this work, we propose a modified cycle-consistent generative adversarial network (CycleGAN) as a cloud-based continuous learning approach for synthesizing full-dose (FD) PET images from low-dose (LD) PET images. In continuous learning, a model is first trained with a limited amount of actual data, comprising LD and corresponding FD images, on a cloud computing platform of IoT-based PET devices. Thereafter, it is continually trained on its own new predictions that pass a criterion/filter. In this study, the criterion for accepting or rejecting a prediction is based on comparing the pixel-wise correlation between the predicted FD and actual LD images with the average correlation between the reference FD and LD images from the first training. If the bias between the correlations is under 5%, the predicted FD and actual LD images enter the training process; otherwise, they are rejected. A clinical dataset of 140 brain PET/CT images was employed for LD to FD PET transformation. The main model (M-50) was trained with 50 cases and evaluated on the main test dataset (T-1). The model was then fed 14 separate LD images, of which 10 generated FD images satisfying the abovementioned criterion. M-50 was then further trained with these 10 new datasets (yielding M-60) and tested on T-1. This approach was repeated four times, and each new model (M-70, M-80, and M-90) was tested on T-1. The results indicated that at each step the bias decreased by 7.6%, 11.96%, 18.48%, and 21.74% relative to M-50, thus confirming the potential of the proposed approach.
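The acceptance filter described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function names (`pearson_corr`, `accept_prediction`) and the relative-bias formulation are assumptions; the abstract specifies only that a prediction is accepted when the bias between its FD-LD correlation and the average reference FD-LD correlation is under 5%.

```python
import numpy as np

def pearson_corr(a, b):
    """Pixel-wise Pearson correlation between two images of equal shape."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

def accept_prediction(pred_fd, ld, ref_corr_mean, tol=0.05):
    """Accept a predicted FD image for continued training if the
    correlation between the predicted FD and the actual LD image
    deviates from the average reference FD-LD correlation
    (`ref_corr_mean`, computed on the first training set) by less
    than `tol` (5% in the paper)."""
    corr = pearson_corr(pred_fd, ld)
    bias = abs(corr - ref_corr_mean) / abs(ref_corr_mean)
    return bias < tol
```

In a continuous-learning loop, each new LD image would be passed through the current model, and only the (predicted FD, actual LD) pairs for which `accept_prediction` returns `True` would be appended to the training set for the next model iteration.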
