Abstract
In whole-body dynamic positron emission tomography (PET), inter-frame subject motion causes spatial misalignment and degrades parametric imaging. Most current deep learning inter-frame motion correction techniques focus solely on the anatomy-based registration problem, neglecting the tracer kinetics, which carry functional information. To directly reduce the Patlak fitting error for 18F-FDG and further improve model performance, we propose an inter-frame motion correction framework with Patlak loss optimization integrated into the neural network (MCP-Net). MCP-Net consists of a multiple-frame motion estimation block, an image-warping block, and an analytical Patlak block that performs Patlak fitting using the motion-corrected frames and the input function. A novel Patlak loss penalty component based on the mean squared percentage fitting error is added to the loss function to reinforce the motion correction. The parametric images were generated using standard Patlak analysis following motion correction. Our framework enhanced the spatial alignment of both dynamic frames and parametric images and lowered the normalized fitting error compared with both conventional and deep learning benchmarks. MCP-Net also achieved the lowest motion prediction error and showed the best generalization capability. These results suggest the potential of directly utilizing tracer kinetics to enhance network performance and improve the quantitative accuracy of dynamic PET.
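To make the kinetic-modeling terms concrete, the following is a minimal NumPy sketch of standard Patlak analysis and a mean-squared-percentage fitting-error penalty of the kind the abstract describes. The function names, the trapezoidal integration of the input function, and the `t_star_idx`/`eps` parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def _cum_integral(cp, t):
    # Trapezoidal cumulative integral of the plasma input function Cp over time.
    return np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))

def patlak_fit(tac, cp, t, t_star_idx=0):
    """Standard Patlak analysis: linear fit of y = Ki * x + V for t >= t*,
    where x(t) = (integral of Cp from 0 to t) / Cp(t) and y(t) = C(t) / Cp(t).
    tac: tissue time-activity curve; cp: plasma input function; t: frame mid-times.
    """
    integ = _cum_integral(cp, t)
    x = integ[t_star_idx:] / cp[t_star_idx:]
    y = tac[t_star_idx:] / cp[t_star_idx:]
    A = np.stack([x, np.ones_like(x)], axis=1)
    (ki, v), *_ = np.linalg.lstsq(A, y, rcond=None)
    return ki, v

def patlak_loss(tac, cp, t, t_star_idx=0, eps=1e-6):
    # Mean squared percentage error between the Patlak-model prediction
    # and the measured tissue activity (a sketch of the loss penalty idea).
    ki, v = patlak_fit(tac, cp, t, t_star_idx)
    pred = ki * _cum_integral(cp, t) + v * cp
    rel = (pred[t_star_idx:] - tac[t_star_idx:]) / (tac[t_star_idx:] + eps)
    return np.mean(rel ** 2)
```

On noise-free data that truly follows the Patlak model, `patlak_fit` recovers the slope Ki and intercept V exactly, and `patlak_loss` is near zero; residual motion misaligns the time-activity curves and inflates this loss, which is what the penalty exploits.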