Head motion presents a continuing problem in brain PET studies. A wealth of motion correction (MC) algorithms have been proposed, including both hardware-based and data-driven methods. However, in most real brain PET studies, ground-truth or gold-standard motion information is unavailable, making it challenging to objectively evaluate MC quality. Image-domain metrics, such as the change in standardized uptake value (SUV) before and after MC, are commonly used for MC evaluation, but this measure lacks objectivity because 1) other factors, e.g., attenuation correction, scatter correction, and reconstruction parameters, confound the assessment of MC effectiveness; 2) SUV only reflects final image quality and cannot precisely indicate when an MC method performed well or poorly during the scan; 3) SUV is tracer-dependent, and head motion may increase or decrease SUV depending on the tracer, which complicates the evaluation. Here, we present a new algorithm, motion-corrected centroid-of-distribution (MCCOD), to perform objective quality control of measured or estimated rigid motion information. MCCOD is a three-dimensional surrogate trace of the center of the tracer distribution after performing rigid MC using the existing motion information. Using only the PET raw data, i.e., without image reconstruction, MCCOD indicates whether the motion information is accurate: inaccurate motion information typically produces abrupt changes in the MCCOD trace. MCCOD was validated using simulation studies and was tested on real studies acquired from both time-of-flight (TOF) and non-TOF scanners. A deep learning-based brain-mask segmentation was implemented and shown to be necessary for non-TOF MCCOD generation. MCCOD effectively detects abrupt translational motion errors caused by the motion-tracking hardware against a slowly varying tracer distribution, and it can be used to compare different motion estimation methods as well as to improve existing motion information.
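To make the idea concrete, below is a minimal sketch of an MCCOD-style computation, under assumptions the abstract does not specify: events are taken as time-stamped 3-D position estimates (as a TOF scanner might provide), and the measured rigid motion is supplied as a per-time-bin rotation matrix and translation vector. All names (mccod_trace, event_xyz, flag_abrupt_jumps, the 2 mm threshold) are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def mccod_trace(event_times, event_xyz, bin_edges, rotations, translations):
    """Motion-corrected centroid-of-distribution (MCCOD) trace.

    event_times : (N,) event timestamps in seconds
    event_xyz   : (N, 3) estimated event positions in scanner coordinates
    bin_edges   : (T+1,) time-bin edges in seconds
    rotations   : (T, 3, 3) rigid rotation per bin from the motion tracker
    translations: (T, 3) rigid translation per bin from the motion tracker

    Returns a (T, 3) array: one motion-corrected centroid per time bin.
    """
    n_bins = len(bin_edges) - 1
    trace = np.full((n_bins, 3), np.nan)
    for i in range(n_bins):
        in_bin = (event_times >= bin_edges[i]) & (event_times < bin_edges[i + 1])
        if not np.any(in_bin):
            continue
        # Undo the measured motion (x' = R x + t  =>  x = R^T (x' - t)),
        # then average. If the motion estimate is accurate, the centroid
        # should vary slowly with the tracer distribution; estimation
        # errors show up as abrupt jumps in the trace.
        corrected = (event_xyz[in_bin] - translations[i]) @ rotations[i]
        trace[i] = corrected.mean(axis=0)
    return trace

def flag_abrupt_jumps(trace, threshold_mm=2.0):
    """Return indices of bins where the MCCOD trace jumps more than threshold_mm."""
    step = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    return np.flatnonzero(step > threshold_mm) + 1
```

In this sketch the quality-control signal is simply the frame-to-frame displacement of the corrected centroid, which mirrors the abstract's observation that inaccurate motion information produces abrupt changes in the MCCOD trace; the non-TOF case, which the abstract says requires a deep learning-based brain mask before the centroid is meaningful, is not covered here.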