Abstract

Marker-less head motion correction methods have been well studied; however, reports discussing potential issues in positional calibration between a PET system and an external sensor remain limited. In this study, we develop a method for positional calibration between the PET system and an external range sensor to achieve practical head motion correction. The basic concept of the developed method is to use the subject's face model as a marker not only for head motion detection but also for positional calibration of the system. The face model of the subject, which can be obtained easily with the range sensor, can also be calculated from a computed tomography (CT) image of the same subject. The CT image, which is acquired separately for attenuation correction in PET, shares the same coordinates as the PET image owing to the appropriate matching algorithm between CT and PET images. The proposed method was implemented in a helmet-type PET system, and the motion correction accuracy was assessed quantitatively using a mannequin head. The phantom experiments demonstrated the performance of the developed motion correction method: high-resolution images with no trace of the applied motion were obtained, as if no motion had been applied. Statistical analysis supported the visual assessment in terms of spatial resolution, contrast recovery, and uniformity, and the results implied that motion with correction slightly improved image quality compared with the motionless case. The tolerance of the developed method against potential tracking errors was at least a 10% difference in the amplitude of the rotation angle.
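
The calibration concept described above can be illustrated as a rigid surface registration: the face point cloud captured by the range sensor is aligned to the face surface extracted from the CT image, and the resulting transform maps sensor coordinates into CT (and hence PET) coordinates. The following Python sketch assumes a generic point-to-point ICP with a Kabsch rigid fit and hypothetical inputs `sensor_face` and `ct_face` (N x 3 point arrays); it is an illustration of the general idea, not the authors' specific matching algorithm.

import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def calibrate_sensor_to_pet(sensor_face, ct_face, iters=50):
    """Align the range-sensor face point cloud to the CT-derived face surface.

    Returns a 4x4 homogeneous transform from sensor coordinates to
    CT/PET coordinates (illustrative point-to-point ICP).
    """
    tree = cKDTree(ct_face)
    src = sensor_face.copy()
    T = np.eye(4)
    for _ in range(iters):
        _, idx = tree.query(src)               # closest CT surface points
        R, t = rigid_fit(src, ct_face[idx])    # best rigid update this iteration
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T

Under these assumptions, the returned transform would let any head pose measured by the range sensor be expressed in PET image coordinates, which is the prerequisite for applying the detected motion to the reconstruction.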
