Abstract

In this work, we investigated the potential for improving the image quality and quantitative accuracy of dynamic Altropane SPECT scans. We evaluated fully-4D reconstruction techniques as well as conventional frame-by-frame reconstruction approaches, and their ability to provide consistent striatal binding ratios for both normal subjects and patients with attention deficit hyperactivity disorder (ADHD). We also used the Zubal brain phantom to compare the different reconstruction strategies. A 3-headed Picker-3000 SPECT system fitted with low-energy ultrahigh-resolution fan-beam collimators acquired 10 frames of dynamic SPECT data in 4-min acquisitions over a period of 40 minutes. Each set of 128 × 128 projections was acquired at 120 angles over 360°. The dynamic data sequence was reconstructed using two conventional frame-by-frame reconstruction methods, namely filtered back-projection (FBP) with multiplicative Chang attenuation correction (AC) and rescaled block-iterative expectation maximization (RBI-EM) with uniform AC. We also considered two fully-4D reconstruction methods: KL-EM, which exploits the Karhunen-Loève transform (or principal component analysis), and a dynamic expectation-maximization algorithm, DSPECT, which incorporates inequality constraints on the reconstructed voxel activity over time. Both 4D methods were implemented with uniform AC. The results show that both 4D methods are more robust in handling the low-count dynamic data. Derived parameters such as the binding ratio, however, may be relatively insensitive to noise, although in the patient study the FBP binding ratios differed greatly from those of the other reconstruction strategies.
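The two quantities at the heart of the abstract can be illustrated with a minimal sketch. The Karhunen-Loève transform used by KL-EM decorrelates the time frames of a dynamic sequence so that noise can be suppressed by retaining only the leading principal components; the striatal binding ratio is conventionally computed as the specific-to-nonspecific count ratio (striatum minus reference region, divided by the reference). The function names, array shapes, and the exact component-selection and reference-region choices below are illustrative assumptions, not the paper's implementation (which applies the KL transform within an EM reconstruction of projection data):

```python
import numpy as np

def kl_temporal_filter(frames, n_components):
    """Denoise a dynamic sequence by keeping the leading Karhunen-Loeve
    (principal) components along the time axis.

    frames: array of shape (T, N) -- T time frames, N voxels each.
    """
    mean = frames.mean(axis=1, keepdims=True)       # per-frame mean, (T, 1)
    centered = frames - mean
    # Temporal covariance (T x T) and its eigendecomposition
    cov = centered @ centered.T / frames.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues ascending
    # Orthonormal basis of the n_components largest-eigenvalue directions
    basis = eigvecs[:, -n_components:]
    # Project onto that basis and map back to the time domain
    return basis @ (basis.T @ centered) + mean

def binding_ratio(striatal_counts, reference_counts):
    """Specific-to-nonspecific binding ratio:
    (striatum - reference) / reference."""
    return (striatal_counts - reference_counts) / reference_counts
```

Keeping all T components reproduces the input exactly (the basis is then complete), so the amount of temporal smoothing is controlled entirely by how many low-eigenvalue components are discarded.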
