Abstract

The channelized Hotelling observer (CHO) has become a widely used approach for evaluating medical image quality, acting as a surrogate for human observers in early-stage research on the assessment and optimization of imaging devices and algorithms. The CHO is typically used to measure lesion detectability. Its popularity stems from experiments showing that the CHO's detection performance can correlate well with that of human observers. In some cases, CHO performance overestimates human performance; to counteract this effect, an internal-noise model is introduced, which allows the CHO to be tuned to match human-observer performance. Typically, this tuning is achieved using example data obtained from human observers. We argue that this internal-noise tuning step is essentially a model training exercise; therefore, just as in supervised learning, it is essential to test a CHO equipped with an internal-noise model on data distinct from those used to tune (train) the model. Furthermore, we argue that, if the CHO is to provide useful insights about new imaging algorithms or devices, the test data should reflect such potential differences from the training data; it is not sufficient simply to use new noise realizations of the same imaging method. Motivated by these considerations, the novelty of this paper is the use of new model-selection criteria to evaluate ten established internal-noise models, utilizing four different channel models, in a train-test approach. Though not the focus of the paper, a new internal-noise model is also proposed; it outperformed the ten established models in the cases tested. The results, obtained using cardiac perfusion SPECT data, show that the proposed train-test approach is necessary, as judged by the newly proposed model-selection criteria, to avoid spurious conclusions. The results also demonstrate that, in some models, the optimal internal-noise parameter is very sensitive to the choice of training data; these models are therefore prone to overfitting and are unlikely to generalize well to new data. In addition, we present an alternative interpretation of the CHO as a penalized linear regression, wherein the penalization term is defined by the internal-noise model.
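As a concrete illustration of the train-test procedure described above, the sketch below tunes one common internal-noise model (additive noise on the CHO decision variable, controlled by a single parameter alpha) so that the CHO matches a target human detectability on a training split, then reports detectability on a held-out split. Everything here is a minimal sketch under stated assumptions: the channel outputs are synthetic Gaussian data, and the channel count, covariance, and target d' are illustrative values, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic channel outputs (assumption: 4 channels with Gaussian statistics;
# a real study would apply a channel bank to reconstructed SPECT images).
n_ch, n_train, n_test = 4, 500, 500
delta = np.array([0.9, 0.5, 0.25, 0.1])   # hypothetical defect signal in channel space
cov = np.diag([1.0, 1.2, 0.9, 1.1])

def draw(n, signal_present):
    mu = delta if signal_present else np.zeros(n_ch)
    return rng.multivariate_normal(mu, cov, size=n)

v_s_tr, v_n_tr = draw(n_train, True), draw(n_train, False)
v_s_te, v_n_te = draw(n_test, True), draw(n_test, False)

# Hotelling template estimated from the training split only.
S = 0.5 * (np.cov(v_s_tr.T) + np.cov(v_n_tr.T))
w = np.linalg.solve(S, v_s_tr.mean(0) - v_n_tr.mean(0))

t_s_tr, t_n_tr = v_s_tr @ w, v_n_tr @ w
t_s_te, t_n_te = v_s_te @ w, v_n_te @ w

def dprime(t_s, t_n, alpha):
    # Decision-variable internal noise: inflating the variance of the test
    # statistic by a factor (1 + alpha) degrades CHO detectability.
    var = 0.5 * (t_s.var() + t_n.var()) * (1.0 + alpha)
    return (t_s.mean() - t_n.mean()) / np.sqrt(var)

# Tune alpha on the training split to match a target human d'
# (0.7 is an illustrative number, not a value from the paper).
target = 0.7
alphas = np.linspace(0.0, 20.0, 2001)
alpha_hat = alphas[np.argmin([(dprime(t_s_tr, t_n_tr, a) - target) ** 2
                              for a in alphas])]

# Report performance on the held-out split -- the test step the abstract argues for.
print(round(dprime(t_s_te, t_n_te, alpha_hat), 3))
```

Because alpha is fit to the training split, the held-out d' will generally differ from the target; comparing the two quantifies how well the tuned internal-noise model generalizes, which is exactly the kind of check the train-test protocol makes possible.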

Highlights

  • Image quality evaluation is a critical step in optimization of any medical imaging system or image-processing algorithm (ICRU, 1996; Barrett and Myers, 2004)

  • We have previously proposed an approach for prediction of human-observer detection performance for cardiac single-photon emission computed tomography (SPECT) defects, in which the channelized Hotelling observer (CHO) is replaced by a machine-learning algorithm (Brankov et al., 2003; Brankov et al., 2009), and we have extended this approach to diagnostic tasks other than lesion detection (Gifford et al., 2009; Marin et al., 2010; Marin et al., 2011)

  • There is no need to apply a numerical observer (NO) to images reconstructed in the same way as those used in the NO training phase, since human-observer performance for that reconstruction method is already available from the human-observer study


Introduction

Image quality evaluation is a critical step in the optimization of any medical imaging system or image-processing algorithm (ICRU, 1996; Barrett and Myers, 2004). The human observer is the principal agent of decision-making, and it is widely accepted that the diagnostic performance of the human observer is the ultimate test of medical image quality. In the data set used in this manuscript, for example, image quality is judged by the ability of a human observer to detect perfusion defects in the image. Such an approach has become known as task-based image quality assessment. However, psychophysical studies to assess human-observer performance are difficult to organize, costly, and time-consuming. Numerical observers (also known as model observers)—algorithms capable of predicting human-observer performance—have therefore gained popularity as a surrogate approach for image quality assessment.
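To make the model-observer idea concrete, the sketch below builds a small bank of rotationally symmetric difference-of-Gaussians (DOG) channels, one common channel choice in CHO studies, and reduces an image to a handful of channel outputs. The grid size, channel count, and scale parameters are illustrative assumptions, not the configuration used in this paper.

```python
import numpy as np

def dog_channels(size=64, n_channels=4, sigma0=2.0, ratio=1.4):
    """Rotationally symmetric difference-of-Gaussians channels
    (illustrative parameters; real studies tune these to the task)."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = x**2 + y**2
    cols = []
    for j in range(n_channels):
        s1 = sigma0 * ratio**j
        s2 = s1 * ratio
        g1 = np.exp(-r2 / (2 * s1**2)) / (2 * np.pi * s1**2)
        g2 = np.exp(-r2 / (2 * s2**2)) / (2 * np.pi * s2**2)
        cols.append((g2 - g1).ravel())
    return np.stack(cols, axis=1)          # shape: (size*size, n_channels)

U = dog_channels()
g = np.random.default_rng(1).normal(size=64 * 64)  # stand-in for a reconstructed image
v = U.T @ g                                        # 4096 pixels -> 4 channel outputs
```

The channel matrix U compresses each image into a short feature vector v; a Hotelling observer built on v (rather than on raw pixels) is the channelized Hotelling observer discussed throughout this paper.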
