Abstract

Objective. Deep learning-based methods have been widely used in medical imaging tasks such as detection, segmentation, and image restoration. For supervised learning methods in CT image restoration, different loss functions lead to different image quality, which may affect clinical diagnosis. In this paper, to compare commonly used loss functions and offer a better alternative, we studied a widely generalizable framework of loss functions defined in feature spaces extracted by neural networks. Approach. To incorporate prior knowledge, a CT image feature space (CTIS) loss was proposed, which learns the feature space from high-quality CT images with an autoencoder. In the absence of high-quality CT images, an alternative loss function, the random-weight (RaW) loss in the feature space of images (LoFS), was proposed; for RaW-LoFS, the feature space is defined by neural networks with random weights. Main results. In experimental studies, we applied post-reconstruction deep learning-based methods to the 2016 AAPM low-dose CT grand challenge. Compared with the widely used perceptual loss, our loss functions performed better both quantitatively and qualitatively. In addition, three senior radiologists were invited for subjective assessments of CTIS loss versus RaW-LoFS; in their judgment, the results of CTIS loss achieved better visual quality. Further, by analyzing each channel of CTIS loss, we also proposed a partially constrained CTIS loss. Significance. Our loss functions achieved favorable image quality, and this framework can be easily adapted to other tasks and fields.
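The abstract describes losses computed in a feature space rather than in pixel space; for RaW-LoFS that space is produced by a network with random weights. As a rough illustration of the idea (not the paper's actual architecture), the following minimal sketch compares two images via a single randomly weighted convolutional filter bank; all names, filter sizes, and the choice of nonlinearity here are illustrative assumptions.

```python
import numpy as np

def random_feature_loss(x, y, num_filters=8, kernel=3, seed=0):
    """Toy sketch of a random-weight feature-space loss: project both
    images through the same fixed random filter bank, then take the
    mean squared difference of the resulting feature maps.
    (Hypothetical simplification of RaW-LoFS; not the paper's model.)"""
    rng = np.random.default_rng(seed)
    # Fixed random filters define the feature space (assumption: one layer).
    w = rng.standard_normal((num_filters, kernel, kernel)) / kernel

    def feats(img):
        h, wd = img.shape
        out = np.empty((num_filters, h - kernel + 1, wd - kernel + 1))
        for f in range(num_filters):
            for i in range(h - kernel + 1):
                for j in range(wd - kernel + 1):
                    out[f, i, j] = np.sum(img[i:i + kernel, j:j + kernel] * w[f])
        # Toy nonlinearity; a real network would use e.g. ReLU layers.
        return np.abs(out)

    fx, fy = feats(np.asarray(x, dtype=float)), feats(np.asarray(y, dtype=float))
    # Distance is measured in feature space, not pixel space.
    return float(np.mean((fx - fy) ** 2))
```

Identical inputs yield zero loss, while images that differ in structure produce differing feature maps and hence a positive loss; a learned feature space (as in CTIS loss) would replace the random filters with ones trained on high-quality CT images.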
