Abstract
Low-dose computed tomography (LDCT) has yet to realize its full potential benefit because of excessive quantum noise. Although learning to restore an image from two noisy observations, as in the Noise2Noise (N2N) model, has shown promise for several noise models, it does not perform well on LDCT. In this article, we have introduced a collaborative technique that trains multiple N2N generators simultaneously and learns the image representation from LDCT images. We have presented three models based on this collaborative N2N (CN) principle: CN with two generators (CN2G), CN with three generators (CN3G), and hybrid CN3G (HCN3G). The CN3G model has shown better denoised image quality than the CN2G model at the expense of one additional LDCT image. The HCN3G model combines the advantages of both by training three collaborative generators using only two LDCT images, leveraging prior work on blind source separation (BSS) with block matching 3-D (BM3D). To make the collaboration among the different generators more effective, we have introduced collaborative loss terms between them. All three methods have shown improved performance in terms of peak signal-to-noise ratio and structural similarity index compared to similar benchmark methods.
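To illustrate the collaborative N2N idea described above, the following is a minimal sketch of a CN2G-style objective: each generator is trained Noise2Noise-style against the other noisy realization, and a collaborative term pulls the two denoised outputs toward each other. The generator architecture, the MSE form of the losses, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical lightweight generator; the paper's actual architecture is not specified here.
class SmallGenerator(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def cn2g_loss(g1, g2, y1, y2, lam=0.1):
    """Illustrative CN2G objective: two N2N terms (each generator predicts the
    other noisy LDCT realization) plus a collaborative term that encourages the
    two denoised outputs to agree. `lam` is an assumed weighting."""
    d1, d2 = g1(y1), g2(y2)
    n2n = F.mse_loss(d1, y2) + F.mse_loss(d2, y1)
    collab = F.mse_loss(d1, d2)
    return n2n + lam * collab

# Toy usage with random tensors standing in for two LDCT realizations of one slice.
g1, g2 = SmallGenerator(), SmallGenerator()
opt = torch.optim.Adam(list(g1.parameters()) + list(g2.parameters()), lr=1e-4)
y1, y2 = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
loss = cn2g_loss(g1, g2, y1, y2)
opt.zero_grad()
loss.backward()
opt.step()
```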