Abstract

Supervised deep learning methods have been successfully applied in medical imaging. However, training deep learning systems typically requires ample annotated data. Due to cost and time constraints, not all collected medical images, e.g., chest X-rays (CXRs), can be labeled in practice. To classify these unlabeled images, one solution is to adopt a model trained with sufficient labeled data from a relevant domain (with both source and target being CXRs). However, domain shift may prevent the trained model from generalizing well to unlabeled target datasets. This work aims to develop a novel unsupervised domain adaptation (UDA) framework to improve recognition performance on unlabeled target data. We present a semantically preserving adversarial UDA network, i.e., SPA-UDA net, with the potential to bridge the domain gap by reconstructing images in the target domain via an adversarial encode-and-reconstruct translation architecture. To preserve the class-specific semantic information (i.e., with or without disease) of the original images during translation, a semantically consistent framework is embedded. This framework is designed to guarantee that fine-grained disease-related information in the original images is transferred safely. Furthermore, the proposed SPA-UDA net does not require paired images from the source and target domains during training, which significantly reduces the cost of preparing data and makes it well suited to UDA. We evaluate the proposed SPA-UDA net on two public CXR datasets for lung disease recognition. The experimental results show that the proposed framework achieves significant performance improvements over other state-of-the-art UDA methods.
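The abstract describes three ingredients: an adversarial translation objective, an encode-and-reconstruct term, and a semantic-consistency constraint that keeps disease-related class information intact after translation. As a rough illustration only, the sketch below shows how such loss terms are commonly combined; all function names, loss weights, and formulations here are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of loss terms in the spirit of SPA-UDA's abstract.
# The exact losses, weights, and network details are NOT given in the
# abstract; everything below is an illustrative assumption.
import numpy as np

EPS = 1e-8  # numerical stability for logs


def reconstruction_loss(x_target, x_translated):
    """L1 reconstruction term: the translated image should resemble
    the target-domain image (encode-and-reconstruct objective)."""
    return float(np.mean(np.abs(x_target - x_translated)))


def adversarial_loss(d_scores_translated):
    """Non-saturating generator-side adversarial term: push the
    discriminator's scores on translated images toward 1 ('real')."""
    return float(-np.mean(np.log(d_scores_translated + EPS)))


def semantic_consistency_loss(p_original, p_translated):
    """Cross-entropy between class posteriors (e.g., disease vs. no
    disease) computed on the original and translated images, so that
    class-specific semantics survive translation."""
    return float(-np.mean(
        np.sum(p_original * np.log(p_translated + EPS), axis=-1)))


def total_loss(x_t, x_tr, d_fake, p_orig, p_tr,
               lam_rec=10.0, lam_sem=1.0):
    """Weighted sum of the three terms; lam_rec and lam_sem are
    made-up example weights."""
    return (adversarial_loss(d_fake)
            + lam_rec * reconstruction_loss(x_t, x_tr)
            + lam_sem * semantic_consistency_loss(p_orig, p_tr))
```

Note that because no paired source/target images are assumed, the reconstruction and consistency terms compare each image against its own translated version rather than against a paired counterpart.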


