Abstract

Multilabel X-ray image learning tasks carry rich information about pathology co-occurrence and interdependency, which is important for clinical diagnosis. The challenge, however, lies in accurately diagnosing multiple diseases from a single X-ray image: the pathologies manifest at multiple feature levels and produce feature patterns different from those seen in single-label detection. Various deep learning architectures have been proposed to address this challenge, improving classification performance and enriching diagnosis results with per-disease probabilities. The objective is an accurate, fast inference system that supports rapid diagnosis in clinical settings. To advance this state of the art, we designed a fusion architecture combining CheXNet and a Feature Pyramid Network (FPN) to classify and discriminate multiple thoracic diseases in chest X-rays. The fusion enables the model to build a pyramid of feature maps at different spatial resolutions, capturing both low-level detail and high-level semantic information needed to detect multiple co-occurring findings. We evaluate the model on the NIH ChestX-ray14 dataset, using Area Under the Curve (AUC) and accuracy to compare against other state-of-the-art approaches. Our method outperforms these approaches, achieving an average AUC of 0.846 and an accuracy of 0.914, and is thus promising for multilabel disease classification in chest X-rays, with potential applications in clinical practice. Furthermore, the proposed architecture diagnoses an image in 0.013 s, faster than the latest approaches.
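The abstract does not give the fusion details, so the following is only a minimal sketch of one plausible reading: a DenseNet-121 backbone (the network underlying CheXNet) tapped at three scales and fused through an FPN-style top-down pathway, with pooled pyramid levels concatenated for 14-label sigmoid classification. The class name `CheXNetFPN`, the choice of tap points, the FPN width of 256, and the concatenation fusion are all assumptions, not the authors' published architecture.

```python
# Hypothetical CheXNet + FPN fusion sketch (not the authors' exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class CheXNetFPN(nn.Module):
    """DenseNet-121 backbone with an FPN head for multilabel chest X-rays."""
    def __init__(self, num_classes=14, fpn_channels=256):
        super().__init__()
        # CheXNet is built on DenseNet-121; "DEFAULT" loads ImageNet weights.
        stages = list(torchvision.models.densenet121(weights="DEFAULT").features.children())
        # Tap three scales (assumed): after denseblock2 (stride 8, 512 ch),
        # denseblock3 (stride 16, 1024 ch), and denseblock4+norm5 (stride 32, 1024 ch).
        self.c3 = nn.Sequential(*stages[:7])
        self.c4 = nn.Sequential(*stages[7:9])
        self.c5 = nn.Sequential(*stages[9:])
        # 1x1 lateral convs project every scale to a common width.
        self.lat3 = nn.Conv2d(512, fpn_channels, 1)
        self.lat4 = nn.Conv2d(1024, fpn_channels, 1)
        self.lat5 = nn.Conv2d(1024, fpn_channels, 1)
        # 3x3 convs smooth the merged top-down maps.
        self.smooth3 = nn.Conv2d(fpn_channels, fpn_channels, 3, padding=1)
        self.smooth4 = nn.Conv2d(fpn_channels, fpn_channels, 3, padding=1)
        # One logit per pathology; ChestX-ray14 defines 14 labels.
        self.classifier = nn.Linear(3 * fpn_channels, num_classes)

    def forward(self, x):
        c3 = self.c3(x)
        c4 = self.c4(c3)
        c5 = self.c5(c4)
        # Top-down pathway: upsample coarser maps and add the laterals.
        p5 = self.lat5(c5)
        p4 = self.smooth4(self.lat4(c4) +
                          F.interpolate(p5, size=c4.shape[-2:], mode="nearest"))
        p3 = self.smooth3(self.lat3(c3) +
                          F.interpolate(p4, size=c3.shape[-2:], mode="nearest"))
        # Pool each pyramid level and fuse by concatenation (assumed fusion).
        pooled = [F.adaptive_avg_pool2d(p, 1).flatten(1) for p in (p3, p4, p5)]
        return self.classifier(torch.cat(pooled, dim=1))  # raw logits

# Multilabel use: train with BCEWithLogitsLoss; at inference,
# sigmoid(logits) yields independent per-disease probabilities.
model = CheXNetFPN()
probs = torch.sigmoid(model(torch.randn(1, 3, 224, 224)))
```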
