Abstract

Automatic chest anatomy segmentation plays a key role in computer-aided diagnosis of diseases such as cardiomegaly, pleural effusion, emphysema, and pneumothorax. Among these, cardiomegaly is particularly perilous because it carries a high risk of sudden cardiac death. It can be diagnosed early by an expert medical practitioner through chest X-ray (CXR) analysis. The cardiothoracic ratio (CTR) and transverse cardiac diameter (TCD) are the clinical criteria used to estimate heart size when diagnosing cardiomegaly. Manual estimation of the CTR and of other disease indicators is time-consuming and requires significant effort from the medical expert. Cardiomegaly and related diseases can instead be assessed automatically through accurate anatomical semantic segmentation of CXRs using artificial intelligence. However, automatic segmentation of the lungs and heart from CXRs is an intensive task owing to inferior-quality images and intensity variations caused by nonideal imaging conditions. Although a few deep learning-based techniques exist for chest anatomy segmentation, most consider only single-class lung segmentation and rely on deep, complex architectures with many trainable parameters. To address these issues, this study presents two multiclass residual mesh-based CXR segmentation networks, X-RayNet-1 and X-RayNet-2, specifically designed to deliver fine segmentation performance with far fewer trainable parameters than conventional deep learning schemes. The proposed methods use semantic segmentation to support the diagnostic procedure for the related diseases. To evaluate X-RayNet-1 and X-RayNet-2, experiments were performed on the publicly available Japanese Society of Radiological Technology (JSRT) dataset for multiclass segmentation of the lungs, heart, and clavicle bones; two other publicly available datasets, the Montgomery County (MC) and Shenzhen (SC) X-ray sets, were used for lung segmentation. The experimental results show that X-RayNet-1 achieved fine performance on all datasets and that X-RayNet-2 achieved competitive performance with a 75% reduction in parameters.
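For readers unfamiliar with the measurement, the CTR mentioned above is, by standard clinical convention (not a definition specific to this study), the ratio of the maximal transverse cardiac diameter to the maximal internal thoracic diameter measured on a posteroanterior CXR, with values above roughly 0.5 commonly taken as suggestive of cardiomegaly:

\[
\mathrm{CTR} = \frac{\text{maximal transverse cardiac diameter}}{\text{maximal internal thoracic diameter}}
\]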

Highlights

  • The automatic segmentation of the chest anatomy is important for diagnosing pulmonary diseases, where the radiologist evaluates pulmonary discrepancies, such as nodules, lung deformation, and tissue mass disorders [1]

  • The lung shape features from the chest X-Ray (CXR) can be used to diagnose pleural effusion, which is directly related to tuberculosis and congestive heart failure [3]

  • Cardiomegaly can be assessed by the cardiothoracic ratio (CTR), which is measured manually by medical experts using the boundaries of the lungs and heart in CXRs [8]


Introduction

The automatic segmentation of the chest anatomy is important for diagnosing pulmonary diseases, where the radiologist evaluates pulmonary discrepancies such as nodules, lung deformation, and tissue mass disorders [1]. Automatic pulmonary disease detection using computer-aided diagnosis (CAD) relies on correct segmentation of anatomical structures such as the lungs, heart, and clavicle bones [2]. In semantic segmentation of CXRs, segmenting the lungs, heart, and clavicle bones is challenging because of low-quality images and low pixel variation. Previous studies addressed these issues with preprocessing steps or with deep networks that involve many trainable parameters, producing computationally expensive CAD solutions [23,24]. This study focuses on both the accuracy and the computational cost of chest anatomy segmentation (lungs, heart, and clavicle bones) for diagnostic purposes. X-RayNet produces a binary mask for each desired class, and these masks are used to compute the number of pixels and the positions of the segmented structures to aid the medical diagnosis of various diseases, as sketched in the example below.
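As an illustration of how such per-class masks can feed a diagnostic measurement, the sketch below counts mask pixels, extracts their positions, and estimates the CTR from binary heart and lung masks. This is a minimal example under the assumption that the network outputs one binary mask per class; it is not the authors' implementation, and the names (widest_extent, mask_statistics, cardiothoracic_ratio, HEART_CLASS, LUNG_CLASS) are hypothetical.

```python
import numpy as np


def widest_extent(mask: np.ndarray) -> int:
    """Maximal horizontal extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]  # column indices containing foreground
    return int(cols.max() - cols.min() + 1) if cols.size else 0


def mask_statistics(mask: np.ndarray) -> dict:
    """Pixel count and bounding box of a binary mask, for downstream measurements."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return {"pixels": 0, "bbox": None}
    return {"pixels": int(mask.sum()),
            "bbox": (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))}


def cardiothoracic_ratio(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """Estimate the CTR from binary heart and lung masks of one CXR.

    The transverse cardiac diameter is approximated by the widest horizontal
    extent of the heart mask, and the thoracic diameter by the widest
    horizontal extent of the combined lung mask.
    """
    cardiac = widest_extent(heart_mask)
    thoracic = widest_extent(lung_mask)
    return cardiac / thoracic if thoracic else float("nan")


# Hypothetical usage with per-class probability maps `pred` from a segmentation network:
# heart = pred[..., HEART_CLASS] > 0.5
# lungs = pred[..., LUNG_CLASS] > 0.5
# ctr = cardiothoracic_ratio(heart, lungs)
# if ctr > 0.5:   # a commonly used clinical threshold for suspected cardiomegaly
#     print(f"CTR = {ctr:.2f}: possible cardiomegaly")
```

Approximating the thoracic diameter from the lung masks alone is a simplification; in practice the inner rib-cage boundary is often used, so this sketch only indicates how segmentation output can be turned into the clinical measurement.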
