Abstract
Since its introduction in 2015, the U-Net architecture used in deep learning has played a crucial role in medical imaging. Recognized for its ability to accurately discriminate small structures, the U-Net has received more than 2600 citations in the academic literature, which has motivated continuous enhancements to its architecture. In hospitals, chest radiography is the primary diagnostic method for pulmonary disorders; however, accurate lung segmentation in chest X-ray images remains challenging, primarily because of the large variation in lung shapes and the dense opacities caused by various diseases. This article introduces a new approach to the segmentation of lung X-ray images. The max-pooling operations commonly employed in conventional U-Net++ models were replaced with the discrete wavelet transform (DWT), offering a more accurate down-sampling technique that potentially captures detailed features of lung structures. Additionally, we used attention gate (AG) mechanisms that enable the model to focus on specific regions of the input image, improving the accuracy of the segmentation. Compared with current techniques such as Atrous Convolutions, Improved FCN, Improved SegNet, U-Net, and U-Net++, our method (U-Net++-DWT) showed remarkable efficacy, particularly on the Japanese Society of Radiological Technology dataset, achieving an accuracy of 99.1%, specificity of 98.9%, sensitivity of 97.8%, Dice coefficient of 97.2%, and Jaccard index of 96.3%. Its performance on the Montgomery County dataset further demonstrated its consistent effectiveness. Moreover, when applied to the Chest X-ray Masks and Labels and COVID-19 datasets, our method maintained high performance, achieving up to 99.3% accuracy, underscoring its adaptability and potential for broad application in medical imaging diagnostics.
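The abstract does not specify which wavelet is used or how the resulting sub-bands are recombined. As a rough illustration only, the sketch below shows a single-level Haar DWT block, written in PyTorch, that could stand in for a 2×2 max-pooling layer in a U-Net++ encoder; the choice of the Haar wavelet, the option of concatenating all four sub-bands versus keeping only the LL band, and the module name are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HaarDWTDownsample(nn.Module):
    """Single-level 2D Haar DWT as a drop-in replacement for 2x2 max-pooling (illustrative sketch).

    The input feature map is decomposed into four sub-bands (LL, LH, HL, HH),
    each at half the spatial resolution. Keeping all four bands preserves the
    high-frequency detail that max-pooling discards.
    """

    def __init__(self, concat_subbands: bool = True):
        super().__init__()
        self.concat_subbands = concat_subbands

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the feature map into its even/odd rows and columns.
        x00 = x[..., 0::2, 0::2]  # even rows, even cols
        x01 = x[..., 0::2, 1::2]  # even rows, odd cols
        x10 = x[..., 1::2, 0::2]  # odd rows, even cols
        x11 = x[..., 1::2, 1::2]  # odd rows, odd cols

        # Orthonormal Haar analysis filters expressed as sums/differences.
        ll = (x00 + x01 + x10 + x11) / 2.0  # approximation (low-low)
        lh = (x10 + x11 - x00 - x01) / 2.0  # horizontal detail
        hl = (x01 + x11 - x00 - x10) / 2.0  # vertical detail
        hh = (x00 + x11 - x01 - x10) / 2.0  # diagonal detail

        if self.concat_subbands:
            # Channels grow 4x; a following 1x1 convolution in the encoder
            # block can project them back to the expected width.
            return torch.cat([ll, lh, hl, hh], dim=1)
        # Otherwise return only the approximation band, which matches the
        # shape a 2x2 max-pool would produce.
        return ll


if __name__ == "__main__":
    feat = torch.randn(1, 64, 128, 128)          # a hypothetical encoder feature map
    print(HaarDWTDownsample(True)(feat).shape)   # torch.Size([1, 256, 64, 64])
    print(HaarDWTDownsample(False)(feat).shape)  # torch.Size([1, 64, 64, 64])
```

Because the Haar transform is invertible, no spatial information is lost at the down-sampling step, which is the property the abstract appeals to when contrasting the DWT with max-pooling.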