Abstract

Lung opacities must be monitored closely by physicians, since misdiagnosing them or confusing them with other findings can have irreversible consequences for patients. Long-term monitoring of lung opacity regions is therefore recommended. Automatically tracking the regional extent of opacities in images and distinguishing them from other lung conditions can substantially ease the physician's workload, and deep learning methods are well suited to the detection, classification, and segmentation of lung opacity. In this study, a three-channel fusion CNN model is applied to detect lung opacity effectively on a balanced dataset compiled from public datasets. The first channel uses the MobileNetV2 architecture, the second channel the InceptionV3 model, and the third channel the VGG19 architecture; a ResNet-style structure is used to transfer features from the previous layer to the current layer. In addition to being easy to implement, the proposed approach can offer physicians significant savings in cost and time. On the newly compiled dataset, the accuracy values for two-, three-, four-, and five-class lung opacity classification are 92.52%, 92.44%, 87.12%, and 91.71%, respectively.
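The abstract does not specify how the three channels are fused or how the ResNet-style feature transfer is wired, so the following is only a minimal illustrative sketch in Keras: three ImageNet-pretrained backbones (MobileNetV2, InceptionV3, VGG19) applied to the same input, their pooled features concatenated, and a residual dense block standing in for the described ResNet-style feature transfer. The input size, fusion by concatenation, layer widths, and classifier head are all assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, InceptionV3, VGG19


def build_fusion_model(input_shape=(224, 224, 3), num_classes=2):
    """Illustrative three-channel fusion CNN (assumed configuration)."""
    inputs = layers.Input(shape=input_shape)

    # Channel 1: MobileNetV2 backbone (ImageNet weights, frozen here for illustration)
    mob = MobileNetV2(include_top=False, weights="imagenet", input_shape=input_shape)
    mob.trainable = False
    f1 = layers.GlobalAveragePooling2D()(mob(inputs))

    # Channel 2: InceptionV3 backbone
    inc = InceptionV3(include_top=False, weights="imagenet", input_shape=input_shape)
    inc.trainable = False
    f2 = layers.GlobalAveragePooling2D()(inc(inputs))

    # Channel 3: VGG19 backbone
    vgg = VGG19(include_top=False, weights="imagenet", input_shape=input_shape)
    vgg.trainable = False
    f3 = layers.GlobalAveragePooling2D()(vgg(inputs))

    # Fuse the three feature vectors by concatenation (assumed fusion strategy)
    fused = layers.Concatenate()([f1, f2, f3])

    # ResNet-style residual block on the fused features:
    # a skip connection carries the earlier features forward to the current layer
    shortcut = layers.Dense(512)(fused)               # projection to match widths
    x = layers.Dense(512, activation="relu")(fused)
    x = layers.Dense(512)(x)
    x = layers.Add()([x, shortcut])                   # feature transfer via skip connection
    x = layers.Activation("relu")(x)

    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs)


# Example: two-class lung opacity classifier (change num_classes for 3-, 4-, or 5-class setups)
model = build_fusion_model(num_classes=2)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

Under this sketch, switching between the two-, three-, four-, and five-class experiments reported in the abstract would only require changing `num_classes` and supplying the corresponding labels; the fusion backbone itself stays the same.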
