The remarkable increase in published medical imaging datasets for chest X-rays has significantly improved the performance of deep learning techniques for classifying lung diseases. However, large datasets require special arrangements to make them suitable, accessible, and practically usable in remote clinics and emergency rooms, and they increase computational time and image-processing complexity. This study investigates the efficiency of converting 2D chest X-rays into one-dimensional texture representations using descriptive statistics and local binary patterns, enabling feed-forward neural networks to classify lung diseases quickly and cost-effectively. This method bridges diagnostic gaps in healthcare services and improves patient outcomes in remote hospitals and emergency rooms. It could also reinforce the crucial role of technology in advancing healthcare. We evaluated the approach on the Guangzhou and PA datasets. On the Guangzhou dataset, the one-dimensional texture representation achieved 99% accuracy with a training time of 10.85 s and a testing time of 0.19 s. On the PA dataset, it achieved 96% accuracy with a training time of 38.14 s and a testing time of 0.17 s, outperforming EfficientNet, EfficientNet-V2-Small, and MobileNet-V3-Small. Therefore, this study suggests that the one-dimensional texture representation is fast and effective for lung disease classification.
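
The sketch below illustrates the general idea described in the abstract, not the authors' exact pipeline: each 2D X-ray is reduced to a 1D texture vector built from a local binary pattern (LBP) histogram plus descriptive intensity statistics, which is then fed to a small feed-forward classifier. The LBP parameters, the particular statistics, and the network size here are illustrative assumptions.

```python
# Minimal sketch, assuming uniform LBP + basic intensity statistics as the
# 1D texture features and a small feed-forward network as the classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def texture_features(image, P=8, R=1):
    """Flatten a 2D grayscale X-ray into a 1D texture descriptor."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    # Histogram of uniform LBP codes ('uniform' yields P + 2 distinct codes).
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    # Descriptive statistics of the raw pixel intensities.
    stats = np.array([image.mean(), image.std(), np.median(image),
                      image.min(), image.max()])
    return np.concatenate([hist, stats])

# Synthetic stand-in data; replace with real chest X-ray images and labels.
rng = np.random.default_rng(0)
images = (rng.random((40, 64, 64)) * 255).astype(np.uint8)
labels = rng.integers(0, 2, size=40)   # e.g., 0 = normal, 1 = pneumonia

X = np.stack([texture_features(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, labels)
print(clf.predict(X[:5]))
```

Because each image collapses to a short feature vector, training and inference involve only a small dense network, which is consistent with the sub-minute training and sub-second testing times reported above.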