Acoustic Analysis of Chronic Obstructive Pulmonary Disorder using Transfer Learning - a Three-Class Problem

Similar Papers
  • Book Chapter
  • 10.1007/978-981-15-1366-4_33
A Multiclass Classification of Epileptic Activity in Patients Using Wavelet Decomposition
  • Jan 1, 2020
  • Daya Gupta + 3 more

Epilepsy is a chronic neurological disorder in which seizures occur unpredictably, causing loss of consciousness or severe convulsions over the entire body. The identification of epileptic seizure activity in electroencephalography (EEG) signals by manual inspection is error-prone and time-consuming. The proposed study uses the Discrete Wavelet Transform (DWT) to decompose EEG signals into frequency sub-bands, a subset of which is chosen for feature selection. Following the DWT decomposition, the proposed method calculates the standard deviation of each sub-band in the subset and feeds these values to the classifiers. This work investigated a three-class classification problem: assigning an EEG signal to one of (1) healthy subjects with eyes closed, (2) patients in the inter-ictal stage whose EEG was recorded from the hippocampal formation of the opposite hemisphere of the brain, and (3) patients experiencing seizure activity. The accuracy achieved in the proposed work is 98.45%, which beats the state-of-the-art accuracy on this three-class problem. Additionally, the proposed method achieves the highest accuracy of 100% in classifying normal EEG signals (eyes open and eyes closed) and seizure EEG signals in two separate experiments, which is comparable with existing state-of-the-art EEG signal classification techniques. The proposed work uses six different classifiers in each of the three experiments, where every classifier has been paired with eight Daubechies wavelets (db1 to db8). The results provide valuable insights, establishing that SVM performs best in most experiments, with the db4 wavelet achieving the highest accuracy among the eight wavelets.
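The pipeline this abstract describes (DWT decomposition into sub-bands, one standard deviation per sub-band, then a classifier) can be sketched compactly. The snippet below is an illustrative reconstruction, not the authors' code: it hand-rolls a db1 (Haar) decomposition rather than the full db1-to-db8 setup, and a synthetic signal stands in for EEG data.

```python
import numpy as np

def haar_dwt_level(x):
    # one level of db1 (Haar) DWT: approximation and detail coefficients
    x = x[: len(x) // 2 * 2]             # drop a trailing odd sample
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_std_features(signal, levels=4):
    # decompose into sub-bands and keep the standard deviation of each,
    # mirroring the std-per-sub-band feature vector described above
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt_level(approx)
        feats.append(detail.std())
    feats.append(approx.std())           # final approximation band
    return np.array(feats)

rng = np.random.default_rng(0)
eeg = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.1 * rng.standard_normal(4096)
print(dwt_std_features(eeg, levels=4).shape)  # (5,)
```

The resulting five-element vector (four detail bands plus the final approximation) would be the per-signal feature handed to an SVM or any other classifier.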

  • Conference Article
  • Cited by 1
  • 10.1109/iisa50023.2020.9284370
Development of Convolutional Neural Network-based models for bone metastasis classification in nuclear medicine
  • Jul 15, 2020
  • Nikolaos I Papandrianos + 5 more

Focusing on prostate cancer patients, this research paper addresses the problem of bone metastasis diagnosis, investigating the capabilities of convolutional neural networks (CNNs) and transfer learning. Given the wide applicability of CNNs in medical image classification, VGG16 and DenseNet, two efficient deep neural network architectures, are exploited for image recognition, classifying an image by extracting its insightful features. The purpose of this study is to explore the capabilities of transfer learning applied to VGG16 and DenseNet so as to classify bone scintigraphy images of patients suffering from prostate cancer. Efficient VGG16 and DenseNet architectures were built through a CNN exploration process for bone metastasis diagnosis and then employed to identify metastases in the bone scintigraphy image data. The classification task is a three-class problem, labelling images as normal, malignant, or healthy with degenerative changes. The results revealed that both methods are sufficiently accurate to differentiate metastatic bone from degenerative changes as well as from normal tissue.
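The transfer-learning mechanic this entry relies on (freeze a pretrained feature extractor, train only a new classification head) can be illustrated without the actual VGG16/DenseNet weights. In the sketch below a fixed random projection stands in for the frozen backbone, purely to show the training setup; `W_backbone`, `embed`, and `train_head` are our illustrative names, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (VGG16 / DenseNet in the paper):
# a fixed random projection with ReLU, mapping inputs to feature embeddings.
W_backbone = rng.normal(size=(8, 32))

def embed(X):
    return np.maximum(X @ W_backbone, 0.0)   # frozen, never updated

def train_head(X, y, ridge=1e-3):
    # fit only a small linear head on the frozen features (here by ridge
    # regression on +/-1 targets): the core transfer-learning move of
    # reusing a pretrained extractor and training a new classifier on top
    F = embed(X)
    t = 2.0 * y - 1.0
    return np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ t)

X = rng.normal(size=(120, 8))
y = (embed(X) @ rng.normal(size=32) > 0).astype(float)   # toy labels
w = train_head(X, y)
acc = float(((embed(X) @ w > 0) == y).mean())
print(round(acc, 2))
```

Because the labels are expressible in the frozen feature space, the head alone fits them well; in the real pipeline the backbone weights come from ImageNet-scale pretraining rather than a random matrix.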

  • Research Article
  • Cited by 25
  • 10.3847/1538-4365/abe85e
Finding Quasars behind the Galactic Plane. I. Candidate Selections with Transfer Learning
  • Apr 23, 2021
  • The Astrophysical Journal Supplement Series
  • Yuming Fu + 6 more

Quasars behind the Galactic plane (GPQs) are important astrometric references and useful probes of Milky Way gas. However, the search for GPQs is difficult due to large extinctions and high source densities in the Galactic plane. Existing selection methods for quasars developed using high Galactic latitude (high-b) data cannot be applied to the Galactic plane directly because the photometric data obtained from high-b regions and the Galactic plane follow different probability distributions. To alleviate this data set shift problem for quasar candidate selection, we adopt a transfer-learning framework at both the data and algorithm levels. At the data level, to make a training set in which a data set shift is modeled, we synthesize quasars and galaxies behind the Galactic plane based on SDSS sources and the Galactic dust map. At the algorithm level, to reduce the effect of class imbalance, we transform the three-class classification problem for stars, galaxies, and quasars into two binary classification tasks. We apply the XGBoost algorithm to Pan-STARRS1 (PS1) and AllWISE photometry for classification and an additional cut on Gaia proper motion to remove stellar contaminants. We obtain a reliable GPQ candidate catalog with 160,946 sources located at ∣b∣ ≤ 20° in the PS1-AllWISE footprint. Photometric redshifts of GPQ candidates achieved with the XGBoost regression algorithm show that our selection method can identify quasars in a wide redshift range (0 < z ≲ 5). This study extends the systematic searches for quasars to the dense stellar fields and shows the feasibility of using astronomical knowledge to improve data mining under complex conditions in the big-data era.
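The class-imbalance trick described above, turning one three-class problem (stars, galaxies, quasars) into two binary tasks, can be sketched generically. In this toy version, nearest-centroid classifiers stand in for the paper's XGBoost models, and the synthetic 2-D "photometric" clusters are purely illustrative.

```python
import numpy as np

class CentroidBinary:
    # toy stand-in for the paper's XGBoost binary classifiers
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

rng = np.random.default_rng(1)
stars    = rng.normal([0, 0], 0.3, (200, 2))
galaxies = rng.normal([3, 0], 0.3, (200, 2))
quasars  = rng.normal([0, 3], 0.3, (50, 2))    # minority class

# Task 1: quasar (1) vs star (0); Task 2: quasar (1) vs galaxy (0)
qs = CentroidBinary().fit(np.vstack([stars, quasars]),
                          np.r_[np.zeros(200), np.ones(50)])
qg = CentroidBinary().fit(np.vstack([galaxies, quasars]),
                          np.r_[np.zeros(200), np.ones(50)])

def is_quasar_candidate(X):
    # a source is kept only if both binary classifiers vote "quasar"
    return (qs.predict(X) == 1) & (qg.predict(X) == 1)

print(is_quasar_candidate(np.array([[0.0, 3.0], [0.0, 0.0], [3.0, 0.0]])))
```

Only the quasar-like point survives both votes; each binary task sees a far less imbalanced training set than the original three-class problem would.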

  • Research Article
  • Cited by 65
  • 10.1016/j.media.2018.07.010
Building medical image classifiers with very limited data using segmentation networks.
  • Aug 4, 2018
  • Medical Image Analysis
  • Ken C.L Wong + 2 more

  • Research Article
  • Cited by 1
  • 10.13005/bpj/3079
Deep Learning-Based Feature Extraction and Machine Learning Models for Parkinson's Disease Prediction Using DaTscan Image
  • Jan 20, 2025
  • Biomedical and Pharmacology Journal
  • Janmejay Pant + 5 more

Parkinson's disease (PD) is a chronic, non-fatal, and well-known progressive neurological disorder whose symptoms often overlap with those of other diseases. Effective treatment also requires accurate and early diagnosis, so that patients can lead healthy and productive lives. The main PD signs are resting tremor, muscular rigidity, akinesia, postural instability, and non-motor symptoms. Clinician-completed checklists have traditionally been an essential approach to monitoring and evaluating PD, but because its symptoms overlap with those of other disorders, accurate and timely diagnosis can be difficult. To improve classification performance, this study investigates transfer learning, which uses models pre-trained on massive datasets to extract features. Transfer learning improves generalization and permits domain adaptation, especially for small or resource-constrained datasets, while lowering training time, resource needs, and overfitting concerns. This work aims to design and assess a general transfer learning paradigm for the reliable prognosis of Parkinson's disease from DaTscan images, considering both feature extraction and the performance of a variety of ML algorithms; specifically, it explores the use of transfer learning with pre-trained deep learning models to extract features from DaTscan images in order to improve classification accuracy. The sample of this study is made up of 594 DaTscan images from 68 participants, 43 with PD and 26 healthy. The transfer learning-based features were fed to four algorithms: Random Forest, Neural Network, Logistic Regression, and Gradient Boosting.
Six performance indices, namely Area Under the Curve (AUC), Classification Accuracy (CA), F1 score, Precision, Recall, and Matthews Correlation Coefficient (MCC), were used to evaluate the four machine learning models (Random Forest, Neural Network, Logistic Regression, and Gradient Boosting) on the PD classification task. Neural networks outperformed the other models, showing robustness and reliability with an AUC of 0.996, CA of 0.973, and MCC of 0.946. Gradient Boosting performed competitively, coming in second with an AUC of 0.995 and MCC of 0.925. Random Forest performed the worst, with an AUC of 0.986 and an MCC of 0.905, whereas Logistic Regression had an AUC of 0.991 and an MCC of 0.926. These results demonstrate how well neural networks perform in high-precision tasks and point to gradient boosting as a more computationally efficient option.

  • Research Article
  • Cited by 12
  • 10.1016/j.ecoinf.2024.102710
Leveraging transfer learning and active learning for data annotation in passive acoustic monitoring of wildlife
  • Jul 10, 2024
  • Ecological Informatics
  • Hannes Kath + 4 more

Passive Acoustic Monitoring (PAM) has emerged as a pivotal technology for wildlife monitoring, generating vast amounts of acoustic data. However, the successful application of machine learning methods for sound event detection in PAM datasets heavily relies on the availability of annotated data, which can be laborious to acquire. In this study, we investigate the effectiveness of transfer learning and active learning techniques to address the data annotation challenge in PAM. Transfer learning allows us to use pre-trained models from related tasks or datasets to bootstrap the learning process for sound event detection. Furthermore, active learning promises strategic selection of the most informative samples for annotation, effectively reducing the annotation cost and improving model performance. We evaluate an approach that combines transfer learning and active learning to efficiently exploit existing annotated data and optimize the annotation process for PAM datasets. Our transfer learning observations show that embeddings produced by BirdNet, a model trained on high signal-to-noise recordings of bird vocalisations, can be effectively used for predicting anurans in PAM data: a linear classifier constructed using these embeddings outperforms the benchmark by 21.7%. Our results indicate that active learning is superior to random sampling, although no clear winner emerges among the strategies employed. The proposed method holds promise for facilitating broader adoption of machine learning techniques in PAM and advancing our understanding of biodiversity dynamics through acoustic data analysis.

  • Research Article
  • 10.1093/ehjci/jeae333.009
Transfer learning for echocardiographic detection of heart failure with preserved ejection fraction: preliminary results of TALE-HFpEF Study
  • Jan 29, 2025
  • European Heart Journal - Cardiovascular Imaging
  • G Babur Guler + 7 more

Background Heart failure with preserved ejection fraction (HFpEF) is a heterogeneous syndrome with increasing prevalence (1). The diagnosis of HFpEF is complex and has not yet reached a consensus in current guidelines, and attempts are being made to diagnose it through various algorithms and scoring systems (2, 3). However, the uncertainties in the diagnostic process and the inherent complexity continue to pose significant barriers to practical implementation. The use of artificial intelligence on single apical 4-chamber transthoracic echocardiography video clips for HFpEF detection has shown success (4), but knowledge from readily available models trained for different tasks has not been utilized. Purpose This study aims to utilize transfer learning, an artificial intelligence method, to detect HFpEF using echocardiography images. Methods In this preliminary analysis, echocardiography video clips were collected from 40 healthy volunteers and 53 HFpEF patients, all over 18 years old. The diagnosis of HFpEF was made in accordance with the current ESC guidelines (3). Apical 4-chamber transthoracic echocardiography images of the patients and volunteers included in the study were obtained and analyzed. Patients with chronic obstructive pulmonary disease, recent myocardial infarction (last 6 months), or recent stroke/cerebrovascular disease (last 3 months) were excluded. Transfer learning was applied using a video ResNet model (6), adapted for left and right ventricle ejection fraction (LVEF and RVEF) prediction tasks, along with a non-medical video classification task (Kinetics 400) (6-8). A 5-fold cross-validation schema was used, and models were compared on balanced accuracy with a right-tailed t-test. Results Compared with the control group, the HFpEF group shows higher rates of hypertension, diabetes, and atrial fibrillation, as well as higher NT-proBNP levels.
The paired one-tailed t-test confirmed the significant superiority of all transfer learning models over the baseline model (p < 0.005). The model transferred from the LVEF regression task achieved an AUC of 0.95 ± 0.04 and an F1 score of 0.93 ± 0.04 (Figure 1), demonstrating superior performance. Statistical analysis indicated no significant variation in balanced accuracy among the models (p > 0.05). Figure 1 also depicts ROC curves of the models initialized with different task weights. Figure 2 illustrates the locations where the models focus before and after training, using the Grad-CAM method (9). The LVEF model achieved 92% accuracy in identifying HFpEF patients, with 95% sensitivity and 90% specificity. Conclusion The preliminary results of our study are promising for the diagnosis of HFpEF patients from echocardiographic clips with transfer learning. As the sample size grows over the course of our study, this model could become a key tool in clinical practice for detecting HFpEF patients, potentially enhancing AI's role in diagnosing this challenging patient group.
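Balanced accuracy, the metric used to compare models in this study, is simply the mean of per-class recalls, which makes it robust to the mild imbalance between 40 controls and 53 patients. A minimal sketch with made-up fold predictions (not study data):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    # mean of per-class recalls: each class contributes equally,
    # regardless of how many samples it has
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(recalls))

# hypothetical fold predictions: 1 = HFpEF, 0 = control
y_true = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 1, 1, 1, 0]
print(balanced_accuracy(y_true, y_pred))  # (3/4 + 5/6) / 2 ≈ 0.792
```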

  • Supplementary Content
  • Cited by 110
  • 10.1007/s10489-020-01831-z
Detection of COVID-19 using CXR and CT images using Transfer Learning and Haralick features
  • Aug 12, 2020
  • Applied Intelligence (Dordrecht, Netherlands)
  • Varalakshmi Perumal + 2 more

Recognition of COVID-19 is a challenging task that consistently requires examining clinical images of patients. In this paper, the transfer learning technique has been applied to clinical images of different types of pulmonary diseases, including COVID-19. It is found that COVID-19 is very similar to pneumonia lung disease, and further analysis identifies the type of pneumonia most similar to COVID-19. Transfer learning makes it possible to establish that viral pneumonia presents much like COVID-19, showing that the knowledge gained by a model trained to detect viral pneumonia can be transferred to identifying COVID-19. Transfer learning shows a significant difference in results when compared with the outcomes of conventional classification: there is no need to create a separate model for classifying COVID-19, as conventional approaches do, which makes this herculean task easier by reusing an existing model. Second, it is difficult to detect abnormal features in images due to the noise impedance from lesions and tissues. For this reason, texture feature extraction is accomplished using Haralick features, which focus only on the area of interest, to detect COVID-19 using statistical analyses. Hence, there is a need for a model that predicts COVID-19 cases as early as possible to control the spread of the disease. We propose a transfer learning model to quicken the prediction process and assist medical professionals. The proposed model outperforms the other existing models, making a time-consuming process easier and faster for radiologists, reducing the spread of the virus, and saving lives.
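Haralick features are statistics of a gray-level co-occurrence matrix (GLCM). The sketch below computes the GLCM for horizontally adjacent pixels and two classic Haralick statistics (contrast and energy); it is a minimal, generic illustration, independent of this paper's exact feature set.

```python
import numpy as np

def glcm(img, levels=8):
    # gray-level co-occurrence matrix for horizontally adjacent pixels,
    # normalized to a joint probability table
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def haralick_contrast_energy(img, levels=8):
    p = glcm(img, levels)
    idx = np.arange(levels)
    contrast = sum(p[i, j] * (i - j) ** 2 for i in idx for j in idx)
    energy = (p ** 2).sum()
    return float(contrast), float(energy)

flat = np.zeros((8, 8), dtype=int)               # uniform region, no texture
checker = (np.indices((8, 8)).sum(0) % 2) * 7    # alternating 0/7 pattern
print(haralick_contrast_energy(flat))     # (0.0, 1.0)
print(haralick_contrast_energy(checker))  # (49.0, 0.5)
```

The uniform patch has zero contrast and maximal energy, while the checkerboard maximizes contrast for this gray-level range, which is exactly the kind of texture difference such features capture.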

  • Research Article
  • Cited by 8
  • 10.3389/fbioe.2022.806761
Data Augmentation and Transfer Learning for Data Quality Assessment in Respiratory Monitoring.
  • Feb 14, 2022
  • Frontiers in Bioengineering and Biotechnology
  • Andrea Rozo + 12 more

Changes in respiratory rate have been found to be one of the early signs of health deterioration in patients. In remote environments where diagnostic tools and medical attention are scarce, such as deep space exploration, monitoring of the respiratory signal becomes crucial to timely detect life-threatening conditions. Nowadays, this signal can be measured using wearable technology; however, the use of such technology is often hampered by the low quality of the recordings, which often leads to wrong diagnoses and conclusions. Therefore, to use these data in diagnostic analysis, it is important to determine which parts of the signal are of sufficient quality. In this context, this study aims to evaluate the performance of a signal quality assessment framework in which two machine learning algorithms (a support vector machine, SVM, and a convolutional neural network, CNN) were used. The models were pre-trained using data from patients suffering from chronic obstructive pulmonary disease. The generalization capability of the models was evaluated by testing them on data from a different patient population, presenting normal and pathological breathing. The new patients had undergone bariatric surgery and performed a controlled breathing protocol, displaying six different breathing patterns. Data augmentation (DA) and transfer learning (TL) were used to increase the size of the training set and to optimize the models for the new dataset. The effect of the different breathing patterns on the performance of the classifiers was also studied. The SVM did not improve when using DA; however, when using TL, its performance improved significantly (p < 0.05) compared to DA. The opposite effect was observed for the CNN, where the biggest improvement was obtained using DA, while TL did not show a significant change. The models presented low performance for shallow, slow, and fast breathing patterns.
These results suggest that it is possible to classify respiratory signals obtained with wearable technologies using pre-trained machine learning models. This will allow focusing on the relevant data and avoid misleading conclusions because of the noise, when designing bio-monitoring systems.
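The data-augmentation side of such a pipeline is typically a set of label-preserving signal transforms. The sketch below shows three common 1-D augmentations (additive noise, amplitude scaling, circular time shift) on a synthetic breathing-like signal; it is illustrative, not the authors' exact DA scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(sig, n_copies=3):
    # simple label-preserving transforms used to enlarge
    # physiological-signal training sets
    out = []
    for _ in range(n_copies):
        s = sig + rng.normal(0, 0.02 * sig.std(), sig.shape)   # jitter
        s = s * rng.uniform(0.9, 1.1)                          # scaling
        s = np.roll(s, rng.integers(-len(s) // 10, len(s) // 10))  # shift
        out.append(s)
    return np.stack(out)

resp = np.sin(np.linspace(0, 6 * np.pi, 1000))   # ~3 breathing cycles
batch = augment(resp, n_copies=5)
print(batch.shape)  # (5, 1000)
```

Each augmented copy is a plausible variant of the original recording, so a quality-assessment classifier trained on the enlarged set sees more of the variability it will meet in wearable data.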

  • Conference Article
  • Cited by 2
  • 10.1109/icecaa55415.2022.9936570
Non-Uniform Filter Bank Visualization based Prediction Models for Respiratory Signals
  • Oct 13, 2022
  • S Jayalakshmy + 2 more

Chronic respiratory disorders (CRDs) and lower respiratory illnesses are common, widespread conditions characterized by obstruction of airflow in the alveoli. Listening to breath sounds of the respiratory system is a traditional technique for diagnosing chronic disorders in patients. However, the outcome of the diagnosis can be sensed only by a skilled therapist, which imposes limitations on quantifiable results and has urged the development of technology-driven tools for detecting these disorders. Most earlier studies focused on analyzing breath sounds using different filter banks (FBs), such as Discrete Cosine Transform (DCT) FBs, wavelet FBs, Mel FBs, and spectrograms, for classification with CNN deep learning models. Because their frequency characteristics closely match those of the human ear, octave FBs and crossover FBs are proposed in this work. The filter coefficients extracted using the proposed FBs are transformed into spectrogram time-frequency visualizations and classified using three transfer learning (TL) models, viz. GoogLeNet, SqueezeNet, and ResNet-50. Comparative results with the existing method reveal that the proposed FBs produce a significant improvement, with accuracies of 90.63% and 90.89% for the ResNet-50 classifier.
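Octave filter banks split the spectrum into bands whose width doubles from one band to the next, mimicking the ear's frequency resolution. A crude FFT-based sketch of octave-band energies (not the paper's filter implementation; the band edges and tone are illustrative):

```python
import numpy as np

def octave_band_energies(sig, fs, f0=62.5, n_bands=6):
    # energy in octave-spaced bands [f0, 2*f0), [2*f0, 4*f0), ...
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    lows = f0 * 2.0 ** np.arange(n_bands)
    return np.array([spec[(freqs >= lo) & (freqs < 2 * lo)].sum()
                     for lo in lows])

fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 100 * t)   # pure tone inside the 62.5-125 Hz band
energies = octave_band_energies(sig, fs)
print(int(np.argmax(energies)))     # 0 -> lowest band dominates
```

Arranging such band energies over successive frames yields the time-frequency visualization that the TL networks then classify.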

  • Research Article
  • Cited by 40
  • 10.1016/j.compbiomed.2021.104435
An incremental learning approach to automatically recognize pulmonary diseases from the multi-vendor chest radiographs
  • May 8, 2021
  • Computers in Biology and Medicine
  • Mehreen Sirshar + 3 more

  • Research Article
  • Cited by 13
  • 10.3390/healthcare10060987
Pneumonia Transfer Learning Deep Learning Model from Segmented X-rays.
  • May 26, 2022
  • Healthcare (Basel, Switzerland)
  • Amal H Alharbi + 1 more

Pneumonia is a common disease in many countries, especially poor ones. It is an obstructive pneumonia that has the same impression on pulmonary radiographs as other pulmonary diseases, which makes it hard to distinguish even for medical radiologists. Lately, image processing and deep learning models have been established to rapidly and precisely diagnose pneumonia. In this research, we predict pneumonia dependably from X-ray images, employing image segmentation and machine learning models. A public labelled database is utilized with 4000 pneumonia X-rays and 4000 healthy X-rays. ImgNet and SqueezeNet are utilized for transfer learning from their previously computed weights. The proposed deep learning models are trained to classify pneumonia and non-pneumonia cases. The following processes are presented in this paper: X-ray segmentation utilizing the BoxENet architecture, and X-ray classification utilizing the segmented chest images. We propose an improved BoxENet model that incorporates transfer learning from both ImgNet and SqueezeNet using a majority fusion model. Performance metrics such as accuracy, specificity, sensitivity, and Dice are evaluated. The proposed Improved BoxENet model outperforms the other models in both binary and multi-class classification. Additionally, the Improved BoxENet has higher speed compared to the other models in both training and classification.
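Majority fusion of several models' predictions reduces to a per-sample vote. A minimal sketch with hypothetical binary predictions from three models (the arrays are made up, not the paper's outputs):

```python
import numpy as np

def majority_fusion(*pred_sets):
    # majority vote over binary predictions from several models,
    # in the spirit of the fusion scheme described above
    votes = np.stack(pred_sets)
    return (votes.sum(axis=0) > votes.shape[0] / 2).astype(int)

# hypothetical per-image predictions from three models
m1 = np.array([1, 0, 1, 1, 0])
m2 = np.array([1, 1, 0, 1, 0])
m3 = np.array([0, 0, 1, 1, 1])
print(majority_fusion(m1, m2, m3))  # [1 0 1 1 0]
```

With an odd number of voters there are no ties, and a single model's mistake on an image is outvoted whenever the other two agree.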

  • Research Article
  • Cited by 24
  • 10.3390/s20236711
Internet of Medical Things: An Effective and Fully Automatic IoT Approach Using Deep Learning and Fine-Tuning to Lung CT Segmentation.
  • Nov 24, 2020
  • Sensors (Basel, Switzerland)
  • Luís Fabrício De Freitas Souza + 7 more

Several pathologies have a direct impact on society, causing public health problems. Pulmonary diseases such as chronic obstructive pulmonary disease (COPD) are already the third leading cause of death in the world, leaving tuberculosis at ninth with 1.7 million deaths and over 10.4 million new occurrences. The detection of lung regions in images is a classic medical challenge. Studies show that computational methods contribute significantly to the medical diagnosis of lung pathologies by Computerized Tomography (CT), as well as through Internet of Things (IoT) methods applied in the health-of-things context. The present work proposes a new IoT-based model for classification and segmentation of pulmonary CT images, applying the transfer learning technique in deep learning methods combined with Parzen's probability density. The proposed model uses an Application Programming Interface (API) based on the Internet of Medical Things to classify lung images. The approach was very effective, with results above 98% accuracy for classification of pulmonary images. The model then proceeds to the lung segmentation stage, using the Mask R-CNN network to create a pulmonary map and fine-tuning to find the pulmonary borders on the CT image. The experiment was a success: the proposed method performed better than other works in the literature, reaching high segmentation metric values such as an accuracy of 98.34%. Besides reaching 5.43 s in segmentation time and overcoming other transfer learning models, our methodology stands out because it is fully automatic. The proposed approach has simplified the segmentation process using transfer learning, introducing a faster and more effective method for better-performing lung segmentation and making our model fully automatic and robust.

  • Research Article
  • 10.35377/saucis...1582098
Diagnosis of Lichen Sclerosus, Morphea, and Vasculitis Using Deep Learning Techniques on Histopathological Skin Images
  • Jun 30, 2025
  • Sakarya University Journal of Computer and Information Sciences
  • Recep Güler + 3 more

Skin diseases are very common all over the world. Examination can be done by photographing the relevant area or by taking a tissue sample, which allows diagnosis at the cellular level. This study discussed three skin diseases: lichen sclerosus, morphea, and cutaneous small vessel vasculitis (vasculitis). For this problem, which has no open-access dataset in the literature, a dataset consisting of histopathological images of each class was created. Convolutional neural network models were created for this three-class classification problem, and their results were evaluated. In addition, for this problem where it is difficult to obtain sample images, the efficiency of transfer learning methods was evaluated with a limited number of examples. For this purpose, tests were performed with the VGG16, ResNet50, InceptionV3, and EfficientNetB4 models, and the results were reported. Among all the results, the VGG16 model gave the best accuracy, 0.9755. However, although the accuracy value was quite good, the precision, recall, and F1-score values were around 0.65, which reveals deficiencies in how often the model correctly predicts the positive class and how well it recovers all positive examples in the dataset.
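The gap the authors note between high accuracy and precision/recall/F1 around 0.65 is a classic class-imbalance effect. A small numeric sketch (hypothetical counts, not the paper's data) makes it concrete:

```python
import numpy as np

def prf(y_true, y_pred):
    # accuracy, precision, recall, F1 for a binary positive class,
    # illustrating how accuracy can stay high while F1 collapses
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = ((y_true == 1) & (y_pred == 1)).sum()
    fp = ((y_true == 0) & (y_pred == 1)).sum()
    fn = ((y_true == 1) & (y_pred == 0)).sum()
    acc = (y_true == y_pred).mean()
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# hypothetical: 97 negatives all predicted correctly, 3 positives mostly missed
y_true = np.r_[np.zeros(97), np.ones(3)]
y_pred = np.r_[np.zeros(97), [1, 0, 0]]
acc, prec, rec, f1 = prf(y_true, y_pred)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.98 1.0 0.333 0.5
```

Getting the large negative class right keeps accuracy near 0.98 even though two of three positives are missed, which is why the per-class metrics are the ones to trust here.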

  • Research Article
  • Cited by 15
  • 10.7717/peerj-cs.614
Modeling a deep transfer learning framework for the classification of COVID-19 radiology dataset.
  • Aug 3, 2021
  • PeerJ Computer Science
  • Michael Adebisi Fayemiwo + 10 more

Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-Coronavirus-2 or SARS-CoV-2), which emerged in 2019, is the virus behind the coronavirus disease 2019 (COVID-19) pandemic, which caused widespread illness and death. Research showed that relentless efforts had been made to improve key performance indicators for detection, isolation, and early treatment. This paper used a Deep Transfer Learning (DTL) model for the classification of a real-life COVID-19 dataset of chest X-ray images in both binary (COVID-19 or Normal) and three-class (COVID-19, Viral Pneumonia, or Normal) classification scenarios. Four experiments were performed in which fine-tuned VGG-16 and VGG-19 Convolutional Neural Networks (CNNs) with DTL were trained on both binary and three-class datasets of X-ray images. The fine-tuned VGG-16 and VGG-19 DTL models were trained with a batch size of 10 for 40 epochs, the Adam optimizer for weight updates, and the categorical cross-entropy loss function. The results showed that the fine-tuned VGG-16 and VGG-19 models produced accuracies of 99.23% and 98.00%, respectively, in the binary task. In the multiclass (three-class) task, the fine-tuned VGG-16 and VGG-19 DTL models produced accuracies of 93.85% and 92.92%, respectively. Moreover, the fine-tuned VGG-16 and VGG-19 models have MCCs of 0.98 and 0.96, respectively, in the binary classification, and 0.91 and 0.89 in the multiclass classification. These results showed strong positive correlations between the models' predictions and the true labels. In both classification tasks (binary and three-class), the fine-tuned VGG-16 DTL model had stronger positive correlations on the MCC metric than the fine-tuned VGG-19 DTL model.
The VGG-16 DTL model has a Kappa value of 0.98, against 0.96 for the VGG-19 DTL model, in the binary classification task; in the three-class problem, the VGG-16 DTL model has a Kappa value of 0.91 against 0.89 for the VGG-19 DTL model. This result agrees with the trend observed in the MCC metric. Hence, the VGG-16-based DTL model classified COVID-19 better than the VGG-19-based DTL model. Using the best-performing fine-tuned VGG-16 DTL model, tests were carried out on a dataset of 470 unlabeled images that was not used in the model training and validation processes. The test accuracy obtained for the model was 98%. The proposed models provided accurate diagnostics for both the binary and multiclass classifications, outperforming other existing models in the literature in terms of accuracy, as shown in this work.
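MCC and Cohen's kappa, the agreement metrics reported in this entry, are both simple functions of the confusion matrix. A sketch with hypothetical counts (not the paper's data):

```python
import numpy as np

def mcc_binary(tp, tn, fp, fn):
    # Matthews correlation coefficient from binary confusion counts
    num = tp * tn - fp * fn
    den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def cohens_kappa(cm):
    # Cohen's kappa from a confusion matrix (rows: true, cols: predicted)
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical counts for a binary COVID-19 vs Normal task
print(round(mcc_binary(tp=95, tn=97, fp=3, fn=5), 3))   # ~0.92
print(round(cohens_kappa([[95, 5], [3, 97]]), 3))       # ~0.92
```

On a balanced matrix like this the two metrics nearly coincide, which matches the abstract's observation that the Kappa ranking tracks the MCC ranking.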
