Abstract

Neonatal jaundice is a common condition worldwide. Failure of timely diagnosis and treatment can lead to death or brain injury. Current diagnostic approaches include a painful and time-consuming invasive blood test and non-invasive tests using costly transcutaneous bilirubinometers. Since periodic monitoring is crucial, multiple efforts have been made to develop non-invasive diagnostic tools using a smartphone camera. However, existing works rely on either skin or eye images and use statistical or traditional machine learning methods. In this paper, we adopt a deep transfer learning approach based on eye, skin, and fused images. We also trained well-known traditional machine learning models, including multi-layer perceptron (MLP), support vector machine (SVM), decision tree (DT), and random forest (RF), and compared their performance with that of the transfer learning model. We collected our dataset using a smartphone camera. Moreover, unlike most existing contributions, we report accuracy, precision, recall, F1-score, and area under the curve (AUC) for all experiments and analyze their statistical significance. Our results indicate that the transfer learning model performed best with skin images, while the traditional models achieved their best performance with eye and fused features. Further, we found that the transfer learning model with skin features performed comparably to the MLP model with eye features.
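
As a rough illustration of the transfer learning setup described in the abstract, the sketch below reuses an ImageNet-pretrained backbone with a small binary classification head. The backbone choice (MobileNetV2), input size, and hyperparameters are illustrative assumptions rather than the configuration reported in the paper; the same model could be trained separately on skin crops, eye crops, or fused images.

    # Minimal transfer learning sketch (TensorFlow/Keras); backbone choice,
    # image size, and hyperparameters are illustrative assumptions only.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_jaundice_classifier(input_shape=(224, 224, 3)):
        # Reuse an ImageNet-pretrained backbone and freeze its weights.
        backbone = tf.keras.applications.MobileNetV2(
            input_shape=input_shape, include_top=False, weights="imagenet")
        backbone.trainable = False

        # Small trainable head for the binary jaundiced / not-jaundiced decision.
        model = models.Sequential([
            backbone,
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.3),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy",
                               tf.keras.metrics.AUC(name="auc"),
                               tf.keras.metrics.Precision(),
                               tf.keras.metrics.Recall()])
        return model

    # Usage (hypothetical directory layout): one dataset per feature type,
    # e.g. skin crops, eye crops, or fused images.
    # train_ds = tf.keras.utils.image_dataset_from_directory(
    #     "jaundice_skin/train", image_size=(224, 224), batch_size=32)
    # model = build_jaundice_classifier()
    # model.fit(train_ds, epochs=10)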

Highlights

  • In contrast to previous studies such as [17], the transfer learning model achieved its best performance with skin features rather than eye features. A t-test showed that the model's performance with skin features was significantly better than with eye features at p < 0.05 with respect to accuracy, recall, F1 score, and area under the curve (AUC) (p = 0.04 for each of these measures), while no significant improvement was observed with respect to precision (a minimal sketch of this kind of significance test follows the highlights)

  • Our results suggest that traditional machine learning models trained on eye features performed significantly better than when trained on skin features, which mirrors the findings of previous studies [17]

  • The goal of this work was to investigate the effectiveness of transfer learning in diagnosing neonatal jaundice using different types of features, namely skin features, eye features, and a fusion of the two
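
The paired comparison mentioned in the first highlight can be illustrated with a short SciPy sketch. The per-fold accuracy values below are placeholders rather than the study's results; they only show the mechanics of a paired t-test at the 0.05 significance level.

    # Paired t-test sketch (SciPy); the per-fold scores are placeholders,
    # not results from the paper.
    from scipy import stats

    # Hypothetical per-fold accuracies of the transfer learning model
    # trained on skin features vs. eye features.
    acc_skin = [0.91, 0.89, 0.93, 0.90, 0.92]
    acc_eye  = [0.86, 0.88, 0.87, 0.85, 0.89]

    t_stat, p_value = stats.ttest_rel(acc_skin, acc_eye)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    # A p-value below 0.05 would indicate a statistically significant
    # difference between the two feature types for this metric.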


Introduction

Electronic Health (eHealth) is a relatively recent interdisciplinary research area that applies information technologies to improve healthcare processes and services. One of the critical areas of healthcare where eHealth has been successfully applied is diagnosis, in which a doctor examines symptoms to identify an illness or other health problem. Artificial intelligence, including machine learning and deep learning, has contributed to tackling multiple challenges in the diagnosis of different diseases. Since their emergence, a wide range of research has been carried out with breakthrough results [2,3,4,5]. Typically, images of the affected area are collected, and computer vision and image processing techniques are applied to extract features that are fed to the diagnostic models.
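
A minimal sketch of this generic image-to-diagnosis pipeline is shown below, assuming simple per-channel colour statistics as hand-crafted features and a scikit-learn classifier; the feature definition, file layout, and model choice are illustrative assumptions rather than the approach used in this work.

    # Generic image-to-diagnosis pipeline sketch; the colour-statistic
    # features and classifier choice are illustrative assumptions.
    import numpy as np
    from PIL import Image
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def colour_features(path):
        # Simple hand-crafted features: per-channel mean and standard
        # deviation of the region of interest (e.g., a skin or sclera crop).
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        return np.concatenate([rgb.mean(axis=(0, 1)), rgb.std(axis=(0, 1))])

    # Hypothetical file lists and labels (1 = jaundiced, 0 = healthy).
    # paths, labels = load_annotations("dataset.csv")
    # X = np.stack([colour_features(p) for p in paths])
    # y = np.asarray(labels)

    # Any traditional model (MLP, SVM, DT, RF) can be evaluated the same way.
    # clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc"))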
