Abstract
Introduction: Thalassemia is widely recognized as a significant global health concern. Thalassemia and structural haemoglobin (Hb) abnormalities are among the most common monogenic disorders worldwide. If the condition is not recognized promptly, it can prove fatal in certain patient groups. It is therefore necessary to develop computational approaches for identifying thalassemia in its early stages. Machine Learning (ML) and Deep Learning (DL) have become crucial methods for identifying thalassemia, using either established or newly developed models. Objectives: This article provides a thorough and organized analysis of recent developments in ML, DL, and ensemble models for detecting thalassemia, a hereditary blood condition. Methods: The study examines research published within the past seven years, drawing on well-regarded published reviews and emphasizing notable technical progress in ML- and DL-based models. Results: The reviewed work is evaluated with respect to the classifiers used, the evaluation metrics reported, and the blood parameters considered. Conclusions: This study seeks to offer useful insights into current research trends and future prospects in applying ML and DL to the identification and treatment of thalassemia and related haematological disorders.