Abstract

Alopecia Areata (AA) is one of the most widespread diseases and is commonly classified and diagnosed with Computer Aided Diagnosis (CAD) models. Although CAD improves AA diagnosis, it has limited interoperability and requires skilled radiologists for medical image interpretation. This problem can be addressed by combining Deep Learning (DL) models with CAD to diagnose AA patients accurately. Many studies have relied on a single DL model, such as a Convolutional Neural Network (CNN), for medical imaging; such models produce independent results and involve many parameters, which limits their generalizability across datasets. To address this limitation, this work proposes an Ensemble Pre-Learned DL and Optimized Long Short-Term Memory (EPL-OLSTM) model for AA classification. Initially, healthy and AA scalp hair images are separately fed to pre-learned CNN structures, i.e. AlexNet, ResNet, and InceptionNet, to extract deep features. These features are then passed to the OLSTM, in which the Battle Royale Optimization (BRO) algorithm is applied to tune the LSTM's hyperparameters. The LSTM output is classified by a fuzzy-softmax layer into the corresponding AA classes: mild, moderate, and severe. Thus, the model can increase the accuracy of differentiating between healthy scalp hair and multiple AA classes. Finally, an extensive experiment on the Figaro1k (healthy scalp hair images) and DermNet (AA scalp hair images) datasets demonstrates that the EPL-OLSTM achieves 93.1% accuracy, outperforming state-of-the-art DL models.
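The pipeline described above (pre-learned CNN feature extraction followed by an LSTM classifier) can be illustrated with a minimal sketch. The following Python/PyTorch code is an assumption-based outline, not the authors' implementation: the backbone variants (AlexNet, ResNet-18, Inception-v3), feature dimensions, fixed LSTM hyperparameters (which the paper tunes with BRO), and the plain softmax standing in for the fuzzy-softmax are all illustrative choices.

```python
# Hedged sketch of the EPL-OLSTM pipeline: ensemble deep-feature extraction
# from three pre-trained CNNs, fused and classified by an LSTM head.
# Assumes PyTorch/torchvision; all layer and size choices are illustrative.
import torch
import torch.nn as nn
from torchvision import models


class EnsembleFeatureExtractor(nn.Module):
    """Extracts and concatenates deep features from three pre-learned CNNs."""

    def __init__(self):
        super().__init__()
        # Pre-learned backbones with their classification heads removed.
        self.alexnet = models.alexnet(weights="DEFAULT").features
        resnet = models.resnet18(weights="DEFAULT")
        self.resnet = nn.Sequential(*list(resnet.children())[:-1])
        inception = models.inception_v3(weights="DEFAULT")
        inception.fc = nn.Identity()
        self.inception = inception
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        # x: (batch, 3, 299, 299) scalp hair images.
        f1 = self.pool(self.alexnet(x)).flatten(1)   # AlexNet features (256-d)
        f2 = self.resnet(x).flatten(1)               # ResNet features (512-d)
        f3 = self.inception(x)                       # Inception features (2048-d)
        if not torch.is_tensor(f3):                  # training mode returns aux outputs too
            f3 = f3[0]
        return torch.cat([f1, f2, f3], dim=1)        # fused deep feature vector


class OLSTMClassifier(nn.Module):
    """LSTM head; hidden size and depth would be tuned by BRO in the paper,
    fixed placeholder values are used here."""

    def __init__(self, feat_dim=2816, hidden=128, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Plain softmax stands in for the paper's fuzzy-softmax.
        self.fc = nn.Linear(hidden, n_classes)  # healthy, mild, moderate, severe

    def forward(self, feats):
        # Treat each fused feature vector as a length-1 sequence.
        out, _ = self.lstm(feats.unsqueeze(1))
        return torch.softmax(self.fc(out[:, -1]), dim=1)


if __name__ == "__main__":
    extractor = EnsembleFeatureExtractor().eval()
    classifier = OLSTMClassifier()
    images = torch.randn(2, 3, 299, 299)            # dummy batch of scalp images
    with torch.no_grad():
        probs = classifier(extractor(images))
    print(probs.shape)                               # (2, 4) class probabilities
```

In this sketch the three backbones contribute a 256-, 512-, and 2048-dimensional vector respectively, giving the 2816-dimensional fused input to the LSTM; the actual feature sizes depend on which backbone variants the paper uses.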
