Abstract

Optimal machine learning performance is often hindered by limited, homogeneous training data, a challenge exacerbated by the small sample sizes common in real-world scenarios. In this study, we address this issue for classification tasks by integrating the Dropout technique into the Extreme Learning Machine (ELM) classifier. Our results show that Dropout-ELM mitigates overfitting, especially when data is scarce, and thereby improves generalization. Across extensive experiments on synthetic and real-world datasets, Dropout-ELM consistently outperforms traditional ELM, with accuracy improvements ranging from 0.19% to 16.20%. By applying dropout during training, we obtain models that rely less on any specific features or neurons, making them more adaptable and resilient across diverse datasets. Dropout-ELM is therefore a practical tool for countering overfitting and strengthening ELM-based classifiers, particularly when training data is limited, and offers a robust way to improve the reliability and generalization of machine learning models under such constraints.
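The idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, hyperparameters (64 hidden units, 30% drop rate), and the inverted-dropout scaling are illustrative assumptions. It shows the standard ELM recipe (random hidden weights, closed-form output weights via pseudoinverse) with a dropout mask applied to the hidden activations during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_dropout_elm(X, T, n_hidden=64, drop_rate=0.3):
    """Sketch of Dropout-ELM training: random hidden layer, dropout mask
    on hidden activations, output weights solved by least squares."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    mask = rng.random(H.shape) >= drop_rate          # dropout: zero neurons at random
    H_drop = H * mask / (1.0 - drop_rate)            # inverted-dropout rescaling
    beta = np.linalg.pinv(H_drop) @ T                # closed-form output weights
    return W, b, beta

def predict(X, W, b, beta):
    # At test time the full (undropped) hidden layer is used.
    return np.tanh(X @ W + b) @ beta

# Toy usage: a two-class problem with one-hot targets.
X = rng.standard_normal((100, 5))
y = (X[:, 0] > 0).astype(int)
T = np.eye(2)[y]
W, b, beta = fit_dropout_elm(X, T)
pred = predict(X, W, b, beta).argmax(axis=1)
```

Because the output weights are obtained in closed form, training remains as fast as standard ELM; dropout only perturbs the design matrix that the least-squares solve sees.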
