Abstract
Achieving optimal machine learning performance is often hindered by the limited availability of diverse datasets, a challenge exacerbated by the small sample sizes common in real-world scenarios. In this study, we address this issue in classification tasks by integrating the dropout technique into the Extreme Learning Machine (ELM) classifier. Our results show that Dropout-ELM mitigates overfitting, especially when data is scarce, and thereby improves generalization. In extensive experiments on synthetic and real-world datasets, Dropout-ELM consistently outperforms traditional ELM, with accuracy improvements ranging from 0.19% to 16.20%. By applying dropout during training, we encourage models that do not rely on specific features or neurons, which increases their adaptability and resilience across diverse datasets. Dropout-ELM therefore offers an effective way to counter overfitting and strengthen ELM-based classifiers, particularly when training data is limited, making it a valuable tool for improving the reliability and generalization of machine learning models under such constraints.
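The abstract does not specify the exact formulation, but a common way to combine dropout with an ELM is to apply a random mask to the hidden-layer activations before the output weights are solved by least squares. The sketch below illustrates this idea; the class name `DropoutELM`, the sigmoid hidden layer, the inverted-dropout scaling, and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class DropoutELM:
    """Minimal sketch of an ELM classifier with dropout on the hidden layer.

    ELM: random, fixed input weights + output weights solved analytically.
    Dropout here zeroes a random subset of hidden activations while the
    output weights are being fitted, discouraging co-adaptation.
    """

    def __init__(self, n_hidden=100, dropout_rate=0.2, seed=0):
        self.n_hidden = n_hidden
        self.dropout_rate = dropout_rate
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Sigmoid activation of the random (fixed) hidden layer.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        n_features = X.shape[1]
        # Random input weights and biases are drawn once and never trained.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)

        H = self._hidden(X)
        # Dropout mask on hidden activations (inverted dropout: rescale by keep prob).
        keep = 1.0 - self.dropout_rate
        mask = self.rng.random(H.shape) < keep
        H = H * mask / keep

        # One-hot targets, then solve output weights by least squares.
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self

    def predict(self, X):
        # No dropout at inference; the full hidden layer is used.
        scores = self._hidden(X) @ self.beta
        return self.classes_[np.argmax(scores, axis=1)]
```

In this sketch, dropout only affects the hidden-layer matrix used to solve for the output weights, so training remains a single least-squares fit rather than an iterative procedure.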