Abstract

Unlike traditional machine learning methods, the Extreme Learning Machine (ELM) learns fast enough to support real-time network retraining. However, its performance can be degraded by the random initialization of the hidden-layer weights and biases, and poor input feature quality degrades it further, particularly on complex visual data. To overcome these issues, this paper proposes optimizing both the input features and the initial weights and biases. We fuse features extracted by a Convolutional Neural Network (CNN) and a Convolutional AutoEncoder (CAE) to improve input feature quality, and we initialize the weights and biases with our hybrid Grey Wolf Optimizer-Multi-Verse Optimizer (GWO-MVO) metaheuristic, applying four fitness functions built from the norm of the output weights, the error rate on the training set, and the error rate on the validation set. Our method is evaluated on image classification tasks using two benchmark datasets, CIFAR-10 and CIFAR-100. Since image quality varies in real-world applications, we train and test our models on both the original and noisy versions of each dataset. The results demonstrate that our method provides a robust and efficient alternative for image classification, offering improved accuracy and reduced overfitting.

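To illustrate the role such fitness functions play when a metaheuristic proposes candidate hidden-layer weights and biases for an ELM, the following minimal Python sketch scores one candidate. It is not the authors' implementation: the tanh activation, the ridge regularizer, the one-hot label encoding, and the weighting between the output-weight norm and the validation error rate are all assumptions made for illustration.

import numpy as np

def elm_fit(X, Y, W, b, reg=1e-3):
    # Solve the ELM output weights in closed form for fixed hidden weights W and biases b.
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    # Ridge-regularized least squares: beta = (H^T H + reg*I)^-1 H^T Y
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)

def fitness(X_tr, Y_tr, X_val, Y_val, W, b):
    # Example fitness combining the validation error rate with the norm of the output weights.
    beta = elm_fit(X_tr, Y_tr, W, b)
    pred = np.argmax(np.tanh(X_val @ W + b) @ beta, axis=1)
    err = np.mean(pred != np.argmax(Y_val, axis=1))   # assumes one-hot labels
    return err + 1e-4 * np.linalg.norm(beta)          # weighting of the two terms is an assumption

# Score one random candidate; an optimizer such as GWO-MVO would propose and compare many.
rng = np.random.default_rng(0)
X_tr, Y_tr = rng.normal(size=(200, 64)), np.eye(10)[rng.integers(0, 10, 200)]
X_val, Y_val = rng.normal(size=(50, 64)), np.eye(10)[rng.integers(0, 10, 50)]
W, b = rng.normal(size=(64, 128)), rng.normal(size=128)
print(fitness(X_tr, Y_tr, X_val, Y_val, W, b))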