Abstract

Compared with conventional machine learning techniques, the extreme learning machine (ELM), which trains single-hidden-layer feedforward neural networks (SLFNs), offers faster learning speed and better generalization performance. However, like most representative supervised learning algorithms, ELM tends to produce biased decision models when datasets are imbalanced. In this paper, a two-stage weighted regularized ELM is proposed to address this issue. The original regularized ELM (RELM) was designed to handle the adverse effects of outliers, not the imbalanced learning problem; we therefore propose, in the first stage, a new weighted regularized ELM (WRELM) for class imbalance learning (CIL). Unlike the existing weighted ELM, which considers only the class distribution of the dataset, the proposed algorithm also focuses on hard, misclassified samples in the second stage: the focal loss function is adopted to update the weights, decreasing the weight of well-classified samples so that more attention is paid to the errors of difficult samples. The final decision is determined by the winner-take-all method. We evaluate the proposed method on 25 binary datasets and 10 multiclass datasets using 5-fold cross-validation. The results indicate that the proposed algorithm is an efficient method for CIL and outperforms other ELM-based CIL algorithms.
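The two-stage scheme the abstract describes can be sketched in code. The following is a minimal illustration, not the authors' exact formulation: function names, the sigmoid activation, and the specific weight formulas (inverse class frequency in stage 1, a focal-style factor (1 - p)^gamma in stage 2) are assumptions made for the sketch.

```python
import numpy as np

def train_wrelm(X, y, n_hidden=64, C=1.0, gamma=2.0, seed=0):
    """Hypothetical two-stage weighted regularized ELM sketch.
    Stage 1: per-sample weights from inverse class frequency (class imbalance).
    Stage 2: reweight by a focal-loss-style factor (1 - p)^gamma so that
    well-classified samples are down-weighted."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # one-vs-all targets in {-1, +1}, one column per class
    T = np.where(y[:, None] == classes[None, :], 1.0, -1.0)

    # random hidden layer (weights and biases are never trained in ELM)
    Win = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ Win + b)))  # sigmoid activation

    # Stage 1: class-imbalance weights w_i = 1 / (#samples in class of i)
    counts = {c: np.sum(y == c) for c in classes}
    w = np.array([1.0 / counts[c] for c in y])

    def solve(weights):
        # weighted regularized least squares:
        # beta = (H^T W H + I/C)^{-1} H^T W T
        Hw = H * weights[:, None]
        A = H.T @ Hw + np.eye(n_hidden) / C
        return np.linalg.solve(A, H.T @ (T * weights[:, None]))

    beta = solve(w)

    # Stage 2: focal-style update using the stage-1 model's confidence
    # on each sample's own class, then re-solve with the new weights
    scores = H @ beta
    idx = np.searchsorted(classes, y)
    p = 1.0 / (1.0 + np.exp(-scores[np.arange(len(y)), idx]))
    beta = solve(w * (1.0 - p) ** gamma)
    return Win, b, beta, classes

def predict(model, X):
    Win, b, beta, classes = model
    H = 1.0 / (1.0 + np.exp(-(X @ Win + b)))
    # winner-take-all: predict the class with the largest output
    return classes[np.argmax(H @ beta, axis=1)]
```

For example, on a synthetic 10:1 imbalanced two-class problem, `train_wrelm` fits both stages in closed form (two linear solves) and `predict` applies the winner-take-all rule.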
