Abstract

Machine learning has become a central research topic for demanding tasks in many real-world applications, and the neural network is one of the most widely preferred models. Although the approach was invented decades ago, it has only recently become popular owing to its strong results in many applications. Applying a neural network successfully requires model training, which is conventionally done with the backpropagation method; backpropagation, however, has many drawbacks. The Extreme Learning Machine (ELM) was proposed for training single-hidden-layer feedforward neural networks (SLFNs): it minimizes the training error over the whole training dataset with a one-shot calculation. Nevertheless, on datasets with a large number of input features, or high-dimensional datasets, the original ELM encounters several difficulties. One is that it has no learning process at the input layer, which leads to an incomplete representation of the data as it is transferred from one layer to the next. Another is training instability, which causes fluctuations in testing accuracy because the network's input weights are generated randomly. To circumvent these difficulties, an extended architecture, the Extended Extreme Learning Machine (X-ELM), is proposed. X-ELM uses ELM as an extension component to predict the outputs through an ensemble approach. The proposed framework extends ELM to more complex network structures, such as networks with multiple hidden layers or networks spanning multiple computing systems. The framework is applied to vehicle characteristic classification datasets, and the experimental results show that X-ELM achieves better testing accuracy than ELM in real-world applications.
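
The one-shot calculation referred to above is the standard ELM training step: hidden-layer weights are drawn at random and only the output weights are solved for, in closed form, via the Moore-Penrose pseudoinverse. The following is a minimal sketch of that baseline step only (not the authors' X-ELM extension); the dataset names X and T, the hidden-layer size, and the sigmoid activation are illustrative assumptions.

import numpy as np

def elm_train(X, T, n_hidden=64, seed=0):
    # X: (n_samples, n_features) inputs, T: (n_samples, n_outputs) targets.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never updated
    b = rng.normal(size=n_hidden)                  # random hidden biases, never updated
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                   # one-shot least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

Because W and b are fixed at random, repeated runs with different seeds give different test accuracies; this is the training instability the abstract attributes to the original ELM.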
