Abstract

Fully connected multi-layer neural networks such as Deep Boltzmann Machines (DBM) perform better than fully connected single-layer neural networks on image classification tasks and require fewer hidden-layer neurons than Extreme Learning Machine (ELM) based fully connected multi-layer neural networks such as Multi-Layer ELM (ML-ELM) and Hierarchical ELM (H-ELM). However, ML-ELM and H-ELM have a smaller training time than DBM. This paper introduces a fully connected multi-layer neural network, referred to as Multi-Layer Multi-Objective Extreme Learning Machine (MLMO-ELM), which uses a multi-objective formulation to pass label and non-linear information in order to learn a network model with a similar number of hidden-layer parameters to DBM and a smaller training time than DBM. Experimental results show that MLMO-ELM outperforms DBM, ML-ELM and H-ELM on the OCR and NORB datasets.
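The training-time advantage of ELM-based networks comes from the standard ELM recipe: hidden weights are drawn at random and never trained, and only the output weights are fit in closed form by least squares. The sketch below illustrates this basic single-layer ELM; it is an assumption for illustration only and does not implement the paper's multi-layer, multi-objective MLMO-ELM formulation.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=32, seed=0):
    """Basic ELM training sketch (not the paper's MLMO-ELM):
    random hidden layer + closed-form least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never updated
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # non-linear hidden activations
    beta = np.linalg.pinv(H) @ Y                     # single pseudo-inverse solve, no backprop
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit XOR-style targets with 32 random hidden units
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta)
```

Because training reduces to one linear solve, an ELM layer is far cheaper to fit than the iterative, contrastive-divergence-style training a DBM requires, which is the trade-off the abstract describes.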
