Abstract
Multi-Layer Perceptrons (MLP) trained using Back Propagation (BP) and Extreme Learning Machine (ELM) methodologies on highly non-linear, two-dimensional functions are compared and benchmarked. To ensure validity, identical numbers of trainable parameters were used for each approach. BP training combined with an MLP structure used many hidden layers, while ELM training can only be used on the Single Layer, Feed Forward (SLFF) neural network topology. For the same number of trainable parameters, ELM training was more efficient, using less time to train the network, while also being more effective in terms of the final value of the loss function.

Keywords: Extreme learning machine, Back Propagation, Neural networks, Non-linear function approximation
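The ELM approach summarized above can be illustrated with a minimal sketch: hidden-layer weights of an SLFF network are fixed at random values, and only the output weights are fitted, by least squares rather than by iterative BP. The target function, hidden-layer size, and tanh activation below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative highly non-linear 2-D target (assumption, not the paper's benchmark)
def f(x, y):
    return np.sin(3 * x) * np.cos(3 * y)

# Training samples drawn from [0, 1]^2
X = rng.uniform(0, 1, size=(1000, 2))
t = f(X[:, 0], X[:, 1])

# ELM: random input weights and biases are fixed, never updated during training
n_hidden = 200
W = rng.normal(size=(2, n_hidden))   # random input-to-hidden weights
b = rng.normal(size=n_hidden)        # random hidden biases
H = np.tanh(X @ W + b)               # hidden-layer activation matrix

# Only the output weights are "trained": one least-squares solve
# via the Moore-Penrose pseudoinverse, with no iterative BP updates
beta = np.linalg.pinv(H) @ t

# Evaluate approximation quality on held-out points
X_test = rng.uniform(0, 1, size=(200, 2))
pred = np.tanh(X_test @ W + b) @ beta
mse = float(np.mean((pred - f(X_test[:, 0], X_test[:, 1])) ** 2))
print(f"test MSE: {mse:.6f}")
```

Because training reduces to a single linear solve, this sketch mirrors the speed advantage the abstract reports for ELM over iterative BP at a matched parameter count.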