Abstract

A Multilayer Perceptron (MLP) is a feedforward neural network model consisting of one or more hidden layers between the input and output layers. MLPs have been successfully applied to a wide range of problems in the fields of neuroscience, computational linguistics, and parallel distributed processing. While MLPs are highly successful at solving problems that are not linearly separable, two of the biggest challenges in their development and application are the local-minima problem and slow convergence when processing big data. To tackle these problems, this study proposes a Hybrid Chaotic Biogeography-Based Optimization (HCBBO) algorithm for training MLPs for big data analysis and processing. Four benchmark datasets are employed to investigate the effectiveness of HCBBO in training MLPs. The accuracy of the results and the convergence of HCBBO are compared to three well-known heuristic algorithms: (a) Biogeography-Based Optimization (BBO), (b) Particle Swarm Optimization (PSO), and (c) Genetic Algorithms (GA). The experimental results show that training MLPs with HCBBO outperforms the other three heuristic learning approaches for big data processing.

Highlights

  • The term big data [1,2,3] was coined to describe the phenomenon of the increasing size of massive datasets in scientific experiments, financial trading, and networks

  • While the two-layer feedforward neural network (FNN) is the most popular neural network used in practical applications, it is not suitable for solving nonlinear problems [7, 8]

  • Mirjalili et al. [13] employed the basic Biogeography-Based Optimization (BBO) algorithm to train a Multilayer Perceptron (MLP) using the first approach, and the results demonstrate that BBO is significantly better at avoiding local minima compared to the Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Ant Colony Optimization (ACO) algorithms


Summary

Introduction

The term big data [1,2,3] was coined to describe the phenomenon of the increasing size of massive datasets in scientific experiments, financial trading, and networks. Since the error surface of an MLP can contain multiple local minima, training is easily trapped in one of them rather than converging to the global minimum. This is a common problem in most gradient-based learning approaches, such as backpropagation (BP) based NNs [12]. Research [13] used 11 standard datasets to provide a comprehensive test bed for investigating the abilities of the BBO algorithm in training MLPs. In this paper, we propose a hybrid BBO with chaotic maps trainer (HCBBO) for MLPs. Our approach employs chaos theory to improve the performance of BBO with very little computational burden. The migration and mutation mechanisms are combined to enhance the exploration and exploitation abilities of BBO, and a novel migration operator is proposed to improve BBO's performance in training MLPs. The rest of this paper is organized as follows.
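The paper's exact chaotic map and migration operator are not reproduced here; as a rough illustration only, the sketch below (the function names, the choice of the logistic map, and the rank-based migration rates are all assumptions, not the authors' specification) shows how a chaotic sequence might replace the uniform random draws in a BBO-style migration step over a population of flattened MLP weight vectors.

```python
# Hypothetical sketch of a chaotic BBO migration step (assumed details, not the paper's algorithm).
import numpy as np

def logistic_map(x, r=4.0):
    """One iteration of the logistic map; with r = 4 it behaves chaotically on (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_migration(population, fitness, chaos_state):
    """Apply a BBO-style migration step driven by a chaotic sequence.

    population  : (n_habitats, n_weights) array, each row a flattened MLP weight vector.
    fitness     : (n_habitats,) array, lower is better (e.g. training MSE).
    chaos_state : scalar in (0, 1) carried across generations.
    """
    n, d = population.shape
    ranks = np.argsort(np.argsort(fitness))        # 0 = best habitat
    immigration = (ranks + 1) / n                  # worse habitats import more features
    emigration = 1.0 - immigration                 # better habitats export more features
    new_pop = population.copy()
    for i in range(n):
        for j in range(d):
            chaos_state = logistic_map(chaos_state)      # chaotic draw instead of rand()
            if chaos_state < immigration[i]:
                # pick a source habitat in proportion to its emigration rate
                source = np.random.choice(n, p=emigration / emigration.sum())
                new_pop[i, j] = population[source, j]
    return new_pop, chaos_state
```

In this sketch the chaotic state is threaded through every decision so that consecutive migration choices are deterministic but non-repeating, which is the usual motivation for combining chaotic maps with population-based optimizers.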

Review of the MLP Notation
The Proposed Hybrid BBO for Training an MLP
The Proposed Hybrid CBBO Algorithm for Training an MLP
Experimental Analysis
Findings
Discussion and Conclusions
