Abstract

Random-based learning paradigms exhibit efficient training algorithms and remarkable generalization performance. However, the computational cost of the training procedure scales with the cube of the number of hidden neurons. The paper presents a novel training procedure for random-based neural networks, which combines ensemble techniques and dropout regularization. This limits the computational complexity of the training phase without significantly affecting classification performance, making the method well suited to Internet of Things (IoT) applications. In the training algorithm, one first generates a pool of random neurons; then, an ensemble of independent sub-networks (each including a fraction of the original pool) is trained; finally, the sub-networks are integrated into one classifier. The experimental validation compared the proposed approach with state-of-the-art solutions, taking into account both generalization performance and computational complexity. To verify the effectiveness in IoT applications, the training procedures were deployed on a pair of commercially available embedded devices. The results showed that the proposed approach improved accuracy overall, with a minor degradation in performance in a few cases. When comparing embedded implementations with conventional architectures, the speedup of the proposed method reached up to 20× on IoT devices.
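The three-step procedure summarized above (generate a pool of random neurons, train independent sub-networks on fractions of the pool, integrate them into one predictor) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pool size `n_pool`, the number of sub-networks `n_subnets`, the tanh activation, and the ridge-regularized least-squares solver are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_ensemble(X, y, n_pool=200, n_subnets=4, reg=1e-3):
    """Sketch of ensemble training over a pool of random neurons.

    1. Generate a pool of random hidden neurons (weights and biases).
    2. Train independent sub-networks, each on a disjoint slice of the pool.
    3. Integrate the sub-networks by averaging their outputs.
    """
    d = X.shape[1]
    # Step 1: random hidden layer, never trained
    W = rng.standard_normal((d, n_pool))
    b = rng.standard_normal(n_pool)
    H = np.tanh(X @ W + b)  # hidden activations for the whole pool

    # Step 2: each sub-network's output weights come from a
    # ridge-regularized least-squares fit on its slice of the pool
    slices = np.array_split(rng.permutation(n_pool), n_subnets)
    subnets = []
    for cols in slices:
        Hs = H[:, cols]
        beta = np.linalg.solve(Hs.T @ Hs + reg * np.eye(len(cols)), Hs.T @ y)
        subnets.append((cols, beta))

    # Step 3: the integrated classifier averages sub-network outputs
    def predict(X_new):
        H_new = np.tanh(X_new @ W + b)
        return np.mean([H_new[:, c] @ beta for c, beta in subnets], axis=0)

    return predict
```

The complexity argument follows directly: solving one least-squares problem over all N pooled neurons costs on the order of N³, while k independent sub-networks of N/k neurons each cost on the order of k·(N/k)³ = N³/k², which is where the training speedup comes from.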

Highlights

  • Edge computing and Internet of Things (IoT) are crucial areas in modern electronics [26, 42], involving important domains such as healthcare [39, 41], intelligent transportation [40], and multimedia communications [38]

  • The paper presents a novel training procedure for random-based neural networks, which combines ensemble techniques and dropout regularization. This limits the computational complexity of the training phase without significantly affecting classification performance, making the method well suited to Internet of Things (IoT) applications

  • To verify the effectiveness in IoT applications, the training procedures were deployed on a pair of commercially available embedded devices

Summary

Introduction

Edge computing and Internet of Things (IoT) are crucial areas in modern electronics [26, 42], involving important domains such as healthcare [39, 41], intelligent transportation [40], and multimedia communications [38]. Deep learning paradigms [14] prove effective in those applications, but resource-constrained devices cannot support the training process [19], and even deploying trained models on embedded systems remains a challenging task. Traditional approaches such as single-layer feed-forward neural networks (SLFNNs) and support vector machines (SVMs) can be trained with a relatively small amount of computational resources. Existing approaches in the literature aimed to improve the generalization capabilities of random-based networks (RBNs) by including some strategy for selecting effective neurons in the eventual predictors; this often came with a parallel increase in computational cost. Dropout regularization is a popular technique for deep network training [43]: the underlying idea is that a network should represent an input sample in several ways, yielding a robust representation of the sample itself. This is attained by switching off a varying subset of neurons during each iteration of the gradient-descent optimization algorithm. To prove the effectiveness of the electronic design, the training algorithm was implemented on a pair of low-power, resource-constrained devices, namely a Broadcom BCM2837B0 (quad-core Cortex-A53) and an Allwinner H3 (quad-core Cortex-A7).
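The dropout mechanism described above, switching off a varying subset of neurons at each training iteration, can be illustrated with a short sketch. The function name, the keep probability `p_keep`, and the "inverted dropout" rescaling are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, p_keep=0.8, training=True):
    """Randomly switch off neurons during training.

    Each unit is kept with probability `p_keep`; surviving activations
    are rescaled (inverted dropout) so their expected value matches the
    inference-time network, where all units stay active.
    """
    if not training:
        return activations
    mask = rng.random(activations.shape) < p_keep
    return activations * mask / p_keep
```

At every iteration of the optimization a fresh mask is drawn, so the network is forced to encode each input sample redundantly across different subsets of neurons, which is precisely the robustness argument made above.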

Contribution
Extreme learning machine
Dropout regularization
Dropout extreme learning machine
Dropout and local ensemble for efficient training
Output: compute the linear predictor in the space
Analysis of computational cost
Input data remapping
Optimization
Model selection
Overall computational cost
Comparison with related works
Generalization performances
Standard machine learning benchmarks
Internet of Things benchmarks
A summary of generalization results
Implementation analysis
Conclusions
Findings
Compliance with ethical standards