Abstract

Neural networks based on high-dimensional random feature generation have become popular under the notions of extreme learning machine (ELM) and reservoir computing (RC). We provide an in-depth analysis of such networks with respect to feature selection, model complexity, and regularization. Starting from an ELM, we show how recurrent connections increase the effective complexity, leading to reservoir networks. In contrast, intrinsic plasticity (IP), a biologically inspired, unsupervised learning rule, acts as a task-specific feature regularizer that tunes the effective model complexity. Combining both mechanisms in the framework of static reservoir computing, we achieve an excellent balance of feature complexity and regularization, which provides an impressive robustness to other model selection parameters like network size, initialization ranges, or the regularization parameter of the output learning. We demonstrate the advantages on several synthetic data sets as well as on benchmark tasks from the UCI repository, providing practical insights into how to use high-dimensional random networks for data processing.
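
For reference, the "regularization parameter of the output learning" refers to the standard ridge (Tikhonov) regression used to train the linear readout of such networks. The notation below (hidden-state matrix $H$ with samples as columns, target matrix $Y$, regularization parameter $\lambda$) is illustrative and not taken from the paper:

$$
W^{\mathrm{out}} \;=\; \operatorname*{arg\,min}_{W}\; \lVert Y - W H \rVert_F^2 + \lambda \lVert W \rVert_F^2 \;=\; Y H^{\top}\bigl(H H^{\top} + \lambda I\bigr)^{-1}.
$$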

Highlights

  • In the last decade, machine learning techniques based on random projections have attracted a lot of attention because in principle they allow for very efficient processing of large and high-dimensional data sets [1]

  • Intrinsic plasticity (IP), a biologically inspired, unsupervised learning rule, acts as a task-specific feature regularizer that tunes the effective model complexity (a sketch of the IP update rule follows this list). Combining both mechanisms in the framework of static reservoir computing, we achieve an excellent balance of feature complexity and regularization, which provides an impressive robustness to other model selection parameters like network size, initialization ranges, or the regularization parameter of the output learning.

  • Picking up the discussion on model selection for the extreme learning machine (ELM) in Section 2.1, we show that the combination of recurrence and feature regularization via intrinsic plasticity (IP) makes the networks less dependent on the specific choice of other model selection parameters and the random initialization
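
The following is a minimal sketch of the kind of online IP rule referred to above, in the style of Triesch's rule for Fermi (logistic) neurons: gain and bias of each neuron are adapted so that its output distribution approaches an exponential distribution with a small desired mean. The parameter names (`mu`, `eta`) and the exact variant are assumptions; the paper may use a different formulation (e.g., a batch version of IP).

```python
import numpy as np

def ip_step(a, b, x, mu=0.2, eta=0.001):
    """One online IP update for a single Fermi neuron with gain a and bias b,
    driven by synaptic input x (Triesch-style rule; illustrative only)."""
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))            # Fermi (logistic) activation
    delta_b = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y ** 2) / mu)
    delta_a = eta / a + delta_b * x                   # gain update reuses the bias update
    return a + delta_a, b + delta_b

# Toy usage: adapt one neuron's transfer function to random synaptic drive.
rng = np.random.default_rng(0)
a, b = 1.0, 0.0
for _ in range(10_000):
    x = rng.normal()                                  # stands in for w_in @ input
    a, b = ip_step(a, b, x)
```

After adaptation, the neuron's outputs are approximately exponentially distributed with mean `mu`, which keeps activations sparse and bounded and thereby acts like a feature regularizer.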


Summary

Introduction

Machine learning techniques based on random projections have attracted a lot of attention because in principle they allow for very efficient processing of large and high-dimensional data sets [1]. These approaches randomly initialize the free parameters of the feature-generating part of a data processing model and restrict learning to linear methods for obtaining a suitable readout function. We obtain input-tuned reservoir networks that are less dependent on the random initialization and less sensitive to the choice of the output regularization parameter. We confirm this in experiments, where we observe consistently good performance over a wide range of network initialization and learning parameters.
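
To make the random-feature setup concrete, here is a minimal sketch of an ELM-style network with a ridge-regression readout; setting `n_steps > 0` iterates a random recurrent map on the static input, which is one way to realize the static reservoir idea discussed in the paper. All names, initialization ranges, and the toy task are assumptions for illustration, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_features(X, n_hidden=100, n_steps=0, a=1.0, b=0.0):
    """Map inputs X (n_samples, n_in) to hidden activations (n_samples, n_hidden)
    using fixed random weights; n_steps > 0 adds recurrent iterations.
    a, b are the activation slope and bias (scalars here; IP adapts these per neuron)."""
    n_in = X.shape[1]
    W_in = rng.uniform(-1.0, 1.0, size=(n_hidden, n_in))        # random input weights (fixed)
    W_rec = rng.uniform(-0.1, 0.1, size=(n_hidden, n_hidden))   # random recurrent weights (fixed)
    H = np.tanh(a * (X @ W_in.T) + b)
    for _ in range(n_steps):                                    # iterate the recurrent map on the static input
        H = np.tanh(a * (X @ W_in.T + H @ W_rec.T) + b)
    return H

def ridge_readout(H, Y, lam=1e-3):
    """Linear readout weights via ridge regression; lam is the output regularization parameter."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)

# Toy regression: only the linear readout is learned, the features stay random.
X = np.linspace(-1.0, 1.0, 200)[:, None]
Y = np.sinc(5.0 * X)
H = random_features(X, n_hidden=100, n_steps=3)
W_out = ridge_readout(H, Y)
Y_hat = H @ W_out
```

Because only `W_out` is trained, model selection reduces to choosing the network size, the initialization ranges, and `lam`; the point made in the paper is that IP combined with recurrence makes performance much less sensitive to these choices.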

Baseline
Model Selection for the ELM
Reservoir Networks as Natural Extension of the ELM
Recurrence Increases the Effective Model Complexity
Recurrence Enhances the Spatial Encoding of Static Inputs
Feature Regularization with Intrinsic Plasticity
Intrinsic Plasticity Revisited
Regulating ELM Complexity through Intrinsic Plasticity
Intrinsic Plasticity in Combination with Recurrence
Increased Model Complexity for More Parameter Robustness
Discussion
Regularization Theory
Attractor Based Reservoir Computing
Mexican Hat Regression Task
Two-Dimensional Sine-Wave Task
Findings
Regression Tasks
