Machine Learning Using Cellular Automata Based Feature Expansion and Reservoir Computing
- Research Article
- 123 citations
- 10.1063/5.0024890
- Jan 1, 2021
- Chaos: An Interdisciplinary Journal of Nonlinear Science
Machine learning has become a widely popular and successful paradigm, especially in data-driven science and engineering. A major application problem is data-driven forecasting of future states of a complex dynamical system. Artificial neural networks have evolved as a clear leader among many machine learning approaches, and recurrent neural networks are considered particularly well suited for forecasting dynamical systems. In this setting, echo-state networks, or reservoir computers (RCs), have emerged for their simplicity and computational-complexity advantages. Instead of fully training the network, an RC trains only the readout weights by a simple, efficient least squares method. What is perhaps quite surprising is that an RC nonetheless succeeds in making high quality forecasts, competitively with more intensively trained methods, even if not the leader. There remains the unanswered question of why and how an RC works at all despite its randomly selected weights. To this end, this work analyzes a further simplified RC in which the internal activation function is the identity. Our simplification is presented not to tune or improve an RC, but to analyze what we take to be the real surprise: not that the method fails to work better, but that such random methods work at all. We explicitly connect the RC with linear activation and linear readout to the well-developed time-series literature on vector autoregressive (VAR) processes, including representability theorems via the Wold theorem; such a linear RC already performs reasonably for short-term forecasts. In the case of a linear activation with the now-popular quadratic readout, we explicitly connect the RC to a nonlinear VAR, which performs quite well. Furthermore, we associate this paradigm with the now widely popular dynamic mode decomposition; thus, these three are in a sense different faces of the same thing. We illustrate our observations on popular benchmark examples, including the Mackey-Glass differential delay equation and the Lorenz63 system.
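To make the linear-RC/VAR connection above concrete, here is a minimal Python/NumPy sketch (our illustration, not code from the paper): a reservoir with identity activation and a least-squares readout is fit to a toy AR(2) signal and compared against a directly fit VAR. The signal, reservoir size, and lag order are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar signal (not from the paper): a noisy AR(2) process.
T = 2000
u = np.zeros(T)
for t in range(2, T):
    u[t] = 1.5 * u[t - 1] - 0.75 * u[t - 2] + 0.1 * rng.standard_normal()

# Linear reservoir: identity activation, r_t = A r_{t-1} + W_in u_{t-1}.
N = 50
A = rng.standard_normal((N, N))
A *= 0.8 / max(abs(np.linalg.eigvals(A)))    # spectral radius < 1 (fading memory)
W_in = rng.standard_normal(N)

R = np.zeros((T, N))
for t in range(1, T):
    R[t] = A @ R[t - 1] + W_in * u[t - 1]

# Train ONLY the linear readout by least squares: u_t ~ W_out . r_t.
W_out, *_ = np.linalg.lstsq(R[100:], u[100:], rcond=None)
r_next = A @ R[-1] + W_in * u[-1]            # state after absorbing the last sample
rc_pred = r_next @ W_out                     # one-step RC forecast of u_T

# Direct VAR(k) fit on k lagged observations, for comparison.
k = 10
X = np.column_stack([u[k - j - 1 : T - j - 1] for j in range(k)])
a, *_ = np.linalg.lstsq(X, u[k:], rcond=None)
var_pred = u[-1 : -k - 1 : -1] @ a           # one-step VAR forecast of u_T

print(f"linear-RC forecast: {rc_pred:.4f}   VAR({k}) forecast: {var_pred:.4f}")
```

With identity activation the reservoir state is a linear functional of past inputs, so the trained readout implicitly realizes a long-lag VAR, which is the observation the abstract describes.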
- Research Article
- 10.31861/sisiot2023.2.02009
- Dec 30, 2023
- Security of Infocommunication Systems and Internet of Things
In the dynamic landscape of information security and telecommunications, this paper delves into the multifaceted realm of machine-learning-based methods, with a particular focus on their application in chaotic systems. An introduction sets the stage for a thorough examination of the major benefits that reservoir computing (RC) and machine learning (ML) offer in telecommunications. The first segment of the study scrutinizes the role of machine learning in fortifying information security. With the ever-evolving nature of cyber threats, understanding the nuances of ML becomes imperative. The article highlights key advancements and features of ML that bolster data security, providing a nuanced perspective on its efficacy in addressing the intricate challenges posed by contemporary information-security paradigms. The discussion then expands to reservoir computing and its implications for telecommunications. Reservoir computing, with its distinctive approach of processing information through dynamical systems, has emerged as a promising technique. The article dissects its applications in the telecommunications sector, shedding light on how reservoir computing improves information processing and transmission efficiency within complex networks. A pivotal aspect of the paper is its exploration of the double-reservoir solution, an approach that combines the strengths of reservoir computing for enhanced performance. This solution is dissected in detail, uncovering both its prospects and its challenges. Incorporating double-reservoir solutions into chaotic systems represents a shift in how system dynamics are optimized and a notable advance in tackling important telecommunications difficulties. The paper not only offers insights into this solution but also candidly describes the challenges of implementing such a model, acknowledging that there is no 'perfect' solution to so complex a problem. Overall, the paper provides a comprehensive view of machine-learning-based solutions for information-security and telecommunications challenges. By unraveling the capabilities of both machine learning and reservoir computing, it opens avenues for further research and development in harnessing these technologies to fortify secure and efficient telecommunications in the face of constantly evolving threats. The insights presented lay the groundwork for future innovations, urging researchers and practitioners to investigate the synergy of machine learning and chaotic systems more deeply.
- Research Article
- 26 citations
- 10.1088/2632-072x/ac0b00
- Jul 2, 2021
- Journal of Physics: Complexity
An emerging paradigm for predicting the state evolution of chaotic systems is machine learning with reservoir computing, the core of which is a dynamical network of artificial neurons. Through training with measured time series, a reservoir machine can be harnessed to replicate the evolution of the target chaotic system for some amount of time, typically about half a dozen Lyapunov times. Recently, we developed a reservoir computing framework with an additional parameter channel for predicting system collapse and the chaotic transients associated with a crisis. It was found that the crisis point, after which transient chaos emerges, can be accurately predicted. The idea of adding a parameter channel to reservoir computing has also been used by others to predict bifurcation points and distinct asymptotic behaviors. In this paper, we address three issues associated with machine-generated transient chaos. First, we report the results of a detailed study of the statistical behaviors of transient chaos generated by our parameter-aware reservoir computing machine. When multiple time series from a small number of distinct values of the bifurcation parameter, all in the regime of attracting chaos, are deployed to train the reservoir machine, it can generate the correct dynamical behavior in the regime of transient chaos of the target system, in the sense that the basic statistical features of the machine-generated transient chaos agree with those of the real system. Second, we demonstrate that our machine learning framework can reproduce intermittency in the target system. Third, we consider a system for which the known methods of sparse optimization fail to predict crisis and demonstrate that our reservoir computing scheme can solve this problem. These findings have potential applications in anticipating system collapse as induced by, e.g., a parameter drift that places the system in a transient regime.
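A minimal sketch of the parameter-channel idea, assuming the standard echo-state update (the logistic map, the parameter values, and all sizes below are our illustrative choices, not the paper's benchmarks): the bifurcation parameter is simply appended to the input at every time step, and the trained machine is then run in closed loop at an unseen parameter value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for "attracting chaos at several parameter values":
# the logistic map x -> r*x*(1-x) at a few r values.
def logistic_series(r, T=1000, x0=0.4):
    x = np.empty(T); x[0] = x0
    for t in range(T - 1):
        x[t + 1] = r * x[t] * (1.0 - x[t])
    return x

N = 200
A = rng.standard_normal((N, N)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
W_in = rng.standard_normal((N, 2))          # channel 0: state x, channel 1: parameter r

states, targets = [], []
for r in (3.7, 3.8, 3.9):                   # training parameters, chaotic regime
    x = logistic_series(r)
    h = np.zeros(N)
    for t in range(len(x) - 1):
        h = np.tanh(A @ h + W_in @ np.array([x[t], r]))   # parameter fed every step
        if t > 50:                           # discard the initial transient
            states.append(h.copy()); targets.append(x[t + 1])

S, y = np.array(states), np.array(targets)
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)   # ridge readout

# Closed-loop generation at an UNSEEN parameter value.
r_test, x_hat, h = 3.85, 0.4, np.zeros(N)
for _ in range(200):
    h = np.tanh(A @ h + W_in @ np.array([x_hat, r_test]))
    x_hat = h @ W_out
print("last generated value at r = 3.85:", x_hat)
```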
- Research Article
- 16 citations
- 10.1063/5.0033870
- Jan 1, 2021
- Chaos: An Interdisciplinary Journal of Nonlinear Science
Can a neural network trained on the time series of system A be used to predict the evolution of system B? This problem, known as transfer learning in a broad sense, is of great importance in machine learning and data mining, yet it has not been addressed for chaotic systems. Here, we investigate transfer learning of chaotic systems from the perspective of synchronization-based state inference, in which a reservoir computer trained on chaotic system A is used to infer the unmeasured variables of chaotic system B, where A differs from B in either parameters or dynamics. It is found that if systems A and B differ only in parameters, the reservoir computer can be well synchronized to system B. However, if systems A and B differ in dynamics, the reservoir computer generally fails to synchronize with system B. Knowledge transfer along a chain of coupled reservoir computers is also studied, and it is found that, although the reservoir computers are trained on different systems, the unmeasured variables of the driving system can be successfully inferred by the remote reservoir computer. Finally, in an experiment with a chaotic pendulum, we demonstrate that knowledge learned from a modeling system can be transferred and used to predict the evolution of the experimental system.
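A minimal sketch of synchronization-based state inference (our construction; the paper's actual systems, integrator, and hyperparameters may differ): a reservoir driven by the x variable of a Lorenz system A is trained to output the unmeasured z, and the same fixed machine is then driven by the x of a system B that differs only in the parameter rho.

```python
import numpy as np

rng = np.random.default_rng(2)

def lorenz(T, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    v = np.array([1.0, 1.0, 1.0]); out = np.empty((T, 3))
    for t in range(T):
        dx = sigma * (v[1] - v[0])
        dy = v[0] * (rho - v[2]) - v[1]
        dz = v[0] * v[1] - beta * v[2]
        v = v + dt * np.array([dx, dy, dz])   # Euler for brevity; use RK4 in practice
        out[t] = v
    return out

N = 300
A = rng.standard_normal((N, N)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
W_in = rng.standard_normal(N)

def drive(x_series):
    """Run the reservoir driven by an observed scalar series; return all states."""
    h, H = np.zeros(N), np.empty((len(x_series), N))
    for t, x in enumerate(x_series):
        h = np.tanh(A @ h + W_in * x); H[t] = h
    return H

# Train on system A: observe x, infer the unmeasured variable z.
trajA = lorenz(5000, rho=28.0)
H = drive(trajA[:, 0])
W_out = np.linalg.solve(H[500:].T @ H[500:] + 1e-6 * np.eye(N),
                        H[500:].T @ trajA[500:, 2])

# Deploy on system B (different rho): the x of B drives the same fixed reservoir.
trajB = lorenz(3000, rho=35.0)
z_inferred = drive(trajB[:, 0]) @ W_out
print("mean |z_inferred - z_true| on B:",
      np.mean(np.abs(z_inferred[500:] - trajB[500:, 2])))
```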
- Research Article
- 11 citations
- 10.1016/j.neunet.2023.10.054
- Nov 7, 2023
- Neural Networks
Recent work has shown that machine learning (ML) models can skillfully forecast the dynamics of unknown chaotic systems. Short-term predictions of the state evolution and long-term predictions of the statistical patterns of the dynamics ("climate") can be produced by employing a feedback loop, whereby the model is trained to predict forward only one time step, and then the model output is used as input for multiple time steps. In the absence of mitigating techniques, however, this feedback can result in artificially rapid error growth ("instability"). One established mitigating technique is to add noise to the ML model's training input. Based on this technique, we formulate a new penalty term in the loss function for ML models with memory of past inputs that deterministically approximates the effect of many small, independent noise realizations added to the model input during training. We refer to this penalty and the resulting regularization as Linearized Multi-Noise Training (LMNT). We systematically examine the effects of LMNT, input noise, and other established regularization techniques in a case study using reservoir computing, a machine learning method based on recurrent neural networks, to predict the spatiotemporally chaotic Kuramoto-Sivashinsky equation. We find that reservoir computers trained with noise or with LMNT produce climate predictions that appear to be indefinitely stable and have a climate very similar to that of the true system, while their short-term forecasts are substantially more accurate than those of models trained with other regularization techniques. Finally, we show that the deterministic aspect of our LMNT regularization facilitates fast tuning of the reservoir computer's regularization hyperparameter.
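LMNT itself is a penalty term derived in the paper; the sketch below only illustrates the baseline technique it deterministically approximates, namely adding small input noise during readout training (the toy signal, reservoir size, and noise levels are our choices, not the paper's Kuramoto-Sivashinsky benchmark).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy chaotic signal (ours, not the paper's): a logistic-map series.
T = 3000
u = np.empty(T); u[0] = 0.4
for t in range(T - 1):
    u[t + 1] = 3.9 * u[t] * (1.0 - u[t])

N = 200
A = rng.standard_normal((N, N)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
W_in = rng.standard_normal(N)

def train_readout(noise_std):
    """Drive the reservoir with noise added to the TRAINING input only."""
    h, H = np.zeros(N), np.empty((T - 1, N))
    for t in range(T - 1):
        h = np.tanh(A @ h + W_in * (u[t] + noise_std * rng.standard_normal()))
        H[t] = h
    S, y = H[100:], u[101:]                  # state after u_t predicts u_{t+1}
    return np.linalg.solve(S.T @ S + 1e-8 * np.eye(N), S.T @ y)

def closed_loop(W_out, steps=500):
    """Feed the model's own output back in; instability shows up as blow-up."""
    h, x = np.zeros(N), u[0]
    for _ in range(steps):
        h = np.tanh(A @ h + W_in * x)
        x = h @ W_out
    return x

for s in (0.0, 1e-3, 1e-2):
    print(f"noise std {s:g}: final closed-loop value {closed_loop(train_readout(s)):.4f}")
```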
- Research Article
- 14 citations
- 10.1155/2018/6953836
- Jan 1, 2018
- Complexity
We investigate the ways in which a machine learning architecture known as Reservoir Computing learns concepts such as "similar" and "different", and other relationships between image pairs, and generalizes these concepts to previously unseen classes of data. We present two Reservoir Computing architectures, which loosely resemble neural dynamics, and show that a Reservoir Computer (RC) trained to identify relationships between image pairs drawn from a subset of training classes generalizes the learned relationships to substantially different classes unseen during training. We demonstrate our results on the simple MNIST handwritten digit database as well as a database of depth maps of visual scenes in videos taken from a moving camera. We consider image pair relationships such as: images from the same class; images from the same class with one image superposed with noise, rotated 90°, blurred, or scaled; and images from different classes. We observe that the reservoir acts as a nonlinear filter projecting the input into a higher dimensional space in which the relationships are separable; i.e., the reservoir system state trajectories display different dynamical patterns that reflect the corresponding input pair relationships. Thus, as opposed to training in the entire high-dimensional reservoir space, the RC only needs to learn characteristic features of these dynamical patterns, allowing it to perform well with very few training examples compared with conventional machine learning feed-forward techniques such as deep learning. In generalization tasks, we observe that RCs perform significantly better than state-of-the-art feed-forward pair-based architectures such as convolutional and deep Siamese Neural Networks (SNNs). We also show that RCs can generalize not only relationships but also combinations of relationships, providing robust and effective image pair classification. Our work helps bridge the gap between explainable machine learning with small datasets and biologically inspired analogy-based learning, pointing to new directions in the investigation of learning processes.
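A rough sketch of the pair-relationship setup, using synthetic stand-ins for the image data (all shapes, the prototype-plus-noise "classes", and the same/different labeling scheme below are our illustrative assumptions, not the paper's architecture): two images are streamed into the reservoir row-by-row in parallel, and a single linear readout classifies the resulting trajectory endpoint.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for MNIST pairs: 8x8 "images" = class prototype + noise.
D, n_classes = 8, 5
prototypes = rng.standard_normal((n_classes, D, D))
def sample(c): return prototypes[c] + 0.3 * rng.standard_normal((D, D))

N = 300
A = rng.standard_normal((N, N)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
W_in = rng.standard_normal((N, 2 * D))      # one row from each image per time step

def final_state(img1, img2):
    h = np.zeros(N)
    for row in range(D):                    # both images streamed row-by-row
        h = np.tanh(A @ h + W_in @ np.concatenate([img1[row], img2[row]]))
    return h

# Small training set of "same class" vs "different class" pairs.
X, y = [], []
for _ in range(400):
    same = rng.random() < 0.5
    c1 = rng.integers(n_classes)
    c2 = c1 if same else (c1 + 1 + rng.integers(n_classes - 1)) % n_classes
    X.append(final_state(sample(c1), sample(c2))); y.append(1.0 if same else 0.0)
X, y = np.array(X), np.array(y)

W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)   # linear readout only
print("training accuracy:", np.mean((X @ W_out > 0.5) == (y > 0.5)))
```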
- Research Article
- 11 citations
- 10.1016/j.neunet.2021.06.031
- Jul 6, 2021
- Neural Networks
Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models
- Conference Article
- 4 citations
- 10.1109/ijcnn.2019.8852280
- Jul 1, 2019
Reservoir computing is a computational framework that was originally based on software recurrent neural networks and has recently been realized with physical systems as well. In our previous paper [Nakane et al., IEEE ACCESS vol. 6, p. 4462, 2018], we proposed a spin-wave-based reservoir computing device with multiple input/output electrodes and demonstrated its high generalization ability in the estimation of input-signal parameters. To successfully execute many types of estimation tasks with machine learning, it is necessary to investigate the fundamental properties of spin-wave-based reservoir computing, particularly the relation between its input and output. Against this background, the purposes of this work are to demonstrate a different estimation task with pulse input signals and to analyze the properties of the spin waves that play important roles in the task. We first describe our approach to obtaining spin waves with features useful for reservoir computing, by considering the fundamental properties of spin waves and feasible device technologies. Then, we investigate the detailed characteristics of locally excited spin waves in a garnet film by micromagnetic simulation. Using the resultant spin waves, we demonstrate a pulse-interval estimation task and achieve high diversity in the time-sequential signals generated by the spin-wave-based reservoir. The spin-wave-based device is highly promising hardware for next-generation machine-learning electronics.
- Conference Article
- 1 citation
- 10.1109/icecs202256217.2022.9971045
- Oct 24, 2022
In reservoir computing, dynamical systems are used to drive state-of-the-art machine learning with small training sets and minimal computing resources. Neuromorphic (brain-inspired) computing promises to further improve reservoir computing through energy-efficient spiking neural implementations. Here we propose an analog circuit design for reservoir computing using OZ spiking neurons, STDP (spike-timing-dependent plasticity) synapses, and PES (prescribed error sensitivity) learning circuitry. We evaluated our design at small scale on the Iris flower data set, demonstrating the potential of neuromorphic analog hardware for reservoir computing.
- Research Article
- 1 citation
- 10.1140/epjs/s11734-022-00693-5
- Oct 10, 2022
- The European Physical Journal Special Topics
Precipitation, as meteorological data, is closely tied to human life. For this reason, we hope to propose a new method to forecast it more accurately. In this article, we aim to forecast precipitation by reservoir computing with some additional processing steps. The concept of reservoir computing emerged from a specific machine learning paradigm characterized by a three-layered architecture (input, reservoir, and output layers). What distinguishes it from other machine learning algorithms is that only the output layer is trained and optimized for particular tasks. Since precipitation data are non-smooth, their prediction is very difficult via classical methods for predicting nonlinear time series. For the precipitation data to be predicted, we take a first-order moving average to smooth the series, then take the logarithm of the smoothed nonzero data and assign the same negative constant to the smoothed zero data to obtain a new series. We train on the resulting series by reservoir computing and obtain a prediction of its future values. After applying the exponential function, the predictions for the original precipitation data are obtained. This indicates that reservoir computing combined with such processing can potentially yield accurate precipitation forecasts.
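The preprocessing pipeline described above can be sketched as follows (the synthetic rain record, window length, and negative constant are our illustrative choices; the RC forecasting step itself is left as a placeholder):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for a daily precipitation record: many zeros, bursty positives.
precip = np.where(rng.random(500) < 0.6, 0.0, rng.gamma(2.0, 3.0, 500))

# Step 1: first-order moving average to smooth the non-smooth series.
window = 3                                   # window length is our choice
smoothed = np.convolve(precip, np.ones(window) / window, mode="valid")

# Step 2: log-transform nonzero values; map zeros to a fixed negative constant.
C = -5.0                                     # illustrative constant
series = np.where(smoothed > 0, np.log(np.maximum(smoothed, 1e-12)), C)

# ... train a reservoir computer on `series` and forecast its future values ...
forecast = series[-1]                        # placeholder for the RC one-step forecast

# Step 3: invert the transform to recover precipitation units.
precip_forecast = 0.0 if np.isclose(forecast, C) else np.exp(forecast)
print("forecast in original units:", precip_forecast)
```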
- Conference Article
- 3 citations
- 10.1109/cse-euc-dcabes.2016.230
- Aug 1, 2016
The aim of this presentation is to show how various ideas from the nonlinear stability theory of functional differential systems, stochastic modeling, and machine learning can be put together to create an approximating model that explains the working mechanisms behind a certain type of reservoir computer. Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on time-delay-based reservoir computers, which have been physically implemented using optical and electronic systems and have shown unprecedented data processing rates. Reservoir computing is well known for the ease of its associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. We address the reservoir design problem, which remains the biggest challenge to the applicability of this information processing scheme. Our results use the available information on optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.
- Research Article
- 515 citations
- 10.1063/1.5010300
- Dec 1, 2017
- Chaos: An Interdisciplinary Journal of Nonlinear Science
We use recent advances in the machine learning area known as "reservoir computing" to formulate a method for model-free estimation from data of the Lyapunov exponents of a chaotic process. The technique uses a limited time series of measurements as input to a high-dimensional dynamical system called a "reservoir." After the reservoir's response to the data is recorded, linear regression is used to learn a large set of parameters, called the "output weights." The learned output weights are then used to form a modified autonomous reservoir designed to be capable of producing an arbitrarily long time series whose ergodic properties approximate those of the input signal. When successful, we say that the autonomous reservoir reproduces the attractor's "climate." Since the reservoir equations and output weights are known, we can compute the derivatives needed to determine the Lyapunov exponents of the autonomous reservoir, which we then use as estimates of the Lyapunov exponents for the original input generating system. We illustrate the effectiveness of our technique with two examples, the Lorenz system and the Kuramoto-Sivashinsky (KS) equation. In the case of the KS equation, we note that the high dimensional nature of the system and the large number of Lyapunov exponents yield a challenging test of our method, which we find the method successfully passes.
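A minimal sketch of the final step described above: estimating Lyapunov exponents of the autonomous reservoir by the standard QR (Benettin-style) iteration. The untrained random W_out below merely stands in for a trained readout, so the printed exponents are illustrative only; everything here is our construction, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(6)

# After training, the autonomous reservoir is the map  r <- tanh(A r + W_in (W_out r)),
# whose Jacobian at the new state is  diag(1 - r_new**2) @ (A + W_in W_out).
N = 100
A = rng.standard_normal((N, N)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
W_in = rng.standard_normal((N, 1))
W_out = 0.1 * rng.standard_normal((1, N))    # stand-in for the trained output weights

k = 5                                        # number of exponents to estimate
r = 0.1 * rng.standard_normal(N)
Q = np.linalg.qr(rng.standard_normal((N, k)))[0]
log_sums, steps = np.zeros(k), 2000

M = A + W_in @ W_out                          # constant part of the Jacobian
for _ in range(steps):
    r = np.tanh(M @ r)                        # advance the autonomous reservoir
    J = (1.0 - r**2)[:, None] * M             # Jacobian of the map at the new state
    Q, R = np.linalg.qr(J @ Q)                # re-orthonormalize tangent vectors
    log_sums += np.log(np.abs(np.diag(R)))    # accumulate local stretching rates

print("leading Lyapunov exponents (per step):", log_sums / steps)
```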
- Research Article
- 1 citation
- 10.1016/j.cej.2024.155651
- Sep 11, 2024
- Chemical Engineering Journal
Highly textured CMOS-compatible hexagonal boron nitride-based neuristor for reservoir computing
- Book Chapter
- 1 citation
- 10.1007/978-3-030-90539-2_4
- Jan 1, 2021
The mathematical concept of chaos was introduced by Edward Lorenz in the early 1960s while attempting to represent atmospheric convection through a two-dimensional fluid flow with an imposed temperature difference in the vertical direction. Since then, chaotic dynamical systems have been accepted as a foundation of the meteorological sciences and represent an indispensable testbed for weather and climate forecasting tools. Operational weather forecasting platforms rely on costly models based on partial differential equations (PDEs) that run continuously on high-performance computing architectures. Machine learning (ML)-based low-dimensional surrogate models can be viewed as a cost-effective alternative to such high-fidelity simulation platforms. In this work, we propose an ML method based on Reservoir Computing - Echo State Neural Network (RC-ESN) to accurately predict the evolutionary states of chaotic systems. We start with the baseline Lorenz-63 and Lorenz-96 systems and show that RC-ESN is extremely effective in consistently predicting time series as measured by Pearson's cross-correlation similarity. RC-ESN can accurately forecast Lorenz systems many Lyapunov time units into the future. In a practical numerical example, we applied RC-ESN combined with space-only proper orthogonal decomposition (POD) to build a reduced-order model (ROM) that produces sequential short-term forecasts of pollution dispersion over the continental USA. We use GEOS-CF simulated data to assess our RC-ESN ROM. Numerical experiments show reasonable results for such a highly complex atmospheric pollution system.
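The POD stage of such a ROM can be sketched as follows (random snapshots stand in for the GEOS-CF fields, and the energy threshold is our choice, not the chapter's): the RC-ESN is then trained to advance the reduced coefficients in time rather than the full grid.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical snapshot matrix: n_space grid values x n_time snapshots.
n_space, n_time = 4096, 600
snapshots = rng.standard_normal((n_space, n_time))   # replace with real field data

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

# Keep enough modes for, say, 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1
modes = U[:, :k]                                     # space-only POD basis

coeffs = modes.T @ (snapshots - mean)                # k x n_time reduced coordinates
# ... an RC-ESN is trained to advance `coeffs` in time; a forecast a_hat is
# lifted back to the full field via  mean + modes @ a_hat ...
print("reduced from", n_space, "grid values to", k, "POD coefficients per snapshot")
```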
- Research Article
- 1 citation
- 10.1109/tnnls.2022.3172586
- Jun 1, 2022
- IEEE Transactions on Neural Networks and Learning Systems
With the penetration of artificial intelligence (AI) technology into industrial applications, not only computational effectiveness but also computational efficiency in machine learning (ML) methods has been increasingly demanded. Reservoir computing (RC) is an ML framework leveraging a dynamic reservoir for a nonlinear transformation of sequential inputs and a readout for mapping the reservoir state to a desired output. Since only the readout is trained with a simple learning algorithm, RC has attracted much attention as a promising approach to enhance compatibility between high computational performance and low learning cost. In addition, recent studies on physical reservoirs implemented with various physical substrates have boosted the potential of RC in the development of effective and efficient AI hardware. Therefore, it is time to further explore the new frontiers in extremely efficient RC.
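As a concrete illustration of the "only the readout is trained" point, here is a minimal generic RC sketch (all sizes and the next-step target are our arbitrary choices) that counts fixed versus trained weights:

```python
import numpy as np

rng = np.random.default_rng(8)

# Generic RC: reservoir weights fixed at random; ONLY the readout W_out is fit.
N, d_in, d_out = 500, 3, 3
A = rng.standard_normal((N, N)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
W_in = rng.standard_normal((N, d_in))

def run(U):
    """Nonlinear transformation of a sequential input U (T x d_in)."""
    h, H = np.zeros(N), np.empty((len(U), N))
    for t, u in enumerate(U):
        h = np.tanh(A @ h + W_in @ u); H[t] = h
    return H

U = rng.standard_normal((2000, d_in))         # stand-in input sequence
Y = np.roll(U, -1, axis=0)                    # stand-in target: the next input
H = run(U)
W_out = np.linalg.solve(H.T @ H + 1e-6 * np.eye(N), H.T @ Y)   # one ridge solve

print("fixed weights:", A.size + W_in.size, "  trained weights:", W_out.size)
```

Training reduces to a single linear solve over a few thousand readout weights, while the quarter-million reservoir weights stay untouched, which is the efficiency argument the abstract makes.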