Abstract
Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require minimal control effort. Another example is found in the domain of photonics, where recent efforts aim to benefit directly from complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computation within the physical system itself by exploiting its inherent nonlinear dynamics. This, however, often requires the optimization of large numbers of system parameters, related both to the system's structure and to its material properties. In addition, many of these parameters are subject to fabrication variability or to variation over time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize the associated set of parameters that determines their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which encompasses a novel design methodology for smart and highly complex physical systems.
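As a rough illustration of the idea of training the parameters of a differential equation by gradient descent, the sketch below unrolls a discretized nonlinear dynamical system and differentiates a task loss with respect to its physical parameters, which is the essence of backpropagation through time. The damped driven oscillator, its readout, the target signal, and all names are illustrative assumptions, not the models or code used in the paper.

```python
# Minimal sketch (assumed, not the authors' code): BPTT through an
# explicit-Euler discretization of a nonlinear dynamical system.
import jax
import jax.numpy as jnp

def step(state, params, u, dt=0.01):
    # One Euler step of a damped, driven nonlinear oscillator.
    # params = (stiffness k, damping c) are the "physical" parameters we train.
    x, v = state
    k, c = params
    a = -k * jnp.sin(x) - c * v + u          # nonlinear restoring force + drive
    return jnp.array([x + dt * v, v + dt * a])

def simulate(params, inputs, state0=jnp.zeros(2)):
    # Unroll the dynamics over the whole input sequence and collect a readout.
    def body(state, u):
        new_state = step(state, params, u)
        return new_state, new_state[0]        # read out the position x
    _, outputs = jax.lax.scan(body, state0, inputs)
    return outputs

def loss(params, inputs, targets):
    # Mean squared error between the system's readout and a target signal.
    return jnp.mean((simulate(params, inputs) - targets) ** 2)

# Reverse-mode differentiation through the unrolled simulation is BPTT:
# it propagates the error gradient backwards through every time step.
grad_loss = jax.jit(jax.grad(loss))

key = jax.random.PRNGKey(0)
inputs = jax.random.normal(key, (200,))
targets = jnp.tanh(0.01 * jnp.cumsum(inputs))  # arbitrary illustrative target
params = jnp.array([1.0, 0.1])

for _ in range(100):                           # plain gradient descent
    params = params - 0.05 * grad_loss(params, inputs, targets)
```

In this toy setting the trainable quantities are physical constants of the system (stiffness and damping) rather than abstract network weights, which is the sense in which such training methodologies carry over to the design of physical dynamical systems.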
Highlights
The digital computation paradigm has become so dominant that, in the minds of many, the word digital is implicitly assumed whenever computation is mentioned.
In particular, we extend the gradient descent training algorithms known as Real-Time Recurrent Learning (RTRL) and Backpropagation Through Time (BPTT).
Our extensions of BPTT and RTRL are capable of taking into account and exploiting the long-term dynamic effects of the systems under consideration.
Summary
The digital computation paradigm has become so dominant that, in the minds of many, the word digital is implicitly assumed whenever computation is mentioned. This is mainly due to the fact that digital computation is extremely robust against variability and noise. Analogue computers carry the potential to directly exploit the way the dynamics of physical systems respond to external stimuli, continuously transforming their real-valued state. This requires the selection of a physical system with natural dynamics that roughly match the computational requirements of a given task. This approach was originally adopted mainly by the biological community (to study morphogenesis), but it later became the basis for, e.g., Adamatzky's recent work on reaction–diffusion computers [3].