Abstract

The Reservoir Computing (RC) framework is said to have the potential to transfer to any input-driven dynamical system, provided two properties are present: (i) a fading memory and (ii) input separability. A typical reservoir consists of a fixed network of recurrently connected processing units; however, recent hardware implementations have shown that reservoirs are not ultimately bound by this architecture. Previously, we demonstrated how the RC framework can be applied to randomly formed carbon nanotube composites to solve computational tasks. Here, we apply the RC framework to an evolvable substrate and compare its performance to an established in materia training technique referred to as evolution in materia. The results show that, by adding a programmable reservoir layer, reservoir computing in materia can significantly outperform the original evolution in materia implementation. This suggests that the RC framework offers improved performance, even on non-temporal tasks, when combined with the evolution in materia technique.
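For readers unfamiliar with the framework the abstract summarizes, the following is a minimal sketch of a conventional software reservoir (an echo state network): a fixed, randomly connected recurrent layer provides fading memory and input separability, and only a linear readout is trained. This is purely illustrative; the network sizes, leak rate, and toy task below are assumptions, and the paper's actual reservoir is a physical carbon nanotube composite trained via evolution in materia, not this simulated network.

```python
import numpy as np

# Illustrative echo state network: the reservoir weights are fixed and random;
# only the linear readout is trained. All constants here are assumed values.
rng = np.random.default_rng(42)

n_inputs, n_reservoir = 1, 100
leak_rate = 0.3          # assumed leak rate, giving the state a fading memory
spectral_radius = 0.9    # scaled below 1 to encourage the echo state property

# Fixed, randomly generated weights: the reservoir itself is never trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        pre = W_in @ np.atleast_1d(u) + W @ x
        x = (1 - leak_rate) * x + leak_rate * np.tanh(pre)
        states[t] = x
    return states

# Toy temporal task: predict the next value of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
target = np.roll(u, -1)

X = run_reservoir(u)

# Train only the readout, here with ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ target)

prediction = X @ W_out
print("train MSE:", np.mean((prediction - target) ** 2))
```

In the in materia setting described above, the simulated recurrent layer is replaced by the physical dynamics of the substrate, while the trained readout (and, in the paper's approach, an evolved configuration of the substrate's control inputs) plays the role of the adjustable layer.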
