Abstract

Dynamical systems often generate distinct outputs from different initial conditions, and one can infer the corresponding input configuration given an output. This property captures the essence of information encoding and decoding. Here, we demonstrate the use of self-organized patterns that generate high-dimensional outputs, combined with machine learning, to achieve distributed information encoding and decoding. Our approach exploits a critical property of many natural pattern-formation systems: in repeated realizations, each initial configuration generates similar but not identical output patterns due to randomness in the patterning process. However, for sufficiently small randomness, different groups of patterns that arise from different initial configurations can be distinguished from one another. Modulating the pattern-generation process and the training of the machine learning model can tune the tradeoff between encoding capacity and security. We further show that this strategy is scalable by implementing the encoding and decoding of all characters of the standard English keyboard.
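
The abstract does not specify the patterning system or the classifier; the sketch below is only an illustration of the general scheme, assuming a toy noisy pattern generator (a symbol-keyed 2D sinusoid standing in for a genuine self-organized process) and an off-the-shelf scikit-learn classifier as the decoder. Symbol count, grid size, and noise level are arbitrary choices for the demo.

```python
# Minimal sketch (not the authors' implementation): encode each symbol as the
# "initial condition" of a noisy pattern generator, then train a classifier to
# decode the symbol back from the resulting high-dimensional pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
GRID = 32          # pattern resolution (assumed)
NOISE = 0.3        # patterning randomness; raising it blurs the per-symbol
                   # groups and lowers decoding accuracy (capacity/security knob)

def generate_pattern(symbol: int, noise: float = NOISE) -> np.ndarray:
    """One realization of the pattern seeded by `symbol`.

    Repeated calls with the same symbol give similar but not identical
    patterns, mimicking randomness in a self-organizing process.
    """
    x, y = np.meshgrid(np.linspace(0, 1, GRID), np.linspace(0, 1, GRID))
    freq, phase = 2 + symbol, 0.7 * symbol                  # symbol-dependent parameters
    clean = np.sin(2 * np.pi * freq * x + phase) * np.cos(2 * np.pi * freq * y)
    return (clean + noise * rng.standard_normal(clean.shape)).ravel()

# Build a dataset: a small "alphabet" of symbols, many noisy realizations each.
symbols = list(range(8))
X = np.array([generate_pattern(s) for s in symbols for _ in range(200)])
y = np.array([s for s in symbols for _ in range(200)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# The decoder: any classifier that separates the per-symbol pattern groups.
decoder = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("decoding accuracy:", accuracy_score(y_test, decoder.predict(X_test)))
```

In this toy setting, decoding accuracy depends on how large the noise is relative to the separation between the symbol-specific patterns, which mirrors the capacity/security tradeoff described in the abstract.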
