Abstract

In experimental psychology, artificial grammars generated by directed graphs are used to test the ability of subjects to implicitly learn the structure of complex rules. We introduce the notation and mathematics needed to view an artificial grammar as the sequence space of a dynamical system. The complexity of the artificial grammar is equated with the topological entropy of the dynamical system and is computed by finding the largest eigenvalue of an associated transition matrix. We develop the necessary mathematics and include relevant examples (one from the implicit learning literature) to show that topological entropy is well defined, intuitive, and easy to compute, and thereby provides a quantitative measure of complexity that can be used to compare data across different implicit learning experiments.
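The abstract's computational recipe can be illustrated numerically. The sketch below, written against a purely hypothetical adjacency matrix (not the grammar analysed in the paper), computes the topological entropy of the grammar's sequence space as the natural logarithm of the largest eigenvalue (the Perron root) of the transition matrix, which is the standard formula for a subshift of finite type.

```python
import numpy as np

# Hypothetical adjacency/transition matrix of a small directed graph
# defining an artificial grammar: A[i, j] = 1 if an arc from node i to
# node j emits a symbol, 0 otherwise.  Illustrative only; not the
# grammar used in the paper.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
])

# The topological entropy of the associated sequence space equals the
# natural log of the spectral radius (largest eigenvalue in modulus,
# which is real and nonnegative by Perron-Frobenius) of the matrix.
eigenvalues = np.linalg.eigvals(A)
perron_root = np.max(np.abs(eigenvalues))
entropy = np.log(perron_root)

print(f"largest eigenvalue: {perron_root:.4f}")
print(f"topological entropy: {entropy:.4f}")
```

A grammar whose graph permits more distinct continuations at each node yields a larger Perron root and hence a larger entropy, which is what makes the quantity usable as a complexity measure for comparing grammars across experiments.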
