Abstract

Learning complex symbolic concepts requires a rich hypothesis space, but exploring such spaces is intractable. We describe how sampling algorithms can be brought to bear on this problem, leading to the prediction that humans will exhibit the same failure modes as sampling algorithms. In particular, we show that humans get stuck in “garden paths”—initially promising hypotheses that turn out to be sub-optimal in light of subsequent data. Susceptibility to garden paths is sensitive to the availability of cognitive resources. These phenomena are well-explained by a Bayesian model in which humans stochastically update a sample-based representation of the posterior over a compositional hypothesis space. Our model provides a framework for understanding “bounded rationality” in symbolic concept learning.
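The mechanism the abstract alludes to can be illustrated with a toy sketch (our own construction, not the paper's model or stimuli): an exact Bayesian learner over a small compositional hypothesis space, next to a sample-based approximation whose resampling step, without rejuvenation, can permanently commit to an initially promising hypothesis. All hypothesis names, stimuli, and parameters below are hypothetical choices for illustration.

```python
import random

# Hypothetical toy hypothesis space: rules over two binary features.
# A "garden path" can arise when early examples are consistent with a
# simple rule that a later example rules out.
HYPOTHESES = {
    "f1":        lambda s: s[0] == 1,
    "f2":        lambda s: s[1] == 1,
    "f1_and_f2": lambda s: s[0] == 1 and s[1] == 1,
    "f1_or_f2":  lambda s: s[0] == 1 or s[1] == 1,
}

def likelihood(h, stimulus, label, eps=0.05):
    # Noisy-label likelihood: a hypothesis predicts the observed label
    # with probability 1 - eps.
    return (1 - eps) if HYPOTHESES[h](stimulus) == label else eps

# Early trials are consistent with several simple rules; the last two
# trials single out the conjunction "f1_and_f2" as the true concept.
DATA = [((1, 1), True), ((0, 0), False), ((1, 1), True),
        ((1, 0), False), ((0, 1), False)]

def exact_posterior(data):
    # Uniform prior; exact Bayesian update over the full space.
    post = {h: 1.0 for h in HYPOTHESES}
    for s, y in data:
        for h in post:
            post[h] *= likelihood(h, s, y)
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def particle_filter(data, n_particles=8, seed=0):
    # Sample-based approximation of the posterior. Resampling with no
    # rejuvenation step loses hypothesis diversity over trials, so an
    # early commitment can become permanent even when later data favor
    # a different hypothesis -- a sampling-algorithm "garden path".
    rng = random.Random(seed)
    particles = [rng.choice(list(HYPOTHESES)) for _ in range(n_particles)]
    for s, y in data:
        weights = [likelihood(h, s, y) for h in particles]
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return particles
```

With more particles (more "cognitive resources"), the sample-based learner tracks the exact posterior more closely; with very few, runs that happen to start without the true hypothesis among their samples can never recover it, which is the qualitative failure mode the abstract predicts for humans.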
