Abstract

An important goal in studying both human and artificial intelligence is to understand how a natural or artificial learning system deals with the uncertainty and ambiguity of the real world. We suggest that the only aspects of a learning environment relevant to the learner are those that make contact with the learner's sensory system. Moreover, in real-world interactions, what the learner perceives critically depends on his own actions, his social partner's actions, and his interactions with the world. In this way, the perception-action loops both within a learner and between the learner and his social partners may provide an embodied solution that significantly simplifies the social and physical learning environment and filters out information irrelevant to the current learning task, ultimately leading to successful learning. In light of this, we report new findings using a novel method that seeks to describe the visual learning environment from a young child's point of view. The method uses a multi-camera sensing environment in which two head-mounted mini cameras are placed on the child's and the parent's foreheads, respectively. The main results are that (1) the adult's and the child's views are fundamentally different when they interact in the same environment; (2) what the child perceives most often depends on his own actions and his social partner's actions; and (3) the actions generated by both social partners provide more constrained and cleaner input that facilitates learning. These findings have broad implications for how one studies and thinks about human and artificial learning systems.
