Abstract
Learning in natural environments is often characterized by some degree of inconsistency in the input. Such inconsistencies arise, for example, when a learner draws on more than one source, or when environmental noise distorts incoming information; as a result, the learning task becomes ambiguous. In this study, we investigate how learners handle such situations. We focus on a setting in which a learner receives and processes a sequence of utterances to master associations between objects and their labels, and in which the source is inconsistent by design: it uses both "correct" and "incorrect" object-label pairings. We hypothesize that the outcome of learning may depend on the order of presentation. To this end, we consider two types of symbolic learning procedures: the Object-Label (OL) and the Label-Object (LO) process. In the OL process, the learner is first exposed to the object and then to the label; in the LO process, this order is reversed. We perform experiments with human subjects and also construct a computational model based on a nonlinear stochastic reinforcement learning algorithm. Experimentally, we observe that OL learners are generally better at processing inconsistent input than LO learners. We show that the patterns observed in the learning experiments can be reproduced in simulations if the model includes (a) an ability to regularize the input (and also to do the opposite, i.e., to undermatch) and (b) an ability to take account of implicit negative evidence (i.e., interactions among different objects/labels). The model suggests that while both types of learners use implicit negative evidence in a similar way, they differ in regularization patterns: OL learners regularize the input, whereas LO learners undermatch. As a result, OL learners form a more consistent system of image-utterance associations, despite the ambiguous learning task.
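To make the two model ingredients concrete, the following is a minimal illustrative sketch, not the authors' actual model or parameterization. A nonlinear exponent on the choice rule (here `beta`, a name we introduce) captures regularization when `beta > 1` and undermatching when `beta < 1`, and a lateral penalty on competing pairings (here `lateral_penalty`, also our assumption) stands in for implicit negative evidence. All class, parameter, and token names are hypothetical.

```python
import random

class AssociationLearner:
    """Sketch of a stochastic reinforcement learner over object-label
    associations (illustrative only; not the paper's exact model).

    beta > 1  -> regularization: choices more consistent than the input.
    beta < 1  -> undermatching: choices less consistent than the input.
    lateral_penalty -> implicit negative evidence: reinforcing one pairing
    weakens the competing pairings for the same object.
    """

    def __init__(self, objects, labels, lr=0.1, beta=2.0, lateral_penalty=0.05):
        self.labels = list(labels)
        self.lr = lr
        self.beta = beta
        self.lateral_penalty = lateral_penalty
        # Association strengths start uniform across all pairings.
        self.w = {o: {l: 1.0 for l in self.labels} for o in objects}

    def choose_label(self, obj):
        """Sample a label with probability proportional to w ** beta."""
        weights = [self.w[obj][l] ** self.beta for l in self.labels]
        r = random.random() * sum(weights)
        for label, wt in zip(self.labels, weights):
            r -= wt
            if r <= 0:
                return label
        return self.labels[-1]

    def observe(self, obj, label):
        """Process one (object, label) pairing from the (possibly
        inconsistent) input source."""
        for l in self.labels:
            if l == label:
                self.w[obj][l] += self.lr          # strengthen observed pairing
            else:
                # Implicit negative evidence: competitors lose strength,
                # clamped so weights stay positive.
                self.w[obj][l] = max(1e-6, self.w[obj][l] - self.lateral_penalty)

# Example: a 70/30 inconsistent source pairing the object "ball"
# mostly with "dax" but sometimes with "wug".
learner = AssociationLearner(["ball"], ["dax", "wug"])
for _ in range(10):
    for _ in range(7):
        learner.observe("ball", "dax")
    for _ in range(3):
        learner.observe("ball", "wug")
```

With `beta = 2.0`, the resulting choice probability for the majority label exceeds its 70% input frequency, i.e., the simulated learner regularizes; setting `beta` below 1 would instead produce undermatching.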