Abstract

Robust Interpretive Parsing is a method for learning hidden structure with error-driven learning algorithms (Tesar and Smolensky 1998, 2000). When an algorithm makes an error while learning a word, it computes what it considers to be the structural representation of that word (the “target parse”) and changes its grammar accordingly. Among the potential directions of grammar change, it chooses the one that best satisfies the current constraint ranking. This choice is problematic, however, because the current constraint ranking is guaranteed to be erroneous: had it not been erroneous, the error would not have occurred in the first place. Although this problem has been recognized conceptually by Jarosz (2013), it has not been demonstrated in actual learning simulations. I present evidence from new learning simulations that choosing the target parse on the basis of the constraint ranking can indeed lead to learning failure. I then suggest an alternative method of choosing the target parse, one that selects the target requiring the fewest rerankings to accommodate; in other words, it rewards economical change. While this alternative method does not lead to a drastic improvement in performance, it does result in more efficient convergence.
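
To make the contrast between the two selection strategies concrete, the sketch below is a minimal Python illustration under simplifying assumptions: candidate target parses (and the grammar's current, erroneous output) are represented as constraint-violation vectors over a strict ranking, and the “amount of reranking” a target would require is approximated by counting the constraints that currently prefer the erroneous output and would therefore have to be demoted. The data representation, the function names, and this counting heuristic are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: the data structures and the demotion-counting
# heuristic are assumptions, not the method described in the paper.

def choose_by_ranking(interpretations, ranking):
    """Standard RIP-style choice: among the parses consistent with the overt
    form, pick the one that best satisfies the current ranking (compare
    violation counts constraint by constraint, highest-ranked first)."""
    return min(interpretations,
               key=lambda p: tuple(p["violations"][c] for c in ranking))

def demotions_needed(target, current_output, ranking):
    """Rough proxy for the amount of reranking needed to make `target` beat
    the grammar's current (erroneous) output: count the constraints that
    prefer the current output and would have to be demoted."""
    return sum(1 for c in ranking
               if target["violations"][c] > current_output["violations"][c])

def choose_by_economy(interpretations, current_output, ranking):
    """Alternative choice: the target parse whose adoption requires the
    fewest demotions under the current grammar (economical change)."""
    return min(interpretations,
               key=lambda p: demotions_needed(p, current_output, ranking))

# Toy example: the parse favored by the current ranking ("A") is not the
# parse that would be cheapest to accommodate ("B").
ranking = ["C1", "C2", "C3"]                      # C1 >> C2 >> C3
current_output = {"parse": "output",
                  "violations": {"C1": 0, "C2": 0, "C3": 0}}
interpretations = [
    {"parse": "A", "violations": {"C1": 0, "C2": 1, "C3": 1}},
    {"parse": "B", "violations": {"C1": 1, "C2": 0, "C3": 0}},
]
print(choose_by_ranking(interpretations, ranking)["parse"])                  # A
print(choose_by_economy(interpretations, current_output, ranking)["parse"])  # B
```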
