Abstract

Incremental parsing is important in natural language processing and psycholinguistics because of its cognitive plausibility. Modeling the associated cognitive data structures, and their dynamics, can lead to a better understanding of the human parser. In earlier work, we introduced a recursive neural network (RNN) capable of performing syntactic ambiguity resolution in incremental parsing. In this paper, we report a systematic analysis of the network's behavior that yields important insights into the kind of information exploited to resolve different forms of ambiguity. For attachment ambiguities, in which a new phrase can be attached at more than one point in the syntactic left context, we found that learning from examples allows the location of the attachment point to be predicted with high accuracy, whereas discrimination among alternative syntactic structures sharing the same attachment point is only slightly better than a decision based purely on frequencies. We also introduce several new ideas that enhance the architectural design, obtaining significant improvements in prediction accuracy: up to 25% error reduction on the dataset used in previous work. Finally, we report large-scale experiments on the entire Wall Street Journal section of the Penn Treebank. The best prediction accuracy of the model on this large dataset is 87.6%, a relative error reduction of more than 50% compared to previous results.
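To make the task concrete, the following minimal Python sketch illustrates the general technique the abstract refers to: a recursive neural network composes vector representations of candidate parse trees bottom-up, and a scorer ranks the alternative attachments of a new phrase into the left context. All names, dimensions, and the composition function here are illustrative assumptions, not the architecture actually evaluated in the paper.

    # Illustrative sketch of recursive-network attachment ranking.
    # Hypothetical weights and dimensions; not the paper's model.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 16                                        # hypothetical hidden size
    W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))  # composition weights
    b = np.zeros(DIM)
    u = rng.normal(scale=0.1, size=DIM)             # scoring vector

    def encode(tree):
        """Recursively encode a binary tree; leaves are DIM-d vectors."""
        if isinstance(tree, np.ndarray):            # leaf embedding
            return tree
        left, right = tree
        child = np.concatenate([encode(left), encode(right)])
        return np.tanh(W @ child + b)

    def score(tree):
        """Scalar plausibility score for one candidate attachment."""
        return float(u @ encode(tree))

    # Toy example: two candidate trees for attaching a new phrase p,
    # either low (under the most recent constituent) or high (at the root).
    a, c, p = (rng.normal(size=DIM) for _ in range(3))
    low_attach = (a, (c, p))
    high_attach = ((a, c), p)
    best = max([low_attach, high_attach], key=score)
    print("preferred attachment:", "low" if best is low_attach else "high")

In the setting described by the abstract, the alternatives correspond to the possible attachment points in the incremental left-context tree, and the highest-scoring candidate is selected.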
