Abstract
This paper describes a new exponential language model that decomposes the model parameters into one or more low-rank matrices that learn regularities in the training data and one or more sparse matrices that learn exceptions (e.g., keywords). The low-rank matrices induce continuous-space representations of words and histories. The sparse matrices learn multiword lexical items and topic/domain idiosyncrasies. This model generalizes the standard ℓ1-regularized exponential language model and has an efficient accelerated first-order training algorithm. Language modeling experiments show that the approach is useful in scenarios with limited training data, including low-resource languages and domain adaptation.
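The abstract does not spell out the optimization, but the low-rank-plus-sparse decomposition it describes can be sketched with standard proximal operators: singular-value soft-thresholding for a nuclear-norm penalty on the low-rank part and elementwise soft-thresholding for an ℓ1 penalty on the sparse part, wrapped in FISTA-style acceleration. The sketch below is an illustration under those assumptions, not the authors' algorithm; it substitutes a toy least-squares loss for the exponential language model's log-loss, and all function names (e.g., `low_rank_plus_sparse_fit`) are hypothetical.

```python
import numpy as np

def svd_soft_threshold(M, tau):
    """Proximal operator of the nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    """Proximal operator of the elementwise l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_plus_sparse_fit(grad_loss, shape, lam_nuc, lam_l1, step, iters=200):
    """Accelerated proximal gradient (FISTA-style) for
       min_{L,S} loss(L + S) + lam_nuc * ||L||_* + lam_l1 * ||S||_1,
    where grad_loss returns the gradient of the smooth loss at Theta = L + S."""
    m, n = shape
    L, S = np.zeros((m, n)), np.zeros((m, n))
    L_y, S_y = L.copy(), S.copy()  # momentum ("lookahead") iterates
    t = 1.0
    for _ in range(iters):
        G = grad_loss(L_y + S_y)  # both blocks share the same smooth gradient
        L_new = svd_soft_threshold(L_y - step * G, step * lam_nuc)
        S_new = soft_threshold(S_y - step * G, step * lam_l1)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        beta = (t - 1.0) / t_new  # Nesterov momentum weight
        L_y = L_new + beta * (L_new - L)
        S_y = S_new + beta * (S_new - S)
        L, S, t = L_new, S_new, t_new
    return L, S

# Toy usage: recover a low-rank + sparse matrix from noisy observations,
# with a least-squares loss standing in for the exponential LM log-loss.
rng = np.random.default_rng(0)
truth = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 60))  # low rank
truth[rng.random(truth.shape) < 0.02] += 10.0                        # sparse spikes
obs = truth + 0.1 * rng.standard_normal(truth.shape)
L, S = low_rank_plus_sparse_fit(lambda T: T - obs, obs.shape,
                                lam_nuc=1.0, lam_l1=1.0, step=0.5)
print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
      "nnz(S) =", int(np.count_nonzero(S)))
```

In this setup the low-rank component absorbs the broad regularities while the ℓ1-penalized component keeps only a few large deviations, mirroring the paper's split between continuous-space regularities and keyword-like exceptions; the step size is chosen conservatively (at most the inverse Lipschitz constant of the joint smooth gradient) so the accelerated iterates converge.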