Abstract

Language modeling is a central task in natural language processing: a language model defines a probability distribution over sequences of words and provides a distinct representation for each occurrence of a word. A capable language model can therefore differentiate the subtle nuances of language and capture both its syntax and semantics, and it enables the learning of high-level representations. In addition, it serves as a pretrained starting point whose knowledge can be transferred to various other language processing tasks for a deeper and more complete understanding of language. This work proposes an innovative Long Short-Term Memory (LSTM) neural network whose implementation relies on an attention mechanism, used as a measure of alignment between the output and the input sequence, together with a corresponding coverage optimization mechanism. The coverage mechanism informs the model, at each step, about the outputs already produced in previous steps. The system's design parameters were adapted and evaluated on multidimensional and challenging data sets; we considered both parametric and heuristic procedures for finding the optimal combination of hyper-parameters. Experimental results demonstrate the quality of the proposed system and reveal essential directions for the design of similar systems.
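To illustrate the general idea of attention with a coverage term (the abstract does not give the paper's exact formulation, so the scoring function, the penalty weight `w_cov`, and all variable names below are illustrative assumptions), here is a minimal NumPy sketch of one decoding step: alignment scores over the input positions are penalized by the attention mass those positions have already received, which is the accumulated coverage vector.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def coverage_attention_step(query, keys, coverage, w_cov=1.0):
    """One decoding step of attention with a coverage penalty.

    Illustrative simplification (not the paper's exact model): the
    alignment score is a dot product between the decoder query and each
    encoder key, reduced by w_cov times the coverage each input position
    has already accumulated in previous steps.
    """
    scores = keys @ query - w_cov * coverage   # penalize already-attended inputs
    weights = softmax(scores)                  # alignment distribution over inputs
    context = weights @ keys                   # context vector = weighted sum of keys
    coverage = coverage + weights              # accumulate coverage for the next step
    return context, weights, coverage

# Usage: 4 input positions, hidden size 3, coverage starts at zero.
rng = np.random.default_rng(0)
keys = rng.standard_normal((4, 3))
query = rng.standard_normal(3)
cov = np.zeros(4)
ctx, w, cov = coverage_attention_step(query, keys, cov)
```

After each step the attention weights sum to one and are added into `cov`, so inputs that dominated earlier steps are progressively down-weighted, which is the intuition behind informing the model of outputs already produced.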
