Abstract

Natural Language Processing (NLP) involves the computational processing and understanding of human languages. With the increase in computational power and the availability of large datasets in many languages, deep learning models are increasingly being applied to NLP tasks. Deep learning methods use multiple processing layers to learn hierarchical representations of data, and they have produced excellent results on many NLP tasks. This paper reviews the important deep learning models and methods applied to natural language problems. In particular, the Convolutional Neural Network, Recurrent Neural Network, Long Short-Term Memory, Gated Recurrent Unit, and Recursive Neural Network are described. Their advantages and suitability for various natural language processing applications, such as text classification and sentiment analysis, are also reviewed.
