Abstract
Traditional methods for named entity recognition (NER) require heavy feature engineering to achieve high performance. We propose a novel neural network architecture for NER that learns word features automatically, without feature engineering. Our approach takes word embeddings as input, feeds them into a bidirectional long short-term memory (B-LSTM) network to model the context within a sentence, and outputs the NER results. This study extends the neural network language model through the B-LSTM, which outperforms other deep neural network models on NER tasks. Experimental results show that the B-LSTM with word embeddings trained on a large corpus achieves the highest F-score of 0.9247, outperforming state-of-the-art methods based on feature engineering.
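The pipeline described in the abstract (word embeddings in, a forward and a backward LSTM pass over the sentence, concatenated hidden states projected to per-token tag scores) can be sketched roughly as follows. This is a minimal NumPy illustration of the general B-LSTM tagging idea, not the paper's implementation; all names, dimensions, and the random initialization are assumptions for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single-direction LSTM cell (hypothetical minimal implementation)."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, candidate,
        # and output gates, applied to [x; h_prev].
        self.W = rng.normal(scale=0.1, size=(4 * hid_dim, in_dim + hid_dim))
        self.b = np.zeros(4 * hid_dim)
        self.hid_dim = hid_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hid_dim
        i = sigmoid(z[0:H])        # input gate
        f = sigmoid(z[H:2*H])      # forget gate
        g = np.tanh(z[2*H:3*H])    # candidate cell state
        o = sigmoid(z[3*H:4*H])    # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def bilstm_tag_scores(embeddings, fwd, bwd, W_out, b_out):
    """Run forward and backward LSTM passes over a sentence of word
    embeddings; concatenate both directions' hidden states per token
    and project them to tag scores (shape: sentence_len x n_tags)."""
    T = len(embeddings)
    H = fwd.hid_dim
    h, c = np.zeros(H), np.zeros(H)
    fwd_states = []
    for t in range(T):                    # left-to-right context
        h, c = fwd.step(embeddings[t], h, c)
        fwd_states.append(h)
    h, c = np.zeros(H), np.zeros(H)
    bwd_states = [None] * T
    for t in reversed(range(T)):          # right-to-left context
        h, c = bwd.step(embeddings[t], h, c)
        bwd_states[t] = h
    return np.stack([
        W_out @ np.concatenate([fwd_states[t], bwd_states[t]]) + b_out
        for t in range(T)
    ])

# Tiny usage example with random embeddings standing in for a
# pretrained word-embedding lookup.
emb_dim, hid, n_tags = 5, 4, 3
rng = np.random.default_rng(1)
sentence = [rng.normal(size=emb_dim) for _ in range(6)]  # 6 tokens
fwd = LSTMCell(emb_dim, hid, seed=0)
bwd = LSTMCell(emb_dim, hid, seed=1)
W_out = rng.normal(scale=0.1, size=(n_tags, 2 * hid))
b_out = np.zeros(n_tags)
scores = bilstm_tag_scores(sentence, fwd, bwd, W_out, b_out)
print(scores.shape)  # one score vector per token: (6, 3)
```

In a trained tagger, each row of `scores` would be turned into a tag prediction (e.g. via argmax or softmax over the tag set); here the weights are random, so only the shapes are meaningful.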