Abstract

Intent Classification (IC) and Named Entity Recognition (NER) are arguably the two main components needed to build a Natural Language Understanding (NLU) engine, which in turn is a core component of conversational agents. The two tasks are closely intertwined, and the entities in an utterance are often connected to its underlying intent. Prior work has primarily modelled IC and NER as two separate units, which leads to error propagation and thus sub-optimal performance. In this paper, we propose a simple yet effective novel framework for NLU in which the parameters of the IC and NER models are jointly trained in a consolidated parameter space. Semantic text representations are obtained from popular pre-trained contextual language models, which are fine-tuned for our task, and these parameters are propagated to the other deep neural layers in our framework, yielding a faithful, unified modelling of the IC and NER parameters. As a result, parameters are shared faithfully throughout training, leading to more coherent learning. Experiments on two public datasets, ATIS and SNIPS, show that our model outperforms other methods by a noticeable margin. On SNIPS, we obtain a 1.42% improvement in NER F1 score and a 1% improvement in intent accuracy; on ATIS, we achieve a 1.54% improvement in intent accuracy. We also present qualitative results to showcase the effectiveness of our model.
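The abstract describes a single model whose IC and NER heads sit on top of one shared, fine-tuned contextual encoder. The paper's code is not reproduced here; the following is only a minimal sketch of that kind of joint architecture, assuming a BERT-style encoder from the HuggingFace `transformers` library. The class name `JointICNER` and the label-space sizes are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a joint IC + NER model that shares one
# pre-trained contextual encoder, so gradients from both tasks update the same
# consolidated parameter space during fine-tuning.
import torch.nn as nn
from transformers import AutoModel

class JointICNER(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 num_intents=7, num_entity_tags=72):  # placeholder label-space sizes
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)   # shared, fine-tuned
        hidden = self.encoder.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)        # utterance-level IC
        self.ner_head = nn.Linear(hidden, num_entity_tags)       # token-level NER (BIO tags)

    def forward(self, input_ids, attention_mask,
                intent_labels=None, ner_labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state                     # (batch, seq_len, hidden)
        intent_logits = self.intent_head(token_states[:, 0])     # [CLS] representation
        ner_logits = self.ner_head(token_states)                 # one tag per token

        loss = None
        if intent_labels is not None and ner_labels is not None:
            ce = nn.CrossEntropyLoss(ignore_index=-100)
            # Joint objective: both losses back-propagate into the shared encoder.
            loss = ce(intent_logits, intent_labels) + \
                   ce(ner_logits.reshape(-1, ner_logits.size(-1)), ner_labels.reshape(-1))
        return intent_logits, ner_logits, loss
```

Because both task losses flow back through the same encoder, an update driven by one task also shapes the representations used by the other, which is the kind of consolidated parameter sharing the abstract refers to.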

Highlights

  • As conversational agents become more popular, it is vital to make them more effective

  • We propose a simple yet effective novel framework for Natural Language Understanding (NLU) where the parameters of the Intent Classification (IC) and the Named Entity Recognition (NER) models are jointly trained in a consolidated parameter space

  • Text semantic representations are obtained from popular pre-trained contextual language models, which are fine-tuned for our task, and these parameters are propagated to other deep neural layers in our framework leading to a faithful unified modelling of the IC and NER parameters


Introduction

As conversational agents become more popular, it is vital to make them more effective. The performance of such agents predominantly relies on their ability to understand what the user says, through the use of a Natural Language Understanding (NLU) engine, so that the agent can act in a meaningful way. An NLU engine aims to form a semantic frame that captures the meaning of user utterances or information needs [1]. To this end, each NLU engine performs two main tasks, namely Intent Classification (IC) and Named Entity Recognition (NER) [2]. Named entities such as “device” are annotated using the Beginning, Inside, Outside (BIO) text-segment notation, as illustrated in the sketch below.
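To make the two tasks concrete, the following illustrative (hypothetical) annotation shows how a single utterance carries one intent label plus a per-token BIO tag sequence; the intent name and tag set are invented for the example and are not taken from ATIS or SNIPS.

```python
# Illustrative NLU annotation: one utterance, one intent, one BIO tag per token.
utterance = ["turn", "off", "the", "kitchen", "lights"]
intent    = "SwitchDeviceOff"                        # Intent Classification target
bio_tags  = ["O", "O", "O", "B-room", "B-device"]    # NER target in BIO notation
# "B-" marks the beginning of an entity span, "I-" marks a token inside
# (continuing) a span, and "O" marks a token outside any entity.
assert len(utterance) == len(bio_tags)
```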


