Abstract

Intent detection and slot filling are two fundamental tasks in spoken language understanding (SLU). Motivated by the fact that the intent and slots in a user utterance are strongly related, joint models that handle both tasks in a single framework have become a predominant choice in SLU research. Most existing joint models build two separate decoders on top of a shared-weight encoder or exploit intent information to detect slots; some transfer information between the two tasks only implicitly. In this paper, we propose a bidirectional joint model for SLU that explicitly incorporates intent information into slot filling and slot information into intent detection. Specifically, we first predict a soft intent signal, which is fed into a biaffine classifier to recognize slots. Slot features are then employed along with the utterance representation to predict the final intent. We also introduce a loss function that combines three terms: soft intent detection, final intent detection, and slot filling. Experimental results on three benchmark datasets, ATIS, Snips, and PhoATIS, show that our model outperforms previous state-of-the-art models on both tasks, with relative error reductions ranging from 6% to 22%.
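To make the intent-to-slot direction concrete, the sketch below shows how a soft intent vector can condition per-token slot scoring through a biaffine layer of the standard form h_intentᵀ U h_token + W [h_intent; h_token] + b. All dimensions, parameter names, and the projection of the intent distribution to a signal vector are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8          # hidden size of per-token encoder outputs (assumed)
k = 4          # dimension of the soft intent signal (assumed)
num_slots = 5  # number of slot labels (assumed)

# Hypothetical biaffine-classifier parameters.
U = rng.standard_normal((num_slots, k, d)) * 0.1   # bilinear term
W = rng.standard_normal((num_slots, k + d)) * 0.1  # linear term on [intent; token]
b = np.zeros(num_slots)                            # per-label bias

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def biaffine_slot_scores(intent_signal, token_reps):
    """Score every token against the soft intent signal for each slot label.

    intent_signal: (k,) soft intent vector.
    token_reps:    (T, d) per-token encoder outputs.
    Returns (T, num_slots) slot logits.
    """
    # Bilinear interaction: intent_k * U[label, k, d] * token_d for each token.
    bilinear = np.einsum('k,lkd,td->tl', intent_signal, U, token_reps)
    # Linear term over the concatenated [intent; token] pair.
    T = token_reps.shape[0]
    pairs = np.concatenate([np.tile(intent_signal, (T, 1)), token_reps], axis=1)
    linear = pairs @ W.T
    return bilinear + linear + b

tokens = rng.standard_normal((6, d))            # a 6-token utterance
soft_intent = softmax(rng.standard_normal(k))   # stand-in soft intent signal
logits = biaffine_slot_scores(soft_intent, tokens)
slot_probs = softmax(logits, axis=-1)
print(slot_probs.shape)  # (6, 5): one slot distribution per token
```

In the reverse direction, the resulting slot features would be pooled and combined with the utterance representation to predict the final intent, which is what allows the model's three loss terms (soft intent, final intent, slot filling) to be trained jointly.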
