Abstract

Intent detection (ID) and slot filling (SF) are important components of spoken language understanding (SLU) in a dialogue system. The most widely used approach is the pipeline manner, which first detects the user's intent and then labels the slots. To address error propagation, some researchers combine the two tasks in a joint ID and SF model. However, joint models usually perform well on only one of the tasks, depending on the value of the trade-off parameter. We therefore propose an encoder-decoder model with a new tag scheme that unifies the two tasks into a single sequence labeling task. In our model, the slot filling process can receive intent information, and performance on words with multiple tags is improved. Moreover, we present a length-variable attention mechanism that can selectively look at a subset of the source sentence in the sequence labeling model. Experimental results on two datasets show that the proposed model with length-variable attention outperforms other joint models. In addition, our method automatically finds the balance between the two tasks and achieves better overall performance.
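To make the unified tag scheme concrete, the sketch below shows one plausible reading of the idea: the sentence-level intent is attached as the tag of a final marker token, so a single sequence tagger predicts both the slot labels and the intent. This is a minimal illustration under that assumption, not necessarily the paper's exact scheme; the tokens and ATIS-style labels are only examples.

    # Hypothetical unified tag scheme: BIO slot tags on the real tokens,
    # with the intent attached as the tag of a sentence-final marker.
    tokens = ["book", "a", "flight", "to", "boston", "<EOS>"]
    tags   = ["O", "O", "O", "O", "B-toloc.city_name", "atis_flight"]

    def split_unified_tags(tokens, tags):
        """Recover the intent and the slot tags from a unified tag sequence."""
        assert tokens[-1] == "<EOS>"
        intent = tags[-1]       # intent is the tag of the <EOS> marker
        slot_tags = tags[:-1]   # the remaining tags label the real tokens
        return intent, slot_tags

    intent, slots = split_unified_tags(tokens, tags)
    print(intent)  # atis_flight
    print(slots)   # ['O', 'O', 'O', 'O', 'B-toloc.city_name']

Under such a scheme, any standard sequence labeling decoder can produce both outputs at once, which is what lets the two tasks share one model without a trade-off parameter.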
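The length-variable attention can be read as a form of local attention whose window size is not fixed. The sketch below assumes a dot-product scorer and an externally supplied per-step window length; both are assumptions for illustration, not details stated in the abstract.

    import numpy as np

    def length_variable_attention(query, keys, center, window):
        # Attend only to source positions in [center - window, center + window];
        # `window` may vary per decoding step, hence "length-variable".
        lo = max(0, center - window)
        hi = min(len(keys), center + window + 1)
        scores = keys[lo:hi] @ query             # dot-product alignment scores
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        return weights @ keys[lo:hi]             # context vector over the subset

    # Usage: attend to a 5-token window of 10 encoder states of dimension 8.
    keys = np.random.randn(10, 8)
    query = np.random.randn(8)
    context = length_variable_attention(query, keys, center=4, window=2)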
