Abstract

Dialogue analysis comprises Dialogue Act (DA) analysis and Adjacent Pair Dependency (APD) analysis. A DA represents the function of a speaker's utterance, while an APD represents the relation between adjacent utterances. DA analysis is a key challenge in dialogue understanding, and contextual information is crucial for it. Because APD captures the relation between adjacent pairs, it can supply the corresponding preconditions and background context for dialogue analysis. However, previous studies have ignored the relation between DA and APD analysis, and no well-designed dialogue model addresses this issue. Based on an annotated corpus of Chinese daily conversation, we propose a multi-task learning framework. In this framework, the DA task extracts content information from a single utterance and classifies its function, while the APD task classifies the relation between adjacent utterances. In this way, APD provides DA with additional context information, and DA in turn helps train the APD parameters. Our experimental results demonstrate that the multi-task learning model improves the accuracy of both the DA and APD tasks compared with state-of-the-art DA and APD analysis models.
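To make the shared-parameter idea concrete, below is a minimal sketch of this kind of multi-task setup: a shared utterance encoder feeds a DA head (labels a single utterance) and an APD head (labels the relation of an adjacent utterance pair), and the two losses are summed so gradients from both tasks update the shared encoder. The abstract does not specify the authors' architecture, so all module choices, dimensions, and label counts here are illustrative assumptions.

# Sketch only: a shared encoder with DA and APD heads trained jointly.
# Architecture details (BiGRU, mean pooling, label counts) are assumptions,
# not the paper's actual model.
import torch
import torch.nn as nn

class SharedDialogueModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=256,
                 num_da_labels=12, num_apd_labels=6):
        super().__init__()
        # Shared parameters: embedding + BiGRU utterance encoder.
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True,
                              bidirectional=True)
        # Task-specific heads.
        self.da_head = nn.Linear(2 * hid_dim, num_da_labels)    # single utterance
        self.apd_head = nn.Linear(4 * hid_dim, num_apd_labels)  # adjacent pair

    def encode(self, tokens):
        # tokens: (batch, seq_len) -> fixed-size utterance vector via mean pooling.
        h, _ = self.encoder(self.embed(tokens))
        return h.mean(dim=1)

    def forward(self, utt, prev_utt):
        cur = self.encode(utt)
        prev = self.encode(prev_utt)
        da_logits = self.da_head(cur)                                # DA: function of current utterance
        apd_logits = self.apd_head(torch.cat([prev, cur], dim=-1))   # APD: relation of the pair
        return da_logits, apd_logits

# Joint training step: both losses backpropagate through the shared encoder,
# so each task can act as context/regularization for the other, which is the
# intuition the abstract appeals to. Data here is random and purely illustrative.
model = SharedDialogueModel()
criterion = nn.CrossEntropyLoss()
utt = torch.randint(1, 10000, (8, 20))
prev_utt = torch.randint(1, 10000, (8, 20))
da_gold = torch.randint(0, 12, (8,))
apd_gold = torch.randint(0, 6, (8,))
da_logits, apd_logits = model(utt, prev_utt)
loss = criterion(da_logits, da_gold) + criterion(apd_logits, apd_gold)
loss.backward()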

