Abstract
In this work, we propose an adversarial learning method for reward estimation in reinforcement learning (RL) based task-oriented dialog models. Most current RL-based task-oriented dialog systems require access to a reward signal from either user feedback or user ratings. Such user ratings, however, may not always be consistent or available in practice. Furthermore, online dialog policy learning with RL typically requires a large number of queries to users and thus suffers from a sample efficiency problem. To address these challenges, we propose an adversarial learning method that learns dialog rewards directly from dialog samples. These rewards are then used to optimize the dialog policy with policy gradient based RL. In an evaluation in a restaurant search domain, we show that the proposed adversarial dialog learning method achieves a higher dialog success rate compared to strong baseline methods. We further discuss the covariate shift problem in online adversarial dialog learning and show how it can be addressed with partial access to user feedback.
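At a high level, the method alternates between (i) training a discriminator to distinguish successful human dialogs from dialogs generated by the agent and (ii) updating the dialog policy with policy gradients, using the discriminator score as the dialog reward. The snippet below is a minimal NumPy sketch of that loop, assuming toy feature sizes, a logistic-regression discriminator, random stand-in dialog states, and a linear softmax policy; these are illustrative assumptions, not the neural dialog and discriminator architectures used in the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, FEAT_DIM = 8, 4, 8   # toy sizes, assumed for illustration

# Toy softmax dialog policy pi(a | s), parameterized by a single weight matrix.
policy_W = rng.normal(scale=0.1, size=(STATE_DIM, N_ACTIONS))
# Toy discriminator: logistic regression over a fixed-size dialog feature vector.
disc_w = np.zeros(FEAT_DIM)

def policy_probs(state):
    logits = state @ policy_W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def discriminator_score(dialog_feat):
    """D(dialog) in (0, 1): estimated probability that the dialog is a successful human dialog."""
    return 1.0 / (1.0 + np.exp(-dialog_feat @ disc_w))

def generate_dialog(max_turns=5):
    """Roll out the policy; return per-turn (state, action, probs) and a dialog-level feature."""
    turns, feat = [], np.zeros(FEAT_DIM)
    for _ in range(max_turns):
        state = rng.normal(size=STATE_DIM)      # stand-in for a dialog state tracker
        probs = policy_probs(state)
        action = int(rng.choice(N_ACTIONS, p=probs))
        turns.append((state, action, probs))
        feat += state / max_turns               # stand-in for a learned dialog encoder
    return turns, feat

def update_discriminator(real_feats, fake_feats, lr=0.1):
    """One ascent step on the log-likelihood: human dialogs -> label 1, agent dialogs -> label 0."""
    global disc_w
    grad = np.zeros(FEAT_DIM)
    for f in real_feats:
        grad += (1.0 - discriminator_score(f)) * f
    for f in fake_feats:
        grad += (0.0 - discriminator_score(f)) * f
    disc_w += lr * grad / (len(real_feats) + len(fake_feats))

def update_policy(turns, reward, lr=0.01):
    """REINFORCE: increase log-probabilities of the taken actions, scaled by the dialog reward."""
    global policy_W
    for state, action, probs in turns:
        grad_logp = -np.outer(state, probs)     # d log softmax(a | s) / d policy_W
        grad_logp[:, action] += state
        policy_W += lr * reward * grad_logp

# Alternating adversarial loop; "human" dialogs are drawn from a shifted toy distribution.
for step in range(200):
    turns, fake_feat = generate_dialog()
    real_feat = rng.normal(loc=0.5, size=FEAT_DIM)   # placeholder for a successful human dialog
    update_discriminator([real_feat], [fake_feat])
    reward = discriminator_score(fake_feat)          # discriminator output used as the dialog reward
    update_policy(turns, reward)
```

The key design point illustrated here is that the agent never needs an explicit user rating during this loop: the discriminator output plays the role of the reward signal in the policy gradient update.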
Highlights
Task-oriented dialog systems are designed to assist users in completing daily tasks, such as making reservations and providing customer support.
We discuss the potential issue of covariate shift during interactive adversarial learning and show how we address it with partial access to user feedback (see the sketch after these highlights).
We investigate the effectiveness of applying adversarial training in learning task-oriented dialog models.
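The covariate shift highlight points at a practical issue in the interactive setting: as the policy improves, the dialogs it produces drift away from the distribution the learned reward model was trained on, so its scores can become unreliable. Below is a hedged sketch of one way partial user feedback could be used in that setting; the feedback rate, the mixing rule, and the helper names (choose_reward, disc_score_fn) are hypothetical illustrations, not the exact procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
FEEDBACK_RATE = 0.1  # assumed fraction of dialogs for which a real user rating is collected

def choose_reward(dialog_feat, user_rating, disc_score_fn):
    """Return (reward, is_ground_truth).

    For a small fraction of dialogs, an explicit success rating is requested from the
    user and used directly as the reward (and, in a full system, to refresh the reward
    model so it tracks the improving policy). Otherwise the learned discriminator score
    is used. Both the rate and the rule here are illustrative assumptions.
    """
    if user_rating is not None and rng.random() < FEEDBACK_RATE:
        return float(user_rating), True
    return float(disc_score_fn(dialog_feat)), False

# Toy usage: a constant discriminator and a dialog the user rated as successful (1.0).
reward, from_user = choose_reward(np.zeros(8), user_rating=1.0, disc_score_fn=lambda f: 0.7)
```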
Summary
Task-oriented dialog systems are designed to assist users in completing daily tasks, such as making reservations and providing customer support. Compared to chit-chat systems, which are usually modeled with single-turn context-response pairs (Li et al., 2016; Serban et al., 2016), task-oriented dialog systems (Young et al., 2013; Williams et al., 2017) involve retrieving information from external resources and reasoning over multiple dialog turns. This makes it especially important for a system to be able to learn interactively from users. Online dialog policy learning with RL usually suffers from a sample efficiency issue (Su et al., 2017), as it requires an agent to make a large number of feedback queries to users.