Abstract

In the past few years, transformer-based models have grown increasingly popular as they achieved new state-of-the-art performance on several natural language processing tasks. Because these models are often extremely large, however, their use within embedded devices may not be feasible. In this work, we look at one specific application, retrieval-based dialogue systems, that poses additional difficulties when deployed in resource-constrained environments. Research on building dialogue systems able to engage in natural-sounding conversation with humans has attracted increasing attention in recent years. This has led to the rise of commercial conversational agents situated on embedded devices, such as Google Home, Alexa and Siri, that enable users to interface with a wide range of underlying functionalities in a natural and seamless manner. In part due to memory and computational power constraints, these agents require frequent communication with a server to process users' queries. This communication can act as a bottleneck, causing delays and bringing the system to a halt should the network connection be lost or unavailable. We propose a new framework for hardware-aware retrieval-based dialogue systems built on the Dual-Encoder architecture and coupled with a clustering method that groups candidates belonging to the same conversation, reducing storage capacity and computational power requirements.
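To make the idea concrete, the sketch below shows the general shape of such a pipeline: candidate responses are pre-encoded and clustered offline, and at inference time a dialogue context is scored only against the candidates in the closest cluster. The toy encoder, the choice of k-means, the number of clusters, and the dot-product scoring are assumptions for illustration only, not the paper's exact method.

```python
# Illustrative sketch of Dual-Encoder retrieval over clustered candidates.
# The encoder below is a stand-in for a real (e.g. transformer-based) encoder.
import numpy as np
from sklearn.cluster import KMeans

EMBED_DIM = 64
rng = np.random.default_rng(0)
_projection = rng.normal(size=(10_000, EMBED_DIM))  # toy hashing-based embedding table

def encode(text: str) -> np.ndarray:
    """Placeholder encoder: hash tokens into a shared space and average their vectors."""
    idx = [hash(tok) % _projection.shape[0] for tok in text.lower().split()]
    vec = _projection[idx].mean(axis=0)
    return vec / np.linalg.norm(vec)

# Offline step: encode the candidate pool and cluster it so that, at inference time,
# only a fraction of the candidates needs to be stored in memory and compared.
candidates = [
    "Sure, turning on the living room lights.",
    "The lights are now off.",
    "Today's forecast is sunny with a high of 24 degrees.",
    "Expect rain later this afternoon.",
]
cand_vecs = np.stack([encode(c) for c in candidates])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cand_vecs)

def retrieve(context: str) -> str:
    """Online step: pick the closest cluster, then score its members by dot product."""
    q = encode(context)
    best_cluster = int(np.argmax(kmeans.cluster_centers_ @ q))
    members = np.where(kmeans.labels_ == best_cluster)[0]
    scores = cand_vecs[members] @ q
    return candidates[members[int(np.argmax(scores))]]

print(retrieve("what's the weather like today"))
```

In this sketch the offline clustering trades a small amount of retrieval accuracy for a large reduction in the number of on-device comparisons, which is the kind of storage/computation trade-off the abstract refers to.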
