Abstract
Contemporary conversational chatbots are user-friendly and can simulate human conversation. However, they cannot evaluate large, comprehensive datasets to answer a user's question. In contrast, a state-of-the-art Question Answering Model (QAM) trained on a large dataset can answer questions within a given context, and sometimes without context. This research designed a QAM that improves the customer's experience when using a chatbot for reading-comprehension tasks by combining the BERT model with Google Dialogflow: the BERT model predicts accurate answers using the reading-comprehension Conversational Question Answering (CoQA) dataset, while Google Dialogflow simulates human-like interaction. The QAM extends the conventional use of Google Dialogflow, reaping the benefits of integrating it with the BERT model. The BERT model and the chatbot interact through a webhook and an API: when a user interacts with the Dialogflow chatbot, it matches an intent and sends a request to the BERT model; the BERT model then returns an answer to the chatbot, which responds to the end-user. The resulting QAM provides accurate responses to end-users for questions based on large datasets.
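The intent-matching and webhook flow described above can be sketched as a minimal fulfillment handler. This is an illustrative assumption, not the paper's implementation: `answer_with_bert` is a hypothetical stand-in for the fine-tuned BERT model service, and only the Dialogflow webhook request/response shape (`queryResult.queryText` in, `fulfillmentText` out) follows the documented Dialogflow format.

```python
# Hypothetical sketch of the webhook flow from the abstract: Dialogflow
# matches an intent, POSTs a webhook request, and the handler replies with
# the QA model's answer as fulfillment text.

def answer_with_bert(question: str, context: str) -> str:
    """Stand-in for the BERT QA model fine-tuned on CoQA.
    A real deployment would query the served model over its API."""
    # Stubbed for illustration; a real model would extract the answer
    # span from `context`.
    return "BERT-predicted answer to: " + question

def handle_webhook(request_json: dict, context: str) -> dict:
    """Extract the user's question from a Dialogflow webhook request
    and wrap the model's answer in a fulfillment response."""
    question = request_json["queryResult"]["queryText"]
    answer = answer_with_bert(question, context)
    # Dialogflow reads `fulfillmentText` and relays it to the end-user.
    return {"fulfillmentText": answer}
```

In a live setup, `handle_webhook` would sit behind an HTTPS endpoint registered as the fulfillment URL in the Dialogflow console, so that each matched intent triggers a round trip to the model.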