Abstract

The capacity for relational interaction between high-level information and reasoning is a defining characteristic of human intelligence. Despite the remarkable progress in artificial intelligence, recent machine reading comprehension (MRC) models still rely heavily on high-dimensional, word-based distributed representations. Since these models employ statistical means to answer questions over complex textual corpora and are evaluated with accuracy-based metrics, there is no guarantee that they learn the required skills. To ensure that MRC models learn the desired skills, explainability has become an emerging requirement. In this paper, we propose an end-to-end natural language reasoning model based on sets of high-level aggregated representations that promote operational explainability. To this end, we propose sequential multi-head attention and a loss regularization function. We present an analysis of the proposed approach on two natural language reasoning oriented question-answering datasets (bAbI and NewsQA).
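The abstract does not specify the architecture, so the following is a minimal sketch under stated assumptions, not the paper's implementation. One plausible reading of "sequential multi-head attention" is a chain of multi-head attention hops in which each hop conditions on the summary produced by the previous hop, as in multi-hop reasoning over bAbI-style stories; the entropy penalty below is likewise an assumed instantiation of the "loss regularization function", encouraging sharp (and hence more interpretable) attention maps. All names here (SequentialAttentionReader, regularized_loss, beta) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequentialAttentionReader(nn.Module):
    """Assumed sketch: attention hops applied in sequence over a memory."""

    def __init__(self, dim: int, num_hops: int = 3, num_heads: int = 4):
        super().__init__()
        self.hops = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_hops)]
        )

    def forward(self, query, memory):
        # query:  (batch, 1, dim)   encoded question
        # memory: (batch, seq, dim) encoded passage/story sentences
        attn_maps = []
        for hop in self.hops:
            # Each hop refines the query using what was read so far.
            query, weights = hop(query, memory, memory)
            attn_maps.append(weights)  # (batch, 1, seq) per hop
        return query.squeeze(1), attn_maps

def regularized_loss(logits, targets, attn_maps, beta: float = 0.1):
    # Cross-entropy answer loss plus an assumed entropy regularizer
    # that pushes each hop's attention toward a peaked, low-entropy
    # distribution over memory slots.
    ce = F.cross_entropy(logits, targets)
    entropy = sum(
        -(w.clamp_min(1e-8).log() * w).sum(-1).mean() for w in attn_maps
    )
    return ce + beta * entropy

# Usage sketch:
# reader = SequentialAttentionReader(dim=64)
# answer_vec, maps = reader(q_enc, story_enc)  # inspect `maps` per hop
```

Exposing the per-hop attention maps is what would make such a model operationally explainable: each hop's distribution over sentences can be read as one step of the reasoning chain.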
