Abstract

The ability to read, reason, and infer lies at the heart of neural reasoning architectures. After all, the ability to perform logical reasoning over language remains a coveted goal of artificial intelligence. To this end, models such as the Turing-complete differentiable neural computer (DNC) boast genuine logical reasoning capabilities, along with the ability to reason beyond simple surface-level pattern matching. In this brief, we present the first probe of the DNC's logical reasoning capabilities, with a focus on text-based question answering (QA). More concretely, we propose a conceptually simple but effective adversarial attack based on metamorphic relations. Our attack reduces the DNC's state-of-the-art accuracy from 100% to 1.5% in the worst case, exposing weaknesses and susceptibilities in modern neural reasoning architectures. We further empirically explore ways to defend against such attacks and demonstrate the utility of our adversarial framework as a simple, scalable method for improving model adversarial robustness.
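
To make the metamorphic-relation idea concrete, the sketch below illustrates one plausible instantiation: since inserting sentences that are irrelevant to the question must not change the correct answer, any prediction flip on such a variant exposes a reasoning failure. This is a minimal sketch, assuming a bAbI-style QA dataset (on which the DNC was originally evaluated) and a model exposing a `predict(story, question)` method; the distractor sentences, function names, and dataset format are illustrative assumptions, not the paper's actual attack.

```python
import random

# Illustrative distractor pool: sentences irrelevant to any bAbI-style
# question, so inserting them should never change the correct answer.
DISTRACTORS = [
    "The sky was clear that afternoon.",
    "A clock ticked quietly in the hallway.",
    "Someone hummed a tune in the distance.",
]

def metamorphic_variant(story_sentences, rng):
    """Return a copy of the story with one answer-irrelevant distractor
    inserted at a random position (the metamorphic transformation)."""
    variant = list(story_sentences)
    variant.insert(rng.randrange(len(variant) + 1), rng.choice(DISTRACTORS))
    return variant

def attack_success_rate(model, dataset, n_variants=5, seed=0):
    """Fraction of originally-solved examples where at least one
    metamorphic variant flips the model's prediction."""
    rng = random.Random(seed)
    flipped, solved = 0, 0
    for story, question, answer in dataset:  # assumed (story, q, a) triples
        if model.predict(story, question) != answer:
            continue  # only attack examples the model already gets right
        solved += 1
        for _ in range(n_variants):
            variant = metamorphic_variant(story, rng)
            if model.predict(variant, question) != answer:
                flipped += 1
                break
    return flipped / max(solved, 1)
```

Under this framing, the defense the abstract alludes to would amount to augmenting training data with such variants (adversarial training), which is what makes the framework cheap to scale.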
