Abstract

The dazzling success of neural networks on natural language processing tasks creates an urgent need to control their behavior with simpler, more direct declarative rules. In this paper, we propose Pat-in-the-Loop as a model for controlling a specific class of syntax-oriented neural networks by adding declarative rules. In Pat-in-the-Loop, distributed tree encoders make parse trees exploitable within neural networks, heat parse trees visualize the activation of parse trees, and parse subtrees serve as declarative rules inside the neural network. Hence, Pat-in-the-Loop is a model for including human control in natural language processing (NLP) neural network (NN) systems that exploit syntactic information; we generically call the human controller Pat. A pilot study on question classification showed that declarative rules representing human knowledge, injected by Pat, can be used effectively in these neural networks to ensure correctness, relevance, and cost-effectiveness.
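The abstract mentions distributed tree encoders that make parse trees usable as neural-network input. The paper's actual encoder is not reproduced here, but the core idea can be illustrated with a minimal sketch: represent a parse tree as nested tuples, assign each node label a random vector, and sum the vectors of all subtrees into one dense vector. All names (`node_vec`, `subtrees`, `encode`) and the label-only approximation are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
_cache = {}  # one random vector per node label, reused across trees

def node_vec(label, dim=64):
    # Toy stand-in for a distributed-tree vector: a fixed random
    # unit-scale vector per node label (assumed, not the paper's encoder).
    if label not in _cache:
        _cache[label] = rng.standard_normal(dim) / np.sqrt(dim)
    return _cache[label]

def subtrees(tree):
    # Enumerate every subtree of a parse tree written as nested tuples,
    # e.g. ("S", ("NP", "Pat"), ("VP", "codes")).
    yield tree
    for child in tree[1:]:
        if isinstance(child, tuple):
            yield from subtrees(child)

def encode(tree, dim=64):
    # Distributed tree: the sum of the vectors of all subtrees, so a
    # single dense vector approximately records which subtrees occur.
    v = np.zeros(dim)
    for st in subtrees(tree):
        v += node_vec(st[0], dim)
    return v

tree = ("S", ("NP", "Pat"), ("VP", "codes"))
vec = encode(tree)  # a dense vector usable as neural-network input
```

Because the encoding is a sum over subtrees, the resulting vector can be fed to an ordinary feed-forward layer while still carrying (approximate) syntactic information.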

Highlights

  • Neural networks are obtaining dazzling successes in natural language processing (NLP)

  • The key idea of the Pat-in-the-Loop model is twofold: using “heat parse trees” to analyze which parts of parse trees are responsible for the activation of specific neurons (Section 3.3), and controlling the behavior of the neural network with declarative rules derived from the analysis of these heat parse trees (Section 3.4)

  • Pat gains the important ability to understand why a specific network takes its decisions, and s/he can define specific rules to control the behavior of the neural network
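The highlights describe heat parse trees that attribute a neuron's activation to parts of the parse tree. Under the toy assumption that a tree is encoded as a sum of per-subtree vectors, a neuron's pre-activation decomposes additively over subtrees, and each term can be read as that subtree's "heat". The following is a minimal sketch of that decomposition; `node_vec`, `subtrees`, and `heat_tree` are hypothetical names, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)
_vecs = {}  # one random vector per node label (toy assumption)

def node_vec(label, dim=64):
    # Toy distributed-tree stand-in: fixed random vector per label.
    if label not in _vecs:
        _vecs[label] = rng.standard_normal(dim) / np.sqrt(dim)
    return _vecs[label]

def subtrees(tree):
    # Enumerate subtrees of a nested-tuple parse tree.
    yield tree
    for child in tree[1:]:
        if isinstance(child, tuple):
            yield from subtrees(child)

def heat_tree(tree, w, dim=64):
    # Per-subtree contribution to one neuron with weight vector w.
    # The tree encoding is a sum over subtree vectors, so the dot
    # product with w splits into one additive "heat" term per subtree.
    return {st: float(node_vec(st[0], dim) @ w) for st in subtrees(tree)}

tree = ("S", ("NP", "Pat"), ("VP", "codes"))
w = rng.standard_normal(64) / np.sqrt(64)  # weights of one neuron
heats = heat_tree(tree, w)
hottest = max(heats, key=heats.get)        # subtree most responsible
```

The subtree with the largest heat is a natural candidate for a declarative rule of the kind Pat injects: if that subtree fires a neuron for the wrong reason, a rule keyed on it can override or correct the network's decision.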


Summary

Introduction

Neural networks are obtaining dazzling successes in natural language processing (NLP). One line of work categorizes approaches by how they exploit prior knowledge to cope with limited supervision: data-based approaches augment the supervised experience, model-based approaches constrain the hypothesis space, and algorithm-based approaches alter the search strategy for the best hypothesis in a given hypothesis space [14]. This is exactly the tendency we want to counter, in favour of approaches that understand neural networks and control their behavior beyond relying on training examples alone. NLP-NN systems [18] are extremely difficult to interpret. For this reason, human involvement through the right interfaces could expedite the labeling of tricky or novel data that a machine cannot process on its own, reducing the potential for data-related errors. In the following, we describe how Pat-in-the-Loop works (Section 3) and then show the improvements achieved by the proposed model (Section 4).

Related Work
The Model
Preliminary Notation
Distributed Tree Encoders for Exploiting Parse Trees in Neural Networks
Visualizing Activation of Parse Trees
Human-in-the-Loop Layer
Pilot Experiment
Experimental Set-Up
Results and Discussion
Conclusions and Future Work

