Abstract

Recently there has been considerable interest in developing deep learning models for symbolic reasoning tasks. One such task involves solving combinatorial problems, which can be viewed as instances of a constraint satisfaction problem, albeit with unknown constraints. The task of the neural model is then to discover the unknown constraints from training data consisting of many solved instances. There are broadly two approaches for learning the unknown constraints: the first creates a purely neural model that maps an input puzzle directly to its solution, thereby representing the constraints implicitly in the model's weights [1, 2, 3], and the second invokes a symbolic reasoner, such as an Integer Linear Program (ILP) solver, learning the constraints explicitly in the solver's language, e.g., linear inequalities for an ILP solver [4, 5]. In this chapter, we discuss both the implicit and explicit approaches. For the implicit approach, we review three neural architectures for solving combinatorial problems, viz., Neural Logic Machines (NLM) [2], Recurrent Relational Networks (RRN) [1], and the extension of RRN proposed by Nandwani et al. [3] to achieve output-space invariance (e.g., solving 16×16 Sudoku after training only on 9×9 Sudokus). For the explicit approach, with its explicit representation of constraints, we present two methods, CombOptNet [5] and ILP-Loss [4], for end-to-end training of a Neural-ILP architecture: a neuro-symbolic model in which a neural perception layer is followed by an ILP layer that performs the reasoning. CombOptNet, proposed by Paulus et al. [5], trains slowly owing to a call to an ILP solver in each learning iteration. In contrast, ILP-Loss, proposed by Nandwani et al. [4], is solver-free during training and thus much more scalable. Finally, we present specific experiments from [3] and [4] that compare the different methods discussed in this chapter.
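To make the Neural-ILP pipeline described above concrete, the following is a minimal illustrative sketch (our own, not code from [4] or [5]): a perception layer maps the raw input to a cost vector, and an ILP layer returns the binary assignment that maximizes that cost subject to (learnable) linear constraints. The names (perception, ilp_layer) and the brute-force solver are assumptions made for this example; CombOptNet [5] would instead call an actual ILP solver in every training iteration, while ILP-Loss [4] dispenses with the solver during training.

    # Minimal sketch of a Neural-ILP forward pass (illustrative only).
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    n_vars = 4          # number of binary decision variables
    n_feats = 8         # dimensionality of the raw input
    n_constraints = 3   # number of (learnable) linear constraints

    # "Perception layer": here just a random linear map from raw input
    # features to a cost vector c; in practice this is a trained network.
    W = rng.normal(size=(n_vars, n_feats))

    # Learnable parameters of the ILP layer's constraints A x <= b
    # (b kept positive so the all-zeros assignment is always feasible).
    A = rng.normal(size=(n_constraints, n_vars))
    b = rng.uniform(1.0, 2.0, size=n_constraints)

    def perception(x_raw):
        """Map the raw input to the cost vector c of the ILP."""
        return W @ x_raw

    def ilp_layer(c):
        """Solve max_x c.x  s.t.  A x <= b,  x in {0,1}^n by enumeration.

        A real system would call an ILP solver here rather than
        enumerating all 2^n binary assignments.
        """
        best_x, best_val = None, -np.inf
        for bits in itertools.product([0, 1], repeat=n_vars):
            x = np.array(bits, dtype=float)
            if np.all(A @ x <= b) and c @ x > best_val:
                best_x, best_val = x, c @ x
        return best_x

    x_raw = rng.normal(size=n_feats)
    c = perception(x_raw)
    print("predicted solution:", ilp_layer(c))

In this sketch, end-to-end training would mean updating W together with A and b from solved instances; the two explicit methods discussed in the chapter differ in whether gradients are obtained through repeated solver calls (CombOptNet) or through a solver-free loss (ILP-Loss).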
