Abstract

Neural-symbolic learning, which aims to combine the perceptual power of neural networks with the reasoning power of symbolic logic, has drawn increasing research attention. However, existing works simply cascade the two components and optimize them in isolation, failing to exploit the mutually enhancing information between them. To address this problem, we propose DeepLogic, a framework for joint learning of neural perception and logical reasoning, in which the two components are jointly optimized through mutual supervision signals. In particular, the proposed DeepLogic framework contains a deep-logic module capable of representing complex first-order-logic formulas as tree structures built from basic logic operators. We then theoretically quantify the mutual supervision signals and propose the deep&logic optimization algorithm for joint optimization. We further prove the convergence of DeepLogic and conduct extensive experiments on model performance, convergence, and generalization, as well as on its extension to the continuous domain. The experimental results show that by jointly learning perceptual ability and logic formulas in a weakly supervised manner, our proposed DeepLogic framework significantly outperforms DNN-based baselines and beats other strong baselines without relying on off-the-shelf tools.
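To make the "formula as a tree of basic logic operators" idea concrete, here is a minimal illustrative sketch (not the authors' implementation; all class and variable names are hypothetical). Each leaf holds a truth value in [0, 1], such as the output probability of a neural perception module, and internal nodes apply soft versions of NOT/AND/OR so values can flow through the reasoning step:

```python
# Hypothetical sketch of a formula tree with soft (fuzzy) logic semantics.
# Leaves take truth values from an assignment, e.g. neural output probabilities.

class Formula:
    def eval(self, assignment):
        raise NotImplementedError

class Atom(Formula):
    # Leaf node: truth value looked up from the assignment.
    def __init__(self, name):
        self.name = name
    def eval(self, assignment):
        return assignment[self.name]

class Not(Formula):
    def __init__(self, child):
        self.child = child
    def eval(self, assignment):
        # Standard fuzzy negation.
        return 1.0 - self.child.eval(assignment)

class And(Formula):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def eval(self, assignment):
        # Product t-norm: soft conjunction.
        return self.left.eval(assignment) * self.right.eval(assignment)

class Or(Formula):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def eval(self, assignment):
        # Probabilistic sum: soft disjunction.
        a = self.left.eval(assignment)
        b = self.right.eval(assignment)
        return a + b - a * b

# Example formula: (a AND NOT b) OR c
f = Or(And(Atom("a"), Not(Atom("b"))), Atom("c"))
print(f.eval({"a": 0.9, "b": 0.2, "c": 0.1}))  # prints 0.748
```

Because these soft operators are differentiable, gradients of a reasoning loss can propagate back to the perception module, which is the kind of mutual supervision signal the joint optimization exploits.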
