The efficacy of neural networks is widely recognized across a multitude of machine learning tasks, yet their black-box nature impedes the understanding of their decision-making processes. Such lack of explainability limits their use in high-stakes fields such as medicine and finance, where transparent decision-making is essential. In contrast, traditional rule-based models offer clear input-output mappings, but often lag in performance when compared to their neural network counterparts. To address this challenge, this study introduces Fuzzy Neural Logic Reasoning (FNLR), a novel architecture that combines the strengths of rule-based and deep learning models to achieve strong performance, interpretability, and noise robustness simultaneously. At its core, FNLR employs a "Symbolic Pre-training + Neural Fine-tuning" paradigm. Initially, the model adopts a pre-fitted binary decision tree. It then performs a "neuralization" process, replacing each node of the tree with a corresponding neural network equivalent. This transformation is facilitated by three shallow MLP modules, which are trained to emulate the relational operators intrinsic to decision trees. The architecture is also extensible, allowing its expressiveness to be further increased. Furthermore, FNLR incorporates fuzzy logic through novel fuzzy relational operators that account for the satisfaction degrees of propositions, thereby eliminating rigid decision boundaries. This approach enhances model flexibility, enabling all paths of the decision tree to contribute to the target prediction in a weighted manner. Empirical evaluations on tabular datasets from various domains demonstrate that FNLR performs comparably to, or better than, state-of-the-art deep learning models designed for tabular data, while also exhibiting strong robustness to noise.
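To make the weighted-path idea concrete, the following is a minimal sketch of a fuzzified decision tree, not the paper's actual FNLR architecture: a sigmoid replaces each hard comparison with a satisfaction degree in [0, 1], and every root-to-leaf path then contributes to the prediction in proportion to the product of its branch degrees. All feature indices, thresholds, and leaf values below are illustrative assumptions.

```python
import numpy as np

def fuzzy_ge(x, threshold, temperature=1.0):
    """Soft truth degree of the proposition `x >= threshold`.

    A sigmoid relaxation of the hard step function: the degree varies
    smoothly in (0, 1) instead of jumping at the decision boundary.
    Smaller temperatures approach the crisp comparison.
    """
    return 1.0 / (1.0 + np.exp(-(x - threshold) / temperature))

def soft_tree_predict(x, temperature=1.0):
    """Prediction of a depth-2 soft decision tree on input vector x.

    Unlike a crisp tree, all four root-to-leaf paths contribute,
    weighted by the product of branch satisfaction degrees along
    each path (the weights sum to 1 by construction).
    Thresholds and leaf values are hypothetical examples.
    """
    root = fuzzy_ge(x[0], 0.5, temperature)   # degree of taking the right branch at the root
    left = fuzzy_ge(x[1], 0.2, temperature)   # right-branch degree within the left subtree
    right = fuzzy_ge(x[1], 0.8, temperature)  # right-branch degree within the right subtree

    leaf_values = np.array([0.0, 1.0, 2.0, 3.0])  # leaves, ordered left to right
    path_weights = np.array([
        (1 - root) * (1 - left),   # path: left, left
        (1 - root) * left,         # path: left, right
        root * (1 - right),        # path: right, left
        root * right,              # path: right, right
    ])
    return float(path_weights @ leaf_values)
```

With a small temperature the soft tree recovers the crisp tree's output, while larger temperatures blend neighboring leaves near the boundaries. In FNLR the hard comparisons are instead emulated by trained shallow MLP modules, but the weighted aggregation over all paths follows the same principle.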