Abstract
Guaranteeing the monotonicity of a learned model is crucial for addressing concerns such as fairness, interpretability, and generalization. This paper develops a new monotonic neural network, the Deep Isotonic Embedding Network (DIEN), which uses separate modules to process monotonic and non-monotonic features and then combines the outputs of these modules linearly to produce the prediction. A new embedding tool, the Isotonic Embedding Unit, processes each monotonic feature and turns it into an isotonic embedding vector. By converting non-monotonic features into a series of non-negative weight vectors and combining them with the isotonic embedding vectors, which have special structural properties, DIEN guarantees monotonicity by construction. We also introduce a module named the Monotonic Feature Learning Network to capture complex dependencies among monotonic features. This module is a monotonic feedforward neural network with non-negative weights and can handle scenarios with few or no non-monotonic features. In contrast to existing methods, DIEN requires neither intricate structures such as lattices nor additional verification techniques to ensure monotonicity, and the relationship between DIEN's inputs and outputs is transparent and intuitive. Results from experiments on both synthetic and real-world datasets demonstrate DIEN's superiority over existing methods.
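To illustrate the constraint behind the Monotonic Feature Learning Network described above, the sketch below shows why a feedforward network with non-negative weights and monotone activations is non-decreasing in every input. This is a minimal NumPy illustration, not the paper's implementation; the exp-parameterization of the weights and all names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Raw (unconstrained) parameters of a small two-layer network.
W1_raw = rng.normal(size=(3, 8))
b1 = rng.normal(size=8)
W2_raw = rng.normal(size=(8, 1))
b2 = rng.normal(size=1)

def monotone_mlp(x):
    # exp() forces every weight to be strictly positive, and ReLU is a
    # non-decreasing activation, so the composition is non-decreasing
    # in each coordinate of x.
    h = np.maximum(x @ np.exp(W1_raw) + b1, 0.0)
    return (h @ np.exp(W2_raw) + b2).item()

x = np.array([0.2, -1.0, 0.5])
x_bigger = x.copy()
x_bigger[0] += 1.0  # increase one feature, hold the others fixed

# Increasing any input can only increase (or preserve) the output.
assert monotone_mlp(x_bigger) >= monotone_mlp(x)
```

The same non-negativity constraint applied to the vectors produced from non-monotonic features is what lets DIEN remain monotonic in the designated features while still conditioning on the rest.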