Abstract

Deep neural networks have revolutionized many fields, but their black-box nature can prevent their wider adoption in fields such as healthcare and finance, where interpretable and explainable models are required. The recent development of Neural Additive Models (NAMs) marks a major step toward interpretable deep learning for tabular datasets. In this paper, we propose a new subclass of NAMs that utilizes a single-layer neural network construction of the Gaussian process via random Fourier features, which we call Gaussian Process Neural Additive Models (GP-NAM). GP-NAMs have the advantage of a convex objective function and a number of trainable parameters that grows linearly with the feature dimension. They suffer no loss in performance compared with deeper NAM approaches because GPs are well suited to learning complex non-parametric univariate functions. We demonstrate the performance of GP-NAM on several tabular datasets, showing that it achieves comparable performance in both classification and regression tasks with a large reduction in the number of parameters.
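
For intuition only, and not the authors' reference implementation, the sketch below illustrates the construction the abstract describes: each input feature gets its own univariate shape function built from fixed random Fourier features that approximate an RBF-kernel GP, and the only trainable parameters are the linear weights on top, so a squared-error objective is convex and the parameter count grows as d·D + 1. The feature count D, lengthscale, and ridge penalty are arbitrary illustrative choices.

```python
import numpy as np

def make_rff(num_features, D=64, lengthscale=1.0, seed=0):
    """Draw fixed random Fourier feature parameters for each input dimension.

    phi_j(x) = sqrt(2/D) * cos(omega_j * x + b_j) approximates an RBF-kernel GP
    shape function for feature j once a linear layer is placed on top.
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0 / lengthscale, size=(num_features, D))  # spectral frequencies
    b = rng.uniform(0.0, 2 * np.pi, size=(num_features, D))             # random phases
    return omega, b

def rff_transform(X, omega, b):
    """Map each scalar feature x_ij to its D-dimensional random Fourier features."""
    n, d = X.shape
    D = omega.shape[1]
    # Broadcast (n, d, 1) against (d, D) -> (n, d, D), then flatten per-feature blocks.
    Z = np.sqrt(2.0 / D) * np.cos(X[:, :, None] * omega[None, :, :] + b[None, :, :])
    return Z.reshape(n, d * D)  # the additive model is linear in the weights on this map

def fit_gp_nam(X, y, D=64, lengthscale=1.0, reg=1e-2, seed=0):
    """Fit the additive model by ridge regression: convex in its d*D + 1 weights."""
    omega, b = make_rff(X.shape[1], D, lengthscale, seed)
    Phi = np.hstack([rff_transform(X, omega, b), np.ones((X.shape[0], 1))])  # bias column
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    w = np.linalg.solve(A, Phi.T @ y)
    return omega, b, w

def predict(X, omega, b, w):
    Phi = np.hstack([rff_transform(X, omega, b), np.ones((X.shape[0], 1))])
    return Phi @ w

# Toy regression: y = sin(3*x1) + x2^2 + noise, a sum of univariate functions.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
omega, b, w = fit_gp_nam(X, y, D=64)
print("train RMSE:", np.sqrt(np.mean((predict(X, omega, b, w) - y) ** 2)))
```

Because the random Fourier parameters stay fixed, each learned shape function can still be plotted per feature, which is what makes the model interpretable in the NAM sense.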
