Abstract

There is emerging interest in performing regression between distributions. In contrast to prediction on single instances, these machine learning methods can be useful for population-based studies or for problems that are inherently statistical in nature. The recently proposed distribution regression network (DRN) [13] has shown superior performance on the distribution-to-distribution regression task compared to conventional neural networks. However, Kou et al. [13] and some other works on distribution regression lack a comprehensive comparative study of both the theoretical basis and the generalization abilities of the methods. We derive some mathematical properties of DRN and compare it to conventional neural networks. We also perform comprehensive experiments to study the generalizability of distribution regression models, examining their performance under limited training data, data sampling noise, and varying task difficulty. DRN consistently outperforms conventional neural networks, requiring less training data and maintaining strong performance under sampling noise. Furthermore, the theoretical properties of DRN help explain its ability to achieve better generalization performance than conventional neural networks.
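To make the task concrete, the sketch below sets up a toy distribution-to-distribution regression problem: each sample is a probability distribution discretized on a fixed grid, and the target is another distribution (here, the input shifted right by one). A ridge-regularized linear map stands in for the learned regressor; this is a hypothetical illustration of the task only, not the DRN architecture or any method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
bins = np.linspace(-5, 5, 50)  # fixed discretization grid

def discretize_gaussian(mean, std):
    """Evaluate a Gaussian pdf on the grid and normalize to sum to 1."""
    p = np.exp(-0.5 * ((bins - mean) / std) ** 2)
    return p / p.sum()

# Toy dataset: input N(mu, 1) maps to target N(mu + 1, 1).
means = rng.uniform(-2, 2, size=200)
X = np.stack([discretize_gaussian(m, 1.0) for m in means])
Y = np.stack([discretize_gaussian(m + 1.0, 1.0) for m in means])

# Fit a linear map W minimizing ||X W - Y||^2 with a small ridge term.
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(len(bins)), X.T @ Y)

# Predict on a held-out distribution; clip and renormalize so the
# output is a valid probability mass function.
x_test = discretize_gaussian(0.0, 1.0)
y_pred = np.clip(x_test @ W, 0, None)
y_pred /= y_pred.sum()

# The predicted distribution should peak near the shifted mean (+1).
print(bins[np.argmax(y_pred)])
```

A shift of a density is a linear operator on the discretized function space, so even this simple baseline recovers it; the point of methods like DRN is to learn such distribution-level mappings when they are nonlinear and data is limited.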
