Abstract

We explore the probabilistic partition of unity network (PPOU-Net) model in the context of high-dimensional regression problems and propose a general framework focusing on adaptive dimensionality reduction. In the proposed framework, the target function is approximated by a mixture-of-experts model on a low-dimensional manifold, where each cluster is associated with a fixed-degree polynomial. We present a training strategy that leverages the expectation-maximization (EM) algorithm: during training, we alternate between (i) applying gradient descent to update the DNN coefficients and (ii) using closed-form formulae derived from the EM algorithm to update the mixture-of-experts parameters. Under the probabilistic formulation, step (ii) reduces to embarrassingly parallelizable weighted least-squares solves. In numerical experiments across a range of data dimensions, PPOU-Nets consistently outperform baseline fully connected neural networks of comparable size. We also explore the proposed model in quantum-computing applications, where PPOU-Nets act as surrogate models for the cost landscapes associated with variational quantum circuits.
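To make the alternating scheme concrete, below is a minimal sketch of one possible realization in PyTorch. The gating network, number of clusters, polynomial degree, fixed noise variance, and all hyperparameters are illustrative assumptions rather than the paper's exact architecture; the sketch only demonstrates the alternation between a gradient step on the DNN (gating) coefficients and the per-cluster closed-form weighted least-squares solves.

```python
# Minimal sketch of the alternating EM / gradient-descent training loop
# described in the abstract. All names and hyperparameters (gating network
# size, n_clusters, polynomial degree, noise_var, learning rate) are
# illustrative assumptions, not the paper's exact configuration.
import torch

torch.manual_seed(0)

# Synthetic 1-D regression data (placeholder for the high-dimensional setting).
X = torch.linspace(-1.0, 1.0, 200).unsqueeze(1)
y = torch.sin(3.0 * X) + 0.05 * torch.randn_like(X)

n_clusters, degree = 4, 2
noise_var = 0.1  # assumed fixed Gaussian noise variance

# DNN gating: maps inputs to partition-of-unity weights via a softmax.
gate = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, n_clusters),
)
opt = torch.optim.Adam(gate.parameters(), lr=1e-2)

# Polynomial feature matrix Phi: columns 1, x, ..., x^degree.
Phi = torch.cat([X ** d for d in range(degree + 1)], dim=1)  # (N, degree+1)
coef = torch.zeros(n_clusters, degree + 1)                   # expert coefficients

for it in range(200):
    # Step (ii): EM-style closed-form update. Posterior responsibilities
    # weight a least-squares solve for each cluster's polynomial; the K
    # solves are independent ("embarrassingly parallelizable").
    with torch.no_grad():
        w = torch.softmax(gate(X), dim=1)                    # (N, K) gating weights
        preds = Phi @ coef.T                                 # (N, K) expert predictions
        resid2 = (y - preds) ** 2
        r = w * torch.exp(-0.5 * resid2 / noise_var)
        r = r / (r.sum(dim=1, keepdim=True) + 1e-12)         # responsibilities
        for k in range(n_clusters):
            Rk = r[:, k]
            A = Phi.T @ (Rk.unsqueeze(1) * Phi) + 1e-8 * torch.eye(degree + 1)
            b = Phi.T @ (Rk * y.squeeze(1))
            coef[k] = torch.linalg.solve(A, b)               # weighted least squares

    # Step (i): gradient descent on the DNN (gating) coefficients,
    # minimizing the negative log-likelihood of the mixture-of-experts
    # model (constant Gaussian normalization omitted).
    opt.zero_grad()
    w = torch.softmax(gate(X), dim=1)
    resid2 = (y - Phi @ coef.T) ** 2
    lik = (w * torch.exp(-0.5 * resid2 / noise_var)).sum(dim=1)
    loss = -torch.log(lik + 1e-12).mean()
    loss.backward()
    opt.step()

print(f"final negative log-likelihood: {loss.item():.4f}")
```

In this sketch the gradient step only trains the gating DNN, while the expert polynomials are refit in closed form each iteration; the two updates interleave exactly as steps (i) and (ii) in the abstract.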
