Abstract

In recent years, exchangeable network structures that take datasets as input have been widely used to obtain representations of datasets. Although they perform well, analyzing an exchangeable network with a dataset as input is challenging. Since datasets are drawn from underlying distributions, such a network can be viewed as a function acting on probability measures, and this paper theoretically analyzes exchangeable network structures from this probabilistic perspective. We propose a probabilistic analytical framework in which neural networks act on probability measures, extending Multi-Layer Perceptrons (MLPs). Within this framework, when only samples from each distribution are available, a neural network acting on probability measures corresponds to the traditional exchangeable structure defined on datasets. Using this framework, we establish the universal approximation property of exchangeable network structures, which demonstrates their ability to capture complex patterns. Furthermore, we derive a consistency result showing that, under certain conditions, parameter estimation for exchangeable network structures is statistically consistent.
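To make the correspondence concrete, the following is a minimal sketch of one common exchangeable structure, a DeepSets-style network that embeds each sample with an MLP, mean-pools the embeddings, and applies a readout MLP. The specific architecture, layer sizes, and function names here are illustrative assumptions, not the paper's construction; the point is that mean-pooling makes the output invariant to permutations of the dataset, so the network depends on the input only through its empirical measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    # Random weights and zero biases for a small MLP (illustrative only).
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(x, layers):
    # Alternating affine layers and ReLU; no activation on the last layer.
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

phi = init_mlp([2, 16, 8], rng)   # per-sample embedding network
rho = init_mlp([8, 16, 1], rng)   # readout on the pooled embedding

def exchangeable_net(X):
    # Mean-pooling over samples makes the output permutation-invariant:
    # the network sees X only through its empirical measure (1/n) sum δ_{x_i}.
    pooled = mlp(X, phi).mean(axis=0, keepdims=True)
    return mlp(pooled, rho)

X = rng.standard_normal((50, 2))      # a dataset of 50 samples in R^2
perm = rng.permutation(50)
assert np.allclose(exchangeable_net(X), exchangeable_net(X[perm]))
```

Because the pooled statistic is an average over samples, as the dataset grows the network's output approaches the value of the corresponding function evaluated at the true underlying distribution, which is the sense in which networks on probability measures and exchangeable structures on finite datasets are linked.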
