Abstract
In recent years, exchangeable network structures that take datasets as input have been widely used to obtain representations of datasets. Although they perform well, analyzing an exchangeable network whose input is a dataset is challenging. Since datasets are drawn from underlying distributions, such a network can be viewed as a function acting on probability measures, and this paper theoretically analyzes exchangeable network structures from this probabilistic perspective. We propose a probabilistic analytical framework in which neural networks act on probability measures, extending Multi-Layer Perceptrons (MLPs). When only samples from each distribution are available, neural networks acting on probability measures in this framework correspond to the traditional exchangeable structures defined on datasets. Within this framework, we establish a universal approximation property for exchangeable network structures, demonstrating their ability to capture complex patterns. Furthermore, we derive a consistency result showing that, under certain conditions, parameter estimation for exchangeable network structures is statistically consistent.
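To make the connection between exchangeable networks and functions on probability measures concrete, the following is a minimal sketch (not the paper's construction; all weights and names are hypothetical) of a sum-decomposable exchangeable network f(X) = ρ(mean_i φ(x_i)). Averaging the per-sample features φ(x_i) is the empirical estimate of E_{x∼P}[φ(x)], so the network can be read as acting on the underlying measure P, and its output is invariant to the order of the samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights for the per-sample encoder phi and the head rho.
W_phi = rng.normal(size=(2, 8))   # maps each 2-d sample to an 8-d feature
W_rho = rng.normal(size=(8, 1))   # maps the pooled feature to a scalar

def phi(x):
    """Per-sample feature map (a one-layer nonlinearity for illustration)."""
    return np.tanh(x @ W_phi)

def exchangeable_net(dataset):
    """f(X) = rho(mean_i phi(x_i)): a permutation-invariant network.

    The mean of phi over the dataset is the expectation of phi under the
    empirical measure, which is why such a network can be viewed as a
    function of the underlying probability measure.
    """
    pooled = phi(dataset).mean(axis=0)  # expectation under the empirical measure
    return (pooled @ W_rho).item()      # linear head rho

X = rng.normal(size=(5, 2))             # a dataset of 5 two-dimensional samples
perm = rng.permutation(len(X))
out1 = exchangeable_net(X)
out2 = exchangeable_net(X[perm])        # same dataset, permuted sample order
assert np.isclose(out1, out2)           # exchangeability: output is unchanged
```

Because the pooling step discards sample order, permuting the dataset leaves the output unchanged, which is the exchangeability property the abstract refers to.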