Abstract
Bayesian networks are a type of probabilistic graphical model that lies at the intersection of statistics and machine learning. They have been shown to be powerful tools for encoding dependence relationships among the variables of a domain under uncertainty. Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. We also examine how a user can take advantage of these networks for reasoning through exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, Bayesian networks have seen little use in neuroscience, where work has focused on specific problems such as functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: to discover associations between variables, to perform probabilistic reasoning over the model, and to classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive and medical) of the brain aspects that can be studied.
Highlights
The quantitative part of a Bayesian network (BN) is a collection of conditional probability tables, one attached to each node, expressing the probability of the node's variable conditioned on its parents in the network (illustrated in the first sketch below).
Many stochastic simulation techniques are based on Monte Carlo methods: the network is used to generate a large number of cases from the joint probability distribution (JPD), and the probability of interest is estimated by counting observed frequencies in the samples (second sketch below).
If the number N of observations is small, the decision about the class label is usually made by averaging the results provided by several classification models (third sketch below).
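
To make the first highlight concrete, below is a minimal Python sketch (an assumed toy illustration, not an example from the paper) of the classic "sprinkler" network: one CPT per node, with the JPD obtained as the product of the entries dictated by the arcs.

# Structure: Rain -> WetGrass <- Sprinkler (toy example, assumed).
p_rain = {1: 0.2, 0: 0.8}                        # P(Rain)
p_sprinkler = {1: 0.4, 0: 0.6}                   # P(Sprinkler)
# P(WetGrass = 1 | Rain, Sprinkler), indexed by (rain, sprinkler).
p_wet_given = {(1, 1): 0.99, (1, 0): 0.90,
               (0, 1): 0.85, (0, 0): 0.05}

def joint(rain, sprinkler, wet):
    """JPD as the product of the CPT entries dictated by the arcs:
    P(R, S, W) = P(R) * P(S) * P(W | R, S)."""
    p_w = p_wet_given[(rain, sprinkler)]
    return p_rain[rain] * p_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# Probability that it rains, the sprinkler is off, and the grass is wet.
print(joint(rain=1, sprinkler=0, wet=1))         # 0.2 * 0.6 * 0.9 = 0.108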
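
The second highlight can be illustrated by ancestral (forward) sampling over the same toy network: generate many cases from the JPD and estimate a conditional probability by counting frequencies among the samples consistent with the evidence (rejection sampling, one of the simplest Monte Carlo schemes).

import random

random.seed(1)

# Same toy CPTs as in the previous sketch.
p_rain = {1: 0.2, 0: 0.8}
p_sprinkler = {1: 0.4, 0: 0.6}
p_wet_given = {(1, 1): 0.99, (1, 0): 0.90, (0, 1): 0.85, (0, 0): 0.05}

def sample_case():
    """Ancestral (forward) sampling: draw each node given its parents."""
    rain = int(random.random() < p_rain[1])
    sprinkler = int(random.random() < p_sprinkler[1])
    wet = int(random.random() < p_wet_given[(rain, sprinkler)])
    return rain, sprinkler, wet

# Estimate P(Rain = 1 | WetGrass = 1) by counting observed frequencies
# among the samples consistent with the evidence WetGrass = 1.
kept = [r for r, s, w in (sample_case() for _ in range(100_000)) if w == 1]
print(sum(kept) / len(kept))   # approaches the exact posterior, about 0.387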
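
For the third highlight, a hedged sketch of averaging several classifiers: bagging in scikit-learn trains each naive Bayes model (itself the simplest Bayesian network classifier) on a bootstrap resample and averages the predicted class probabilities across models. The data below are synthetic assumptions, not from the surveyed studies.

import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
n = 40                                    # deliberately small sample size N
X = rng.integers(0, 2, size=(n, 5))      # binary features
y = (X[:, 0] | X[:, 1]).astype(int)      # class depends on two features

# Each of the 25 naive Bayes models sees a bootstrap resample of the data;
# predict_proba returns the mean of the models' class probabilities.
ensemble = BaggingClassifier(BernoulliNB(), n_estimators=25, random_state=0)
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:3]))      # averaged posterior estimates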
Summary
A Bayesian network (BN) (Pearl, 1988; Koller and Friedman, 2009) is a compact representation of a probability distribution over a set of discrete variables. The joint probability distribution (JPD) over all variables is computed as the product of the conditional probabilities dictated by the arcs. Machine learning algorithms are distinguished by the target outcome or the type of available input data, and they pursue several aims: association discovery, supervised classification and clustering. In association discovery (reviewed in Daly et al., 2011), we look for relationships among the variables of interest when we have access to data on those variables. Examples of this modeling task in neuroscience include functional connectivity analysis with fMRI and the discovery of relationships among morphological variables in dendritic trees. In supervised classification (reviewed in Bielza and Larrañaga, 2014), there is a discrete class (or outcome) variable that guides the learning process and that has to be predicted for new data.
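
As a worked illustration of association discovery by score-based structure learning (a from-scratch sketch over assumed synthetic binary data, not code from the reviewed studies), the following exhaustively scores every three-variable DAG with BIC, whose decomposable form penalizes each node by the size of its CPT, and keeps the best-scoring parent sets.

import itertools
import math
import random

random.seed(0)

# Synthetic binary data: A influences B, and A and B together influence C.
def sample_row():
    a = int(random.random() < 0.6)
    b = int(random.random() < (0.8 if a else 0.3))
    c = int(random.random() < (0.9 if a and b else 0.1))
    return {"A": a, "B": b, "C": c}

data = [sample_row() for _ in range(2000)]
variables = ["A", "B", "C"]

def bic_family(child, parents):
    """BIC contribution of one node given its candidate parent set."""
    counts, parent_counts = {}, {}
    for row in data:
        cfg = tuple(row[p] for p in parents)
        counts[(cfg, row[child])] = counts.get((cfg, row[child]), 0) + 1
        parent_counts[cfg] = parent_counts.get(cfg, 0) + 1
    loglik = sum(n * math.log(n / parent_counts[cfg])
                 for (cfg, _), n in counts.items())
    n_params = 2 ** len(parents)   # one free parameter per parent configuration
    return loglik - 0.5 * n_params * math.log(len(data))

# Enumerate DAGs by trying, for each topological order, every subset of the
# preceding variables as the parent set of each node.
best_score, best_structure = float("-inf"), None
for order in itertools.permutations(variables):
    allowed = {v: order[:i] for i, v in enumerate(order)}
    choices = [
        [ps for r in range(len(allowed[v]) + 1)
         for ps in itertools.combinations(allowed[v], r)]
        for v in variables
    ]
    for assignment in itertools.product(*choices):
        structure = dict(zip(variables, assignment))
        score = sum(bic_family(v, ps) for v, ps in structure.items())
        if score > best_score:
            best_score, best_structure = score, structure

# Typically recovers a structure Markov-equivalent to the generating one.
print("Best structure (node -> parents):", best_structure)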