Abstract

The mixture-of-experts (MoE) paradigm learns complex models by combining several "experts" via a probabilistic mixture, with each expert handling a small region of the data space while a gating function controls the data-to-expert assignment. The MoE framework has been used extensively to design non-linear models in machine learning and statistics that capture heterogeneity in data for regression, classification and clustering. Existing MoE multi-target regression (MoE-MTR) models for continuous data are based on multivariate normal distributions. In many practical situations, however, one or more groups of observations may exhibit asymmetric and heavy-tailed behaviour, and inference based on symmetric distributions can then unduly affect the fit of the regression model. We introduce a novel robust multivariate non-normal MoE model based on the mean mixture of normal distributions. The proposed model addresses the limitations of MoE-MTR models for possibly skewed, heavy-tailed and noisy data. Maximum likelihood estimation of the model parameters is developed via an expectation-maximization (EM)-type algorithm, and parsimony is achieved by imposing suitable constraints on the expert dispersion matrices. The usefulness of the proposed methodology is illustrated on simulated and real data sets.
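To make the gating mechanism concrete, the following is a minimal sketch (not from the paper; all names are hypothetical) of the E-step "responsibilities" in a standard softmax-gated MoE with multivariate-normal experts, i.e. the baseline MoE-MTR model that the paper generalizes to mean-mixture-of-normal experts:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gating_probs(X, W):
    """Softmax gating: P(expert k | x) for each row of X.
    W has shape (n_experts, n_features)."""
    logits = X @ W.T                                # (n, K)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def responsibilities(X, Y, W, betas, covs):
    """E-step of an EM-type algorithm: posterior probability that each
    (x, y) pair was generated by each expert, given current parameters.
    betas[k]: regression coefficients (n_features, n_targets) of expert k;
    covs[k]:  dispersion matrix (n_targets, n_targets) of expert k."""
    gate = gating_probs(X, W)                       # (n, K)
    dens = np.column_stack([
        multivariate_normal.pdf(Y - X @ B,
                                mean=np.zeros(Y.shape[1]), cov=S)
        for B, S in zip(betas, covs)
    ])                                              # (n, K) expert densities
    joint = gate * dens
    return joint / joint.sum(axis=1, keepdims=True)
```

In the paper's robust variant, the normal expert densities above would be replaced by mean-mixture-of-normal densities (accommodating skewness and heavy tails), and the M-step could impose constraints on the expert dispersion matrices for parsimony.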
