Abstract

Multi-view problems benefit from latent representations: low-dimensional projections that capture the correlations among the multiple views that characterise the data. High dimensionality and non-linearity, in turn, are traditionally handled by kernel methods, which induce a (non-)linear mapping between the latent projection and the data itself but are usually prone to overfitting. Here, we combine Bayesian factor analysis (FA) with what we refer to as kernelized observations: the proposed model reconstructs not the data itself, but each point's relationship to the other data points as measured by a kernel function. This extends previous Bayesian FA formulations to model non-linear data relationships through kernelized data representations and, at the same time, adds facilities for obtaining compact kernel representations via an automatic selection of Bayesian relevance vectors (RVs), a feature relevance analysis and even an automatic multiple kernel learning approach. Moreover, these capabilities are integrated into a modular framework in which the model can easily be adapted to the needs of the data and combined with previous FA functionalities such as heterogeneous data representations or semi-supervised learning. Using several public databases, we demonstrate the potential of the approach and its extensions with respect to common multi-view learning models such as kernel canonical correlation analysis, the heterogeneous incomplete variational autoencoder and manifold relevance determination; the proposed model outperforms these baselines while freely combining its extensions.
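To make the core idea concrete, the following is a minimal sketch of "kernelized observations", not the paper's actual inference code: instead of factor-analysing the raw data X, we factor-analyse each sample's kernel similarities to the training points. The RBF kernel, the choice of gamma, and the use of scikit-learn's (non-Bayesian) FactorAnalysis as a stand-in for the Bayesian FA model are all assumptions made for illustration.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))     # N samples, D features (toy data)

# Kernelized observation matrix: row n holds k(x_n, x_m) for all m,
# so the model reconstructs data *relationships* rather than X itself.
K = rbf_kernel(X, X, gamma=0.1)    # shape (N, N)

# Low-dimensional latent projection of the kernelized observations
# (a classical FA here; the paper uses a Bayesian formulation with
# automatic relevance-vector and kernel selection on top of this idea).
fa = FactorAnalysis(n_components=3, random_state=0)
Z = fa.fit_transform(K)            # latent representation, shape (N, 3)
print(Z.shape)                     # (200, 3)
```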
