Abstract

Learning the intrinsic low-dimensional subspace of high-dimensional data is a key step in many artificial-intelligence applications for social systems. In practical scenarios, the observed data are usually corrupted by many types of noise, which poses a great challenge for data analysis in social systems. As a commonly used subspace learning technique, robust low-rank matrix factorization (LRMF) focuses on recovering the underlying subspaces in a noisy environment. However, most existing approaches simply assume that the noise contaminating the data is independent and identically distributed (i.i.d.), such as Gaussian or Laplacian noise. This assumption, though it greatly simplifies the underlying learning problem, may not hold for the more complex non-i.i.d. noise widely present in social systems. In this work, we propose a robust LRMF approach that deals with various types of noise in a unified manner. Unlike traditional algorithms, noise in our framework is modeled as an independent and piecewise identically distributed (i.p.i.d.) source, which employs a collection of distributions, rather than a single one, to characterize the statistical behavior of the underlying noise. Assisted by this generic noise model, we then design a robust LRMF algorithm under the information-theoretic learning (ITL) framework through a new minimization criterion. By adopting the half-quadratic optimization paradigm, we further derive an optimization strategy for the proposed method. Experimental results on both synthetic and real data demonstrate the superiority of the proposed scheme.
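To make the key ingredients concrete, below is a minimal, illustrative sketch of robust LRMF in the ITL spirit: a Welsch (correntropy-induced) loss is minimized by iteratively reweighted least squares, which is one standard instantiation of the half-quadratic paradigm the abstract mentions. This is not the paper's actual algorithm (which uses an i.p.i.d. noise model); all function names, the single-bandwidth `sigma` parameter, and the SVD initialization are illustrative assumptions.

```python
import numpy as np

def robust_lrmf(X, rank, sigma=1.0, n_iters=50):
    """Illustrative robust LRMF: X ~ U @ V.T under a Welsch
    (correntropy-induced) loss, minimized via iteratively
    reweighted least squares -- a half-quadratic-style scheme.
    Hypothetical sketch, not the paper's i.p.i.d.-based method."""
    m, n = X.shape
    # Warm-start with a truncated SVD of the (noisy) observation.
    Uf, s, Vt = np.linalg.svd(X, full_matrices=False)
    U = Uf[:, :rank] * np.sqrt(s[:rank])
    V = Vt[:rank].T * np.sqrt(s[:rank])
    for _ in range(n_iters):
        R = X - U @ V.T
        # Half-quadratic auxiliary weights: entries with large
        # residuals (likely outliers) receive near-zero weight.
        W = np.exp(-(R ** 2) / (2 * sigma ** 2))
        # Weighted least-squares update of each row of U ...
        for i in range(m):
            G = V.T * W[i]                       # rank x n
            U[i] = np.linalg.solve(G @ V + 1e-8 * np.eye(rank),
                                   G @ X[i])
        # ... and of each row of V.
        for j in range(n):
            G = U.T * W[:, j]                    # rank x m
            V[j] = np.linalg.solve(G @ U + 1e-8 * np.eye(rank),
                                   G @ X[:, j])
    return U, V
```

On data corrupted by sparse, large-magnitude outliers, the exponential weighting suppresses the corrupted entries, so the factorization tracks the clean low-rank component rather than the outliers; a plain least-squares (Gaussian-noise) factorization would instead be pulled toward them.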
