Abstract

Low-rank representation (LRR), which constructs a robust low-rank representation of data for processing, has attracted much attention over the past decades. It is assumed that the data points lie on a low-dimensional subspace and that the representation matrix of the data points is low-rank. In real applications, the collected data may be corrupted by noise or errors. Low-rank learning methods can recover the noisy data and capture the global structure of the data. In this chapter, we introduce low-rank learning methods and their extensions. The main idea, objective function, and optimization of these methods are described in detail. Finally, we conduct numerical experiments on two publicly available image databases to verify the performance of these methods. The results show that LRE and LRLE converge quickly and achieve competitive performance compared with other methods. However, further improving performance remains a challenging problem.
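As an illustrative sketch (not necessarily the exact formulation used in this chapter), the canonical LRR objective can be written as

  min_{Z,E}  ||Z||_*  +  λ ||E||_{2,1}   s.t.   X = XZ + E,

where X is the data matrix, Z the low-rank representation matrix, E the error term, ||·||_* the nuclear norm, ||·||_{2,1} the l_{2,1} norm modeling sample-specific corruption, and λ a trade-off parameter.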
