Abstract

The subspace approximation problem with outliers, for given n points x1, x2, …, xn ∈ R^d, an integer 1 ≤ k ≤ d, and an outlier parameter 0 ≤ α ≤ 1, is to find a k-dimensional linear subspace of R^d that minimizes the sum of squared distances to its nearest (1−α)n points. More generally, the ℓp subspace approximation problem with outliers minimizes the sum of p-th powers of distances instead of the sum of squared distances. Even the case p = 2, i.e., robust PCA, is non-trivial, and previous work requires additional assumptions on the input or generative models for it.

Any multiplicative approximation algorithm for the subspace approximation problem with outliers must solve the robust subspace recovery problem, a special case in which the (1−α)n inliers in the optimal solution are promised to lie exactly on a k-dimensional linear subspace. However, robust subspace recovery is Small Set Expansion (SSE)-hard, and known algorithmic results for it require strong assumptions on the input, e.g., that any d outliers be linearly independent.

In this paper, we show how to extend dimension reduction techniques and bi-criteria approximations based on sampling and coresets to subspace approximation with outliers. To get around the SSE-hardness of robust subspace recovery, we assume that the squared-distance error of the optimal k-dimensional subspace, summed over the optimal (1−α)n inliers, is at least δ times its squared error summed over all n points, for some 0 < δ ≤ 1 − α. Under this assumption, we give an efficient algorithm that finds a weak coreset, i.e., a subset of poly(k/ϵ) log(1/δ) loglog(1/δ) points whose span contains a k-dimensional subspace giving a multiplicative (1+ϵ)-approximation to the optimal solution. Our technique is based on the squared-length sampling algorithm suggested for low-rank approximation problems in the seminal work of Frieze, Kannan, and Vempala [12]. The running time of our algorithm is linear in n and d.
Interestingly, our results hold even when the fraction of outliers α is large, as long as the obvious condition 0<δ≤1−α is satisfied. We show similar results for subspace approximation with ℓp error or more general M-estimator loss functions, and also give an additive approximation for the affine subspace approximation problem.
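The squared-length sampling primitive referenced above can be sketched in a few lines: each point is picked with probability proportional to its squared Euclidean norm. This is only a minimal illustration of the sampling distribution, not the paper's full algorithm; the function name, the sample size s, and the seed parameter are illustrative choices.

```python
import random

def squared_length_sample(points, s, seed=0):
    """Sample s point indices (with replacement) with probability
    proportional to each point's squared Euclidean length.

    Minimal sketch in the spirit of Frieze-Kannan-Vempala squared-length
    sampling; all names and parameters here are illustrative.
    """
    # Squared norm of each point determines its sampling weight.
    sq_norms = [sum(x * x for x in p) for p in points]
    rng = random.Random(seed)
    # random.choices normalizes the weights into a distribution.
    return rng.choices(range(len(points)), weights=sq_norms, k=s)

# Toy usage: the long points are sampled far more often than the short one.
points = [[3.0, 0.0], [0.0, 4.0], [0.1, 0.1]]
idx = squared_length_sample(points, 5)
```

In the paper's setting, repeated rounds of such sampling (combined with outlier handling) build the weak coreset whose span contains a near-optimal subspace.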
