Abstract

Linear regression in the presence of outliers is an important and challenging problem because the support of the outliers is not known beforehand. Many robust estimators address this problem by explicitly or implicitly assuming that outliers are sparse and produce large observation errors. We propose an algorithm for robust outlier support identification (AROSI) based on a novel objective function with $\ell_0$-“norm” regularization, which models the sparsity of outliers. The optimization procedure naturally exploits the large-observation-error assumption of outliers, operates directly on the $\ell_0$-“norm”, and is guaranteed to converge. When only sparse outliers are present (no dense inlier noise), we show that, under certain model and algorithm parameter settings, AROSI recovers the solution exactly. When both dense inlier noise and sparse outliers are present, we prove that the estimation error is bounded. Extensive empirical comparisons with state-of-the-art methods demonstrate the advantage of the proposed method.
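To make the modeling idea concrete, the sketch below illustrates the generic sparse-outlier regression setting the abstract describes, $y = Ax + s + n$ with a sparse outlier vector $s$, using a simple alternating scheme that refits the coefficients by least squares and hard-thresholds large residuals. This is an illustrative sketch only, not the authors' AROSI algorithm; the threshold `tau`, the stopping rule, and the synthetic data are all assumptions for illustration.

```python
import numpy as np

def sparse_outlier_regression(A, y, tau=3.0, max_iter=50):
    """Illustrative sketch (not AROSI): estimate x in y = A x + s + n, where s is
    a sparse outlier vector. Alternates between (1) least squares on the points
    currently flagged as inliers and (2) flagging as outliers the points whose
    residual magnitude exceeds tau (a hard-thresholding, l0-style support step)."""
    n = len(y)
    inliers = np.ones(n, dtype=bool)        # start by treating every point as an inlier
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        # (1) refit the regression coefficients on the current inlier set
        x, *_ = np.linalg.lstsq(A[inliers], y[inliers], rcond=None)
        # (2) re-estimate the outlier support from the residual magnitudes
        residuals = y - A @ x
        new_inliers = np.abs(residuals) <= tau
        if np.array_equal(new_inliers, inliers):   # support stopped changing
            break
        inliers = new_inliers
    s = np.where(inliers, 0.0, residuals)          # estimated sparse outlier vector
    return x, s, inliers

# Usage on synthetic data with a few gross outliers (all values are assumptions)
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.05 * rng.normal(size=100)
y[:5] += 10.0                                      # inject 5 large outliers
x_hat, s_hat, inlier_mask = sparse_outlier_regression(A, y)
print("estimated coefficients:", np.round(x_hat, 3))
print("detected outliers:", np.where(~inlier_mask)[0])
```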
