Abstract

Sparse signal models are used in many signal processing applications. Estimating the sparsest coefficient vector in these models is a combinatorial problem, so efficient, often suboptimal strategies have to be used. Fortunately, under certain conditions on the model, several algorithms can be shown to efficiently compute near-optimal solutions. In this paper, we study one of these methods, the so-called Iterative Hard Thresholding algorithm. While this method has strong theoretical performance guarantees whenever certain conditions hold, empirical studies show that its performance degrades significantly whenever these conditions fail; in this regime, the algorithm also often fails to converge. Because we are interested in applying the method to real-world problems, in which it is generally not known whether the theoretical conditions are satisfied, we suggest a simple modification that guarantees the convergence of the method even in this regime. With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches while showing similar performance. Moreover, the modified algorithm retains theoretical performance guarantees similar to those of the original algorithm.
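
The Iterative Hard Thresholding algorithm discussed in the abstract alternates a gradient step on the least-squares data-fidelity term with hard thresholding to the K largest-magnitude coefficients. The sketch below is a rough illustration of this basic iteration only; the function name `iht`, the fixed heuristic step size, and the fixed iteration count are assumptions made for the example and do not reproduce the convergence-guaranteeing modification proposed in the paper.

```python
import numpy as np

def iht(A, y, K, mu=None, n_iter=200):
    """Basic Iterative Hard Thresholding sketch (illustrative, not the paper's variant).

    Iterates x <- H_K(x + mu * A^T (y - A x)), where H_K keeps the K
    largest-magnitude entries of its argument.
    """
    m, n = A.shape
    x = np.zeros(n)
    if mu is None:
        # Heuristic fixed step size (assumption); the paper's modification
        # instead controls the step to guarantee convergence.
        mu = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        # Gradient step on the least-squares data-fidelity term.
        x = x + mu * A.T @ (y - A @ x)
        # Hard thresholding: zero out all but the K largest-magnitude entries.
        small = np.argsort(np.abs(x))[:-K]
        x[small] = 0.0
    return x

# Toy usage: recover a K-sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
m, n, K = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true
x_hat = iht(A, y, K)
print(np.linalg.norm(x_hat - x_true))
```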
