Abstract

Finding sparse representations of signals is an important problem in many application domains. Unfortunately, when the signal dictionary is overcomplete, finding the sparsest representation is NP-hard without some prior knowledge of the solution. However, suppose that we have access to such information. Is it possible to demonstrate any performance bounds in this restricted setting? We examine this question with respect to algorithms that minimize general ℓp-norm-like diversity measures. Using randomized dictionaries, we analyze performance probabilistically under two conditions. First, when 0 ≤ p ≤ 1, we quantify (almost surely) the number and quality of every local minimum. Next, for the p = 1 case, we extend the deterministic results of D.L. Donoho and M. Elad (see Proc. Nat. Acad. Sci., vol. 100, no. 5, 2003) by deriving explicit confidence intervals for a theoretical equivalence bound, under which the minimum ℓ1-norm solution is guaranteed to equal the maximally sparse solution. These results elucidate our previous empirical studies applying ℓp measures to basis selection tasks.
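
To make the equivalence result concrete, the following sketch (not from the paper; dictionary size, sparsity level, and solver are illustrative assumptions) builds a random overcomplete dictionary, generates a maximally sparse solution, and checks whether minimum ℓ1-norm recovery, posed as a linear program, returns that same solution.

```python
# Hypothetical illustration: does the minimum l1-norm solution recover a
# sparse vector over a random overcomplete dictionary? (Settings chosen
# for demonstration only, not taken from the paper.)
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                      # measurements, dictionary atoms, sparsity

A = rng.standard_normal((m, n))          # random overcomplete dictionary (n > m)
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)     # k-sparse ground-truth coefficients
b = A @ x0                               # observed signal

# Basis pursuit:  min ||x||_1  subject to  A x = b,
# written as a linear program with x = u - v, u >= 0, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("l1 solution equals the sparse solution:",
      np.allclose(x_hat, x0, atol=1e-6))
```

With sparsity k well below the equivalence threshold for this dictionary, the printed check typically succeeds; increasing k past that threshold makes recovery fail with growing probability, which is the regime the paper's confidence intervals characterize.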
