Abstract

Random linear mappings play a large role in modern signal processing and machine learning. For example, multiplication by a Gaussian matrix can preserve the geometry of a set while reducing the dimension. Non-Gaussian random mappings are attractive in practice for several reasons, including reduced computational cost. On the other hand, mappings of interest often have heavier tails than Gaussian ones, which can lead to worse performance, i.e., less accurate preservation of the geometry of the set. In the sub-gaussian case, the size of the tail is measured by the sub-gaussian parameter, but the dependence on this parameter has not been fully understood. We present the optimal tail dependence on the sub-gaussian parameter and prove it through a new version of Bernstein's inequality. We also illustrate popular applications whose theoretical guarantees can be improved by our results.
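The geometry-preservation claim above is in the spirit of Johnson–Lindenstrauss-type embeddings. As a rough illustration only, and not the construction or analysis of the paper, the sketch below projects a small point set with a scaled Gaussian matrix and measures how well pairwise distances are preserved; the dimensions, point set, and scaling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n points in R^N mapped down to R^m (assumptions).
n, N, m = 50, 1000, 100
X = rng.standard_normal((n, N))          # an arbitrary point set (assumption)

# Scaled Gaussian mapping: rows are i.i.d. N(0, 1/m) vectors.
A = rng.standard_normal((m, N)) / np.sqrt(m)
Y = X @ A.T                              # images in the reduced dimension

def pairwise_dists(Z):
    """Euclidean distances between all pairs of rows of Z."""
    sq = np.sum(Z**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.sqrt(np.maximum(D2, 0.0))

D_before = pairwise_dists(X)
D_after = pairwise_dists(Y)
mask = ~np.eye(n, dtype=bool)            # ignore zero self-distances
distortion = np.max(np.abs(D_after[mask] / D_before[mask] - 1.0))
print(f"worst relative distortion of pairwise distances: {distortion:.3f}")
```

Swapping the Gaussian entries for heavier-tailed sub-gaussian ones (e.g., sparse or Bernoulli entries) is often cheaper computationally, but, as the abstract notes, the resulting distortion guarantees then depend on the sub-gaussian parameter.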
