Abstract

Big data can be a blessing: with very large training data sets it becomes possible to perform complex learning tasks with unprecedented accuracy. Yet, this improved performance comes at the price of enormous computational challenges. Thus, one may wonder: Is it possible to leverage the information content of huge data sets while keeping computational resources under control? Can this also help solve some of the privacy issues raised by large-scale learning? This is the ambition of compressive learning, where the data set is massively compressed before learning. Here, a "sketch" is first constructed by computing carefully chosen nonlinear random features [e.g., random Fourier (RF) features] and averaging them over the whole data set. Parameters are then learned from the sketch, without access to the original data set. This article surveys the current state of the art in compressive learning, including the main concepts and algorithms, their connections with established signal processing methods, existing theoretical guarantees on both information preservation and privacy preservation, and important open problems. For an extended version of this article that contains additional references and more in-depth discussions on a variety of topics, see [1].
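To make the sketching step concrete, the following is a minimal illustrative sketch (not the authors' reference implementation) of how a data set could be compressed with random Fourier features: random frequencies are drawn, nonlinear features are computed for each data point, and the features are averaged over the whole data set. The function name, bandwidth parameter, and feature dimension below are assumptions chosen for illustration.

```python
import numpy as np

def rf_sketch(X, m, sigma=1.0, rng=None):
    """Compress a data set X of shape (n, d) into a sketch of length 2*m.

    m     : number of random frequencies drawn (illustrative choice)
    sigma : bandwidth of the Gaussian kernel the features approximate
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Draw random frequencies; Gaussian frequencies correspond to an RBF kernel.
    Omega = rng.normal(scale=1.0 / sigma, size=(d, m))
    proj = X @ Omega  # (n, m) random projections of the data
    # Nonlinear random Fourier features: cosine and sine components.
    features = np.concatenate([np.cos(proj), np.sin(proj)], axis=1)
    # The sketch is the empirical average of the features over the data set.
    return features.mean(axis=0)  # length 2*m vector
```

Learning then proceeds from this fixed-size vector alone, without revisiting the original data points.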
