Abstract

During the past decade, compressed sensing has delivered significant advances in the theory and application of measuring and compressing data. Consider capturing a 10-megapixel image with a digital camera. Emailing an image of this size requires an unnecessary amount of storage space and bandwidth. Instead, users employ a standard digital compression scheme, such as JPEG, to represent the image as a 64-kB file. The compressed image is completely recognizable even though its dimension is a tiny fraction of the original 10 million. Compressed sensing takes this mathematical phenomenon one step further. Is it possible to capture the pertinent information, such as the 64-kB image, without first measuring the full 10 million pixel values? If so, how should we perform the measurements? If we capture the important information, can we still reconstruct the image from this limited number of observations? Compressed sensing exploded in 2004, when Donoho (1, 2) and Candès and Tao (3) definitively answered these questions by incorporating randomness into the measurement process. Because engineering a truly random process is impossible, a major open problem in compressed sensing is the search for deterministic methods for sparse signal measurement that capture the relevant information in the signal and permit accurate reconstruction. In PNAS, Monajemi et al. (4) provide a major step forward in understanding the potential of deterministic measurement matrices in compressed sensing.
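
To make the questions above concrete, here is a minimal sketch of the compressed sensing recovery problem in miniature: a length-n signal with only k nonzero entries is measured through a random Gaussian matrix with m << n rows, the kind of random ensemble analyzed by Donoho and Candès-Tao, and then reconstructed from the m measurements alone. The dimensions n, m, and k are toy stand-ins for the 10-megapixel example, and orthogonal matching pursuit is used only as one standard recovery algorithm; it is not the specific method of refs. 1-4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes: ambient dimension, number of measurements, sparsity.
n, m, k = 400, 100, 8

# A k-sparse signal: only k of its n entries are nonzero.
x = np.zeros(n)
true_support = rng.choice(n, size=k, replace=False)
x[true_support] = rng.standard_normal(k)

# Random Gaussian measurement matrix with m << n rows.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# The m compressed measurements -- far fewer than the n "pixel values."
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse signal."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the signal on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With these sizes, the 8-sparse signal is typically recovered essentially exactly from only 100 of its 400 "dimensions." Whether carefully constructed deterministic matrices can stand in for the random matrix A is the question addressed by Monajemi et al. (4).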
