Abstract

A stationary stochastic geometric model is proposed for analyzing the data compression method used in one-bit compressed sensing. The data set is an unconstrained stationary set, for instance all of $\mathbb{R}^{n}$ or a stationary Poisson point process in $\mathbb{R}^{n}$. It is compressed using a stationary and isotropic Poisson hyperplane tessellation, assumed independent of the data. That is, each data point is compressed using one bit with respect to each hyperplane, namely the side of the hyperplane on which it lies. This model allows one to determine how the intensity of the hyperplanes must scale with the dimension $n$ to ensure sufficient separation of different data by the hyperplanes as well as sufficient proximity of the data compressed together. The results have direct implications in compressed sensing and in source coding.
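
To make the compression step concrete, the following minimal sketch simulates it: a stationary, isotropic Poisson hyperplane process is restricted to a ball of radius R, and each data point is encoded by one sign bit per hyperplane. The normalisation of the intensity `lam`, the window radius `R`, and the helper names `sample_hyperplanes` and `compress` are illustrative assumptions, not notation taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hyperplanes(n, lam, R):
    """Sample the hyperplanes of a stationary, isotropic Poisson process
    that hit the ball of radius R centred at the origin.

    Each hyperplane is {x : <u, x> = t}, with direction u uniform on the
    unit sphere and offset t uniform on [-R, R]; their number is Poisson
    with mean lam * 2R (an illustrative normalisation of the intensity).
    """
    m = rng.poisson(lam * 2 * R)
    u = rng.standard_normal((m, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # unit normal vectors
    t = rng.uniform(-R, R, size=m)                 # signed offsets from 0
    return u, t

def compress(x, u, t):
    """One bit per hyperplane: which side of the hyperplane x lies on."""
    return np.sign(u @ x - t).astype(int)

# Nearby data points tend to receive identical codes; distant ones tend to
# be separated by at least one hyperplane and hence receive different codes.
n, lam, R = 20, 5.0, 10.0
u, t = sample_hyperplanes(n, lam, R)
x = rng.standard_normal(n)
y = x + 0.01 * rng.standard_normal(n)
print(np.sum(compress(x, u, t) != compress(y, u, t)), "bits differ")
```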

Highlights

  • One-bit compressed sensing is a method of signal recovery from a sequence of measurements contained in {−1, 1}

  • As in one-bit compressed sensing, quality can be measured by the error in signal recovery, which can be kept small whenever the collection of hyperplanes tessellates the signal space into cells small enough that all signals within a single cell are close in Euclidean distance (see the sketch after this list)

  • While data in most directions will be separated from the typical data point, there is a set of directions, whose measure decreases as the dimension increases, in which the compression remains identical and in which most of the volume of data compressed like the typical data point lies
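
The second highlight ties recovery quality to the diameter of the tessellation cells. Continuing the previous sketch (it reuses the `sample_hyperplanes` and `compress` helpers defined there), one can probe this criterion empirically: among sampled signals that receive the same code as a reference signal, the largest pairwise distance is a crude proxy for the diameter of that signal's cell. All numerical choices below are illustrative.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(1)
n, lam, R = 10, 4.0, 5.0
u, t = sample_hyperplanes(n, lam, R)   # helpers from the previous sketch

x0 = rng.uniform(-1.0, 1.0, size=n)
code0 = compress(x0, u, t)

# Sample candidate signals in [-1, 1]^n and keep those whose one-bit code
# coincides with that of x0, i.e. those lying in the same tessellation cell.
samples = rng.uniform(-1.0, 1.0, size=(5000, n))
same_cell = [x for x in samples if np.array_equal(compress(x, u, t), code0)]

# The largest pairwise distance among them is a lower bound on (and a crude
# proxy for) the diameter of the cell of x0, intersected with the cube.
if len(same_cell) >= 2:
    diam = max(np.linalg.norm(a - b) for a, b in combinations(same_cell, 2))
    print(f"{len(same_cell)} samples share the cell; max distance ~ {diam:.3f}")
else:
    print("too few samples landed in the cell of x0 to estimate its diameter")
```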


Summary

Introduction and motivations

One-bit compressed sensing is a method of signal recovery from a sequence of measurements contained in {−1, 1}. While data in most directions will be separated from the typical data point, there is a set of directions, whose measure decreases as the dimension increases, in which the compression remains identical and in which most of the volume of data compressed like the typical data point lies. Considering this low-distortion criterion, we show that, for α = 1, there is a threshold for ρ above which the expected value of the volume in question goes to zero and below which it approaches infinity. This last representation of data sparsity is very specific, and the general question of one-bit compression based on stationary Poisson hyperplanes for sparse data is far from being solved by the observations on this specific case.
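
Under the illustrative normalisation of the first sketch (not necessarily the paper's parametrisation by α and ρ), two points are compressed together exactly when no hyperplane separates them, and the number of separating hyperplanes is Poisson with mean proportional to the intensity times the distance between the points. The short check below compares the empirical probability of identical codes with that prediction; since the proportionality constant E|⟨u, e⟩| shrinks roughly like 1/√n, it also hints at why the intensity must grow with the dimension to keep data separated.

```python
from math import exp, gamma, pi, sqrt

import numpy as np

rng = np.random.default_rng(2)

def prob_same_code(n, lam, r, R=5.0, trials=20_000):
    """Empirical probability that two points at distance r receive equal
    codes (i.e. are separated by no hyperplane), using the same
    illustrative construction as in the first sketch."""
    x = np.zeros(n)
    y = np.zeros(n)
    y[0] = r                    # two points at distance r, inside the window
    hits = 0
    for _ in range(trials):
        m = rng.poisson(lam * 2 * R)
        u = rng.standard_normal((m, n))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        t = rng.uniform(-R, R, size=m)
        separated = np.sign(u @ x - t) != np.sign(u @ y - t)
        hits += not separated.any()
    return hits / trials

n, lam, r = 10, 5.0, 1.0
# The number of separating hyperplanes is Poisson with mean lam * r * E|<u, e>|,
# where E|<u, e>| = Gamma(n/2) / (sqrt(pi) * Gamma((n+1)/2)) ~ sqrt(2/(pi*n))
# for u uniform on the unit sphere and e a fixed unit vector.
c_n = gamma(n / 2) / (sqrt(pi) * gamma((n + 1) / 2))
print("empirical :", prob_same_code(n, lam, r))
print("predicted :", exp(-lam * r * c_n))
```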

Preliminaries and notation
Poisson hyperplane tessellations
Zero cell
Typical cell
Palm distribution
Results
Separation of two different data
Volume of data compressed together
Farthest distance between two data points compressed together
Summary
Dimension reduction
One-bit compressed sensing comments
Channel coding
Loss-less one-bit compression source coding
Lossy one-bit compression source coding
Why isotropic Poisson hyperplanes