Abstract

The transform in image coding aims to remove redundancy among data coefficients so that they can be independently coded, and to capture most of the image information in a few coefficients. While the second goal ensures that discarding coefficients will not lead to large errors, the first goal ensures that simple (point-wise) coding schemes can be applied to the retained coefficients with optimal results. Principal Component Analysis (PCA) provides the best independence and data compaction for Gaussian sources. However, non-linear generalizations of PCA may provide better performance for more realistic non-Gaussian sources. Principal Polynomial Analysis (PPA) generalizes PCA by removing the non-linear relations among components using regression, and has been analytically proven to outperform PCA in dimensionality reduction. We explore here the suitability of reversible PPA for lossless compression of hyperspectral images. We find that reversible PPA performs worse than PCA due to the high impact of rounding errors and the amount of side information. We then propose two generalizations: Backwards PPA, where polynomial estimations are performed in reverse order, and Double-Sided PPA, where more than a single dimension is used in the predictions. Both yield better coding performance than canonical PPA and are comparable to PCA.
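To make the PPA idea concrete, the following is a minimal sketch of a forward PPA-style transform: rotate the data onto the PCA basis, then remove each trailing component's polynomial dependence on the leading component via least-squares regression. This is an illustrative simplification (the `ppa_forward` function, the single-predictor regression, and the polynomial order are assumptions for demonstration; canonical PPA conditions each component on all preceding ones, and the reversible variant studied in the paper additionally involves rounding and side information).

```python
import numpy as np

def ppa_forward(X, order=2):
    """Illustrative sketch of a PPA-style transform (not the paper's
    reversible implementation): PCA rotation followed by polynomial
    residual computation for the trailing components."""
    # Center the data and rotate onto the PCA basis via SVD
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Y = Xc @ Vt.T  # PCA scores, columns ordered by explained variance

    coeffs = []
    for j in range(1, Y.shape[1]):
        # Fit a polynomial predicting component j from the first component
        # (canonical PPA uses all preceding components; one predictor here
        # keeps the sketch short)
        c = np.polyfit(Y[:, 0], Y[:, j], order)
        coeffs.append(c)
        # Keep only the residual after subtracting the polynomial estimate
        Y[:, j] -= np.polyval(c, Y[:, 0])
    # mu, Vt and coeffs are the side information needed to invert the transform
    return Y, (mu, Vt, coeffs)
```

On data with a curved (non-Gaussian) dependence between dimensions, the residual component after polynomial removal carries much less energy than the raw coordinate, which is the compaction advantage PPA exploits over a purely linear rotation.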
