Abstract

In this paper, we describe the Pixel Sampling Clustering Technique (PSCT), a data-driven sampling procedure for reducing pixel sets. We view the pixels in an image as a highly redundant 3D space, which we also refer to as our color model. Our method aims to retain a relevant sample of the data so that it can act as a new, smaller, and hence more efficient, color model. PSCT applies a pair of fast clustering algorithms in tandem: first BIRCH and then DBSCAN, keeping the most densely represented colors. We cluster the resulting color model and use the labels to segment images. We also complement the sampling method with a refinement algorithm intended to improve color representation. We show how to reconstruct images using our reduced color model, and we show that the reconstructed images retain enough information to perform image-related learning tasks with almost the same accuracy as the original images, but with only a small fraction of the data. We test our sampling method on three image-related supervised and unsupervised tasks and compare it with state-of-the-art methods. For our experiments, we use two image datasets: MIT's Vision Texture dataset and Berkeley's BSD500.
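The abstract does not give PSCT's parameters or exact pipeline, but the two-stage idea (BIRCH to compress the pixel cloud into subcluster centroids, then DBSCAN to keep only densely supported colors) can be sketched with scikit-learn. The function name and all threshold values below are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.cluster import Birch, DBSCAN


def reduce_color_model(pixels, birch_threshold=25.0, eps=10.0, min_samples=3):
    """Illustrative sketch of a BIRCH-then-DBSCAN color-model reduction.

    pixels: (n, 3) array of RGB values. Parameter values are assumptions,
    not taken from the paper.
    """
    # Stage 1: BIRCH with no global clustering step compresses the pixel
    # cloud into a much smaller set of subcluster centroids.
    birch = Birch(n_clusters=None, threshold=birch_threshold)
    birch.fit(pixels)
    centroids = birch.subcluster_centers_

    # Stage 2: DBSCAN over the centroids; points labeled -1 are treated as
    # sparsely represented colors and discarded.
    if len(centroids) > min_samples:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centroids)
        dense = centroids[labels != -1]
        if len(dense):
            return dense
    return centroids  # fall back if everything was flagged as sparse


if __name__ == "__main__":
    # Synthetic "image": two tight color blobs plus a handful of stray pixels.
    rng = np.random.default_rng(0)
    pix = np.vstack([
        rng.normal(50, 2, (500, 3)),
        rng.normal(200, 2, (500, 3)),
        rng.uniform(0, 255, (10, 3)),
    ])
    model = reduce_color_model(pix)
    print(model.shape)  # far fewer rows than the 1010 input pixels
```

Pixels of the original image can then be snapped to their nearest color in the reduced model to reconstruct the image, which is the reconstruction step the abstract refers to.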
