Abstract

Many computer vision problems involve synthesis and classification models that map images from an observed source space to a target space. A popular and effective recent approach transforms images from both the source and target spaces into a single shared sparse domain, in which a synthesis model is established. Motivated by this technique, we seek an effective and robust linear function that maps the sparse representations of images from the source space to the target space, and simultaneously develop a linear classifier on the resulting coupled space under both supervised and semi-supervised learning. To capture the sparse structure shared by each class, we represent this mapping as a linear transformation with a sparsity constraint. The proposed method is evaluated on several benchmark image datasets for low-resolution face/digit classification and super-resolution, and the experimental results verify its effectiveness.
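
To make the pipeline concrete, below is a minimal sketch of the general idea: sparse-code the source and target data over learned dictionaries, fit a sparse linear mapping between the two sets of codes, and train a linear classifier on the coupled representation. This is an illustrative assumption on my part, not the paper's implementation; the synthetic data, scikit-learn estimators, dictionary sizes, and penalty weights are all placeholder choices.

```python
# Illustrative sketch only: synthetic data and scikit-learn components stand in
# for the paper's actual dictionaries, mapping, and classifier.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)

# Toy paired data: low-resolution (source) and high-resolution (target) features.
n, d_src, d_tgt, n_atoms = 200, 64, 256, 32
X_src = rng.standard_normal((n, d_src))
X_tgt = rng.standard_normal((n, d_tgt))
y = rng.integers(0, 2, size=n)                      # binary class labels

# 1) Learn a dictionary and sparse codes for each space.
dl_src = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=20,
                            transform_algorithm="lasso_lars", random_state=0)
A_src = dl_src.fit_transform(X_src)                 # sparse codes, source space
dl_tgt = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=20,
                            transform_algorithm="lasso_lars", random_state=0)
A_tgt = dl_tgt.fit_transform(X_tgt)                 # sparse codes, target space

# 2) Fit a sparse linear mapping so that A_src @ W approximates A_tgt;
#    the l1 penalty on W stands in for the sparsity constraint on the mapping.
mapper = Lasso(alpha=0.01, max_iter=5000)
mapper.fit(A_src, A_tgt)
A_tgt_hat = mapper.predict(A_src)                   # synthesized target codes

# 3) Train a linear classifier on the coupled (source + mapped target) codes.
coupled = np.hstack([A_src, A_tgt_hat])
clf = LogisticRegression(max_iter=1000).fit(coupled, y)
print("training accuracy:", clf.score(coupled, y))
```

In this sketch the coupled space is simply the concatenation of the source codes with their mapped target codes; the paper's coupling and its semi-supervised extension would replace steps 2 and 3 with its own joint formulation.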
