Abstract

This paper proposes a unified approach to learning in environments where patterns live in variable-dimension domains, which naturally includes the case of missing features. The proposal represents the environment by pointwise constraints, which are shown to model naturally the pattern relationships arising in information retrieval, computer vision, and related fields. This interpretation of learning captures the genuinely different notions of similarity that come from content, at different dimensions, and from pattern links. As a result, functions that process real-valued features and functions that operate on symbolic entities are learned within a single regularization framework, which can also be expressed with the mathematical and algorithmic apparatus of kernel machines. Interestingly, in the extreme cases in which only the content or only the links are available, the theory reduces to classic kernel machines or to graph regularization, respectively. Experimental results on artificial and real-world benchmarks provide clear evidence of the remarkable improvements obtained when both types of similarity are exploited.
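To make the two extreme cases concrete, the sketch below combines a content kernel with a link-based graph Laplacian penalty in the style of Laplacian-regularized least squares. This is an illustrative reconstruction, not the paper's actual formulation: the function names, the RBF kernel choice, and the regularization weights `lam_content` and `lam_links` are assumptions for the example. Setting `lam_links = 0` recovers classic kernel ridge regression (content only), while the Laplacian term alone enforces smoothness over the link graph.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Content similarity: Gaussian (RBF) kernel over real-valued features.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def graph_laplacian(W):
    # Link similarity: unnormalized Laplacian L = D - W of the link graph.
    return np.diag(W.sum(axis=1)) - W

def fit(K, L, y, lam_content=0.1, lam_links=0.1):
    # Minimize (1/n)||y - K a||^2 + lam_content a'K a + lam_links (K a)'L(K a).
    # Setting the gradient to zero gives the linear system below for a.
    n = len(y)
    A = K + n * lam_content * np.eye(n) + n * lam_links * (L @ K)
    return np.linalg.solve(A, y)
```

Usage: with `alpha = fit(K, L, y)`, predictions on the training points are `K @ alpha`; the link term pulls linked patterns toward the same output even when their feature content differs.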
