Abstract

Many regression tasks encounter an asymmetric distribution of information between the training and testing phases: additional information available during training, the so-called privileged information (PI), is often inaccessible at test time. In practice, the privileged information in training data may be expressed in different formats, such as continuous, ordinal, or binary values. However, most existing learning using privileged information (LUPI) paradigms deal primarily with the continuous form of PI and cannot handle PI in these varied forms, which motivates this research. In this paper, we therefore propose a unified framework that systematically addresses the three aforementioned forms of privileged information. The proposed V-SVR+ method integrates continuous, ordinal, and binary PI into the learning process of support vector regression (SVR) via three losses. For continuous privileged information, we define a linear correcting (slack) function in the privileged information space to estimate the slack variables of the standard SVR method using privileged information. For ordinal relations in the privileged information, we first rank the privileged information and then regard this ordinal privileged information as auxiliary information used in the learning process of the SVR model. For binary (Boolean) privileged information, we infer a probabilistic dependency between the privileged information and the labels from the summarized privileged information knowledge, transfer this knowledge into constraints, and form a constrained optimization problem. We evaluate the proposed method in three applications: music emotion recognition from songs, aided by implicit information about music elements judged by composers; multiple object recognition from images, aided by implicit information about object importance conveyed by lists of manually annotated image tags; and photo aesthetic assessment enhanced by high-level aesthetic attributes hidden in photos. Experimental results demonstrate that the proposed methods are superior to the classic learning paradigm when solving these practical problems.
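To make the role of the correcting (slack) function concrete for the continuous-PI case, the following is a minimal sketch of an SVR+-style primal problem, written in the spirit of the standard LUPI formulation for epsilon-SVR; the notation below is illustrative and is not taken from the paper itself. Here x_i denotes the ordinary features, x_i^* the privileged features available only at training time, and the two correcting functions replace the slack variables of standard epsilon-SVR:

\min_{w,\, b,\, w^{*},\, b^{*},\, \tilde{w}^{*},\, \tilde{b}^{*}} \;
\frac{1}{2}\lVert w \rVert^{2}
+ \frac{\gamma}{2}\bigl(\lVert w^{*} \rVert^{2} + \lVert \tilde{w}^{*} \rVert^{2}\bigr)
+ C \sum_{i=1}^{n} \Bigl[ \bigl(\langle w^{*}, \phi^{*}(x_i^{*}) \rangle + b^{*}\bigr)
+ \bigl(\langle \tilde{w}^{*}, \phi^{*}(x_i^{*}) \rangle + \tilde{b}^{*}\bigr) \Bigr]

\text{s.t.}\quad
y_i - \bigl(\langle w, \phi(x_i) \rangle + b\bigr) \le \varepsilon + \langle w^{*}, \phi^{*}(x_i^{*}) \rangle + b^{*},

\bigl(\langle w, \phi(x_i) \rangle + b\bigr) - y_i \le \varepsilon + \langle \tilde{w}^{*}, \phi^{*}(x_i^{*}) \rangle + \tilde{b}^{*},

\langle w^{*}, \phi^{*}(x_i^{*}) \rangle + b^{*} \ge 0,
\qquad
\langle \tilde{w}^{*}, \phi^{*}(x_i^{*}) \rangle + \tilde{b}^{*} \ge 0,
\qquad i = 1, \dots, n.

At test time only the regression function f(x) = \langle w, \phi(x) \rangle + b is evaluated, so the privileged features x^* are never required outside training.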
