Abstract

In many real-world settings, the data to analyze is heterogeneous, consisting of (say) images, text, and video. An elegant approach to such data is to project all of it into a common space so that standard learning methods can be used. However, typical projection methods make strong assumptions, such as the multi-view assumption (each datum in one data set is always associated with a single datum in the other view) or that the multiple data sets share an overlapping feature space. Such strong assumptions limit the data to which these methods can be applied. We present a framework for projecting heterogeneous data from multiple data sets into a common lower-dimensional space using a rich range of guidance, without assuming any overlap between the instances or features of different data sets. Our framework can specify inter-dataset guidance (between instances in different data sets) and intra-dataset guidance (between instances in the same data set), both of which can be positively or negatively weighted. We show that our approach offers substantially more flexibility than related methods such as Canonical Correlation Analysis (CCA) and Locality Preserving Projections (LPP), and illustrate its superior performance on supervised and unsupervised learning problems.

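To make the idea concrete, the sketch below shows one plausible way to realize this kind of guided common-space projection as an LPP-style graph-embedding problem. It is not the paper's actual algorithm: the guidance matrix W, the block-diagonal data matrix Z, the regularization, and all sizes are illustrative assumptions. Positive entries in W pull a pair of instances together in the shared space, negative entries push them apart, and pairs may come from the same data set or from different data sets with disjoint features.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Two heterogeneous data sets with disjoint feature spaces (sizes are illustrative).
n1, d1 = 40, 10   # e.g. image features
n2, d2 = 30, 6    # e.g. text features
X1 = rng.standard_normal((n1, d1))
X2 = rng.standard_normal((n2, d2))

# Guidance matrix W over all n1 + n2 instances (an assumed encoding):
#   W[i, j] > 0  -> instances i and j should be close in the common space
#   W[i, j] < 0  -> instances i and j should be pushed apart
n = n1 + n2
W = np.zeros((n, n))
W[0, n1] = W[n1, 0] = 1.0          # inter-dataset guidance: X1[0] <-> X2[0]
W[1, 2] = W[2, 1] = 0.5            # intra-dataset guidance: X1[1] <-> X1[2]
W[3, n1 + 5] = W[n1 + 5, 3] = -0.5 # negative guidance: X1[3] vs X2[5]

# Block-diagonal data matrix so each data set gets its own projection,
# even though the feature spaces do not overlap.
Z = np.zeros((n, d1 + d2))
Z[:n1, :d1] = X1
Z[n1:, d1:] = X2

# LPP-style objective: minimise trace(P^T Z^T L Z P) subject to
# P^T Z^T D Z P = I, solved as a generalised eigenvalue problem.
D = np.diag(np.abs(W).sum(axis=1))
L = D - W
A = Z.T @ L @ Z
B = Z.T @ D @ Z + 1e-6 * np.eye(d1 + d2)  # small ridge keeps B positive definite

k = 2  # dimension of the common space
vals, vecs = eigh(A, B)
P = vecs[:, :k]          # stacked projections [P1; P2]
P1, P2 = P[:d1], P[d1:]

# Project both data sets into the shared 2-D space.
Y1, Y2 = X1 @ P1, X2 @ P2
print(Y1.shape, Y2.shape)  # (40, 2) (30, 2)
```

The block-diagonal construction is the key design choice in this sketch: a single eigenproblem yields a separate projection matrix per data set, so heterogeneous instances can be embedded into one space without ever sharing features.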