Abstract
Dimensionality reduction (DR) is an essential preprocessing step in machine learning; depending on whether discriminative, neighborhood, or correlation information is exploited, different DR methods result. In this work, we design novel DR methods that employ another form of information: the maximal contradiction on Universum data, i.e., data that belong to the same domain as the task at hand but do not belong to any class of the training data. Classification and clustering algorithms have been shown to achieve favorable improvements with the help of Universum data, and such learning methods are referred to as Universum learning. We propose two new dimensionality reduction methods, termed Improved CCA (ICCA) and Improved DCCA (IDCCA), that simultaneously exploit the training data and the Universum data. Both can be expressed as generalized eigenvalue problems and solved by eigenvalue computation. Experiments on both synthetic and real-world datasets show that the proposed DR methods obtain better performance.
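The abstract notes that both proposed methods reduce to generalized eigenvalue problems. As context, here is a minimal sketch of how standard CCA (the baseline the paper improves on, not the ICCA/IDCCA objectives themselves) can be posed as a generalized eigenvalue problem and solved by eigenvalue computation; the synthetic two-view data and the ridge regularizer `reg` are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, p, q = 200, 3, 3

# Two views that share a one-dimensional latent signal z (synthetic data).
z = rng.standard_normal((n, 1))
X = z + 0.5 * rng.standard_normal((n, p))
Y = z + 0.5 * rng.standard_normal((n, q))

# Center each view and form the (cross-)covariance matrices.
X = X - X.mean(axis=0)
Y = Y - Y.mean(axis=0)
Cxx = X.T @ X / n
Cyy = Y.T @ Y / n
Cxy = X.T @ Y / n

# CCA as a generalized eigenvalue problem  A w = rho * B w:
#   A = [[0, Cxy], [Cyx, 0]],  B = blockdiag(Cxx, Cyy).
# A small ridge term keeps B positive definite for eigh.
reg = 1e-6
A = np.block([[np.zeros((p, p)), Cxy],
              [Cxy.T, np.zeros((q, q))]])
B = np.block([[Cxx + reg * np.eye(p), np.zeros((p, q))],
              [np.zeros((q, p)), Cyy + reg * np.eye(q)]])

vals, vecs = eigh(A, B)       # symmetric-definite generalized eigensolver
rho = vals[-1]                # largest eigenvalue = top canonical correlation
wx, wy = vecs[:p, -1], vecs[p:, -1]  # the paired projection directions
```

With a strongly shared latent signal, the top eigenvalue `rho` comes out close to 1, confirming that eigenvalue computation recovers the canonical correlation.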