Abstract
This paper proposes a novel method, Real Label Partial Least Squares (RL-PLS), for cross-modal retrieval. Previous works simply treat texts and images as the two modalities in PLS. In RL-PLS, since the class label is more directly related to the semantics, we instead take the class label as an assistant modality. Specifically, we build two KPLS models and project both images and texts into the label space, where the similarity between images and texts can be measured more accurately. Furthermore, unlike traditional methods, we do not restrict the label indicator values to binary values; in RL-PLS they are set to real values composed of two parts: the sign (positive or negative) indicates the sample's class, while the absolute value encodes the local structure within the class. In this way, the discriminative ability of RL-PLS is greatly improved. To demonstrate the effectiveness of RL-PLS, experiments are conducted on two cross-modal retrieval tasks (Wiki and Pascal VOC 2007), on which competitive results are obtained.
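The projection-then-match idea in the abstract can be illustrated with a minimal sketch. All data here is synthetic, ridge regression stands in for the paper's KPLS models, and the inverse-distance-to-centroid weighting of the label magnitudes is an assumed approximation of the "local structure" term, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): 2 classes, paired image/text features.
n, d_img, d_txt, n_cls = 40, 8, 6, 2
y = rng.integers(0, n_cls, size=n)                 # class of each paired sample
X_img = rng.normal(size=(n, d_img)) + y[:, None]
X_txt = rng.normal(size=(n, d_txt)) + y[:, None]

# Real-valued label indicators: the sign encodes the class (positive for
# the sample's own class, negative otherwise), while the magnitude encodes
# local structure within the class -- here approximated by closeness to the
# class centroid (an assumption; the paper's exact weighting is not given).
L = -np.ones((n, n_cls))
for c in range(n_cls):
    idx = y == c
    centroid = X_img[idx].mean(axis=0)
    dist = np.linalg.norm(X_img[idx] - centroid, axis=1)
    L[idx, c] = 1.0 / (1.0 + dist)                 # nearer centroid => larger value

# Stand-in for the two (K)PLS models: ridge regressions mapping each
# modality into the shared label space.
def fit_projection(X, L, lam=1e-3):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ L)

W_img = fit_projection(X_img, L)
W_txt = fit_projection(X_txt, L)

# Retrieval: cosine similarity between the two projections in label space.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

q = X_img[0] @ W_img                               # query: the first image
scores = [cosine(q, X_txt[i] @ W_txt) for i in range(n)]
best = int(np.argmax(scores))                      # index of best-matching text
```

On this separable toy data, texts from the query's class score higher on average than texts from the other class, which is the behaviour the label-space embedding is meant to produce.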