Abstract

Visual semantic embedding (VSE) aims to construct a joint embedding space for visual features and semantic information, so that relevant classes can be retrieved for a given image. However, VSE faces a computational challenge when the image-class data are large-scale and system processing power is limited. To speed up model training, many researchers resort to sampling strategies that involve only a small portion of the classes at each training step. However, these methods are heavily biased, especially when the sampling distribution deviates from the true data distribution. To retain the fidelity of VSE models, our algorithm adopts regular full sampling over all classes. We also devise two separate optimization strategies to reduce the time complexity and derive more effective update rules. Experimental results on four real-world datasets demonstrate that our approach not only converges much faster than state-of-the-art sampling models but also yields more accurate class retrieval.
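As a rough illustration of the trade-off the abstract describes (not the paper's algorithm), the sketch below contrasts a full softmax loss over all classes with a sampled-class variant for a VSE-style model that scores an image embedding against every class embedding by dot product. The array names, class count, and uniform negative sampling are all assumptions made for illustration.

```python
# Illustrative sketch only: full-class vs. sampled-class softmax loss for a
# VSE-style model. All names and sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim = 10_000, 128
class_embs = rng.normal(size=(num_classes, dim)) / np.sqrt(dim)  # one row per class
image_emb = rng.normal(size=dim)                                  # embedded image
true_class = 42

def full_softmax_loss(image_emb, class_embs, true_class):
    """Cross-entropy over ALL classes: unbiased, but O(num_classes) per step."""
    logits = class_embs @ image_emb
    logits -= logits.max()                       # numerical stability
    log_z = np.log(np.exp(logits).sum())
    return log_z - logits[true_class]

def sampled_softmax_loss(image_emb, class_embs, true_class, sample_size=100):
    """Cross-entropy over the true class plus uniformly sampled negatives:
    cheaper per step, but a biased estimate when the sampling distribution
    misrepresents the data (negatives may occasionally re-draw the true class;
    ignored in this sketch)."""
    negatives = rng.choice(len(class_embs), size=sample_size, replace=False)
    idx = np.concatenate(([true_class], negatives))
    logits = class_embs[idx] @ image_emb
    logits -= logits.max()
    log_z = np.log(np.exp(logits).sum())
    return log_z - logits[0]

print("full   :", full_softmax_loss(image_emb, class_embs, true_class))
print("sampled:", sampled_softmax_loss(image_emb, class_embs, true_class))
```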
