Abstract

In this work, we investigate lifelong learning from the viewpoint of heterogeneous multi-modal fusion. The main challenges are that a common representation across heterogeneous modalities must be learned persistently and that the classifier learned for each multi-modal task must be updated persistently. To address these challenges, we construct a multi-modal lifelong learning framework that handles consecutive multi-modal learning tasks and develop an efficient online dictionary learning algorithm to solve the resulting problem. Finally, we validate the approach experimentally on a challenging material recognition task and report promising results.
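The abstract only names the algorithmic ingredient, so the following Python sketch is a rough, hypothetical illustration of what an online multi-modal dictionary learner with a shared sparse code can look like; the class and function names, the two-modality setting, the ISTA sparse coder, and all parameters are assumptions for illustration and not the paper's actual method.

```python
import numpy as np


def soft_threshold(v, t):
    """Element-wise soft-thresholding operator used by ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def sparse_code(x, D, lam=0.1, n_iter=100):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with plain ISTA."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a


class OnlineMultiModalDictionary:
    """Hypothetical sketch: each modality m keeps its own dictionary D_m, a single
    sparse code `a` is inferred from the stacked observation, and the dictionaries
    are refreshed from running sufficient statistics, in the spirit of standard
    online dictionary learning."""

    def __init__(self, dims, n_atoms=64, lam=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.lam = lam
        self.D = [self._normalize(rng.standard_normal((d, n_atoms))) for d in dims]
        self.A = np.zeros((n_atoms, n_atoms))            # running sum of a a^T
        self.B = [np.zeros((d, n_atoms)) for d in dims]  # running sums of x_m a^T

    @staticmethod
    def _normalize(D):
        return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)

    def partial_fit(self, xs):
        """Consume one multi-modal sample xs = (x_1, x_2, ...) and update dictionaries."""
        x_stack = np.concatenate(xs)
        D_stack = np.vstack(self.D)
        a = sparse_code(x_stack, D_stack, self.lam)      # code shared across modalities
        self.A += np.outer(a, a)
        for m, x in enumerate(xs):
            self.B[m] += np.outer(x, a)
            self._update_dictionary(m)
        return a

    def _update_dictionary(self, m):
        """Block-coordinate update of each atom of modality m's dictionary."""
        D = self.D[m]
        for j in range(D.shape[1]):
            if self.A[j, j] < 1e-8:
                continue
            u = (self.B[m][:, j] - D @ self.A[:, j]) / self.A[j, j] + D[:, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)


# Toy usage on synthetic two-modality data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    model = OnlineMultiModalDictionary(dims=(32, 20), n_atoms=16)
    for _ in range(200):
        model.partial_fit((rng.standard_normal(32), rng.standard_normal(20)))
```

Because the dictionaries are updated from accumulated statistics rather than refit from scratch, samples from successive tasks can be streamed through `partial_fit`, which is the property a lifelong (online) learner needs.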
