Abstract

Self-paced learning (SPL) is a recently proposed methodology designed by mimicking the learning principle of humans and animals. A variety of SPL realization schemes have been designed for different computer vision and pattern recognition tasks and have been empirically demonstrated to be effective in these applications. However, the literature still lacks a theoretical understanding of SPL. To address this research gap, this study attempts to provide some new theoretical understanding of the SPL scheme. Specifically, we prove that the solution strategy of SPL accords with a majorization-minimization algorithm implemented on an implicit objective function. Furthermore, we find that the loss function contained in this implicit objective has a configuration similar to the non-convex regularized penalties (NCRP) known in statistics and machine learning. This connection inspires us to discover more intrinsic relationships between SPL regimes and NCRP forms, such as the smoothly clipped absolute deviation (SCAD), the logarithmic penalty (LOG), and the non-convex exponential penalty (EXP). The robustness of SPL can then be clearly explained. We also analyze the capability of SPL with respect to its easy loss-prior-embedding property and provide an insightful interpretation of the mechanism underlying the effectiveness of current SPL variations. Moreover, we design a group-partial-order loss prior, which is especially useful for weakly labeled large-scale data processing tasks. By applying SPL with this loss prior to the FCVID dataset, currently one of the largest manually annotated video datasets, our method achieves state-of-the-art performance over existing methods, which further supports the proposed theoretical arguments.
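
To make the SPL scheme referenced above concrete, the sketch below illustrates the classical alternating optimization for SPL with the hard (binary) self-paced regularizer, which is the standard formulation the abstract builds on: fixing the model, the optimal sample weight is v_i = 1 when the loss falls below the age parameter λ and 0 otherwise; fixing the weights, the model is re-fit on the selected "easy" samples, and λ is then grown. The linear least-squares learner, the loss, and the λ schedule here are illustrative assumptions, not the paper's specific realization or its group-partial-order loss prior.

```python
import numpy as np

def spl_linear_regression(X, y, lam=0.5, mu=1.3, n_rounds=10):
    """Minimal self-paced learning sketch with the hard regularizer.

    Alternates between (1) selecting easy samples whose loss is below the
    age parameter lam and (2) re-fitting the model on the selected samples.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_rounds):
        # Step 1: fix w, solve for sample weights v. For the hard regularizer
        # -lam * sum(v_i), the closed-form optimum is v_i = 1 if loss_i < lam else 0.
        losses = (X @ w - y) ** 2
        v = (losses < lam).astype(float)
        if v.sum() == 0:
            v[np.argmin(losses)] = 1.0  # ensure at least one sample is selected
        # Step 2: fix v, re-fit w by weighted least squares on the selected samples.
        w = np.linalg.lstsq(X * v[:, None], v * y, rcond=None)[0]
        # Step 3: grow the age parameter so harder samples enter in later rounds.
        lam *= mu
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=200)
    y[:20] += 5.0  # inject outliers that SPL should down-weight
    print(spl_linear_regression(X, y, lam=0.5))
```

The outliers injected in the usage example illustrate the robustness behavior discussed in the abstract: samples with large loss are excluded by the weight update until λ grows large enough, so they influence the fit only weakly, if at all.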
