Abstract

Model optimization and statistical inference play a central role in many applications of computational intelligence, data analytics, and computer vision. Traditional approaches are usually model-centric: even after a model is trained, one must still design suitable algorithms and hand-craft parameters for optimization and inference. Recently, discriminative learning has demonstrated its power for process-centric learning. By taking domain expertise and problem structure into account, problem-specific deep architectures can be formed by unfolding the model inference into an iterative process, whose parameters are then learned from training data. Such solutions are closely related to bilevel optimization, partial differential equations (PDEs), and meta-learning, and they provide new insights into versatile statistical and optimization models such as sparse representation, structured regression, and conditional random fields. Moreover, while generic deep network architectures are often regarded as “black-box” methods, discriminative process-centric learning offers a new perspective on them. In sum, connecting discriminative learning with model optimization and inference is not only helpful in analyzing the convergence and generalization of deep architectures but also suggests new ways to understand and develop generic deep learning models.
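The idea of unfolding model inference into an iterative, learnable process can be illustrated with a minimal sketch: unrolling ISTA for sparse coding, in the spirit of LISTA-style architectures. This is not the paper's method, only an assumed, illustrative instance; the function names, the fixed step size, and the threshold value are all placeholders. In process-centric learning, the weights `W1`, `W2`, and the threshold below would become trainable parameters of a K-layer network; here they are fixed to their classical ISTA values.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the L1 norm: shrink each entry toward zero by theta.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, D, K=100, theta=0.01):
    """Unfold K iterations of ISTA for the sparse-coding model y ~ D z.

    Each iteration corresponds to one "layer" of a problem-specific
    deep architecture; in discriminative process-centric learning,
    W1, W2, and theta would be learned from training data.
    """
    # Step size: 1 / Lipschitz constant of the gradient of 0.5 * ||y - D z||^2.
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    W1 = step * D.T                               # input-to-layer weight
    W2 = np.eye(D.shape[1]) - step * (D.T @ D)    # layer-to-layer weight
    z = np.zeros(D.shape[1])
    for _ in range(K):                            # one pass = one layer
        z = soft_threshold(W1 @ y + W2 @ z, step * theta)
    return z

# Usage: recover a sparse code from noiseless measurements (synthetic data).
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
z_true = np.zeros(50)
z_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
y = D @ z_true
z_hat = unrolled_ista(y, D, K=200, theta=0.01)
```

Viewing each iteration as a layer is what connects the optimization process to a deep architecture: the number of unrolled steps K fixes the network depth, and learning `W1`, `W2`, `theta` per layer replaces hand-crafted parameter tuning.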
