Abstract

Learning problems are inherently subject to risk, which is quantified through risk functionals whose empirical estimates are averages over tuples of data points. Motivated by this, this work presents a stochastic approximation method for solving such risk-minimization problems. In the large-dataset setting, gradient estimates are obtained by sampling tuples of data points with replacement. A mathematical proposition is presented showing that this sampling strategy has a considerable impact on the generalization ability of a prediction model trained by stochastic gradient descent with momentum, and that the method achieves a favorable trade-off between accuracy and computational cost. Experimental results on area-under-the-curve (AUC) maximization and metric learning support the approach.
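The core procedure the abstract describes, stochastic gradient descent with momentum where each gradient estimate comes from a tuple of data points sampled with replacement, can be sketched as follows. The pairwise squared loss, the scalar parameter, and all step-size values here are illustrative assumptions, not the paper's actual objective or settings.

```python
import random

def pairwise_sgd_momentum(data, grad_pair, lr=0.01, beta=0.9,
                          n_steps=2000, w0=5.0, seed=0):
    """Minimize an empirical pairwise risk (1/n^2) * sum_{i,j} loss(w; x_i, x_j)
    by sampling tuples (x_i, x_j) with replacement at each step and applying
    heavy-ball SGD with momentum. Returns the average of the tail iterates."""
    rng = random.Random(seed)
    w, v = w0, 0.0
    tail = []
    for t in range(n_steps):
        xi = rng.choice(data)        # sample a tuple of data points
        xj = rng.choice(data)        # with replacement
        g = grad_pair(w, xi, xj)     # unbiased estimate of the risk gradient
        v = beta * v + g             # momentum buffer
        w -= lr * v
        if t >= n_steps // 2:        # average the second half of the run
            tail.append(w)
    return sum(tail) / len(tail)

# Hypothetical pairwise objective: E[(w - (x_i - x_j))^2]. Because both points
# are drawn from the same distribution, E[x_i - x_j] = 0, so the minimizer is 0.
data = [1.0, 2.0, 3.0, 4.0]
grad = lambda w, xi, xj: 2.0 * (w - (xi - xj))
w_star = pairwise_sgd_momentum(data, grad)
```

In this toy run `w_star` settles near the minimizer 0 despite starting at 5.0; the tail averaging damps the sampling noise introduced by drawing tuples with replacement.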
