Abstract

While the convergence rate of any optimization algorithm can be increased by selecting a larger stepsize parameter, doing so also magnifies the parameter estimation variance. This research introduces an analytical methodology for comparing the estimation variance when optimizing Soft-Max models under two different loss functions, cross entropy and mean square error. The method can be used to adjust the stepsize parameters of algorithms with different loss functions so that they attain equal estimation variances after converging, which provides a fair starting point for comparing the algorithms' convergence rates. The polyphonic pitch detection problem is used as a case study, and the performances of models trained with the cross entropy and mean square error loss functions are compared with each other and with a $k$-nearest neighbor classifier.
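
To make the setting concrete, here is a minimal sketch (not the paper's method; the data, stepsizes, and function names are all illustrative) of stochastic gradient descent on a softmax classifier under the two losses. The per-loss stepsizes stand in for the variance-matching adjustment the abstract describes:

```python
# Illustrative only: SGD on a softmax model under cross-entropy (CE) and
# mean-square-error (MSE) losses, with a separate stepsize per loss.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy 3-class problem: inputs x in R^4, one-hot targets y (synthetic data).
n, d, k = 2000, 4, 3
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, k))
Y = np.eye(k)[softmax(X @ W_true).argmax(axis=1)]

def sgd(loss, stepsize, epochs=5):
    """Run plain SGD on the linear softmax model; return the final weights."""
    W = np.zeros((d, k))
    for _ in range(epochs):
        for i in rng.permutation(n):
            x, y = X[i:i+1], Y[i:i+1]
            p = softmax(x @ W)
            if loss == "ce":
                # CE gradient through softmax simplifies to x^T (p - y)
                g = x.T @ (p - y)
            else:
                # MSE gradient passes through the softmax Jacobian
                # J = diag(p) - p^T p (a k x k matrix)
                err = p - y                     # dL/dp, up to a constant factor
                jac = np.diagflat(p) - p.T @ p
                g = x.T @ (err @ jac)
            W -= stepsize * g
    return W

# Hypothetical stepsizes: in the paper's procedure these would be chosen so
# that both runs reach the same steady-state estimation variance.
W_ce = sgd("ce", stepsize=0.1)
W_mse = sgd("mse", stepsize=0.5)
```

The structural difference the sketch highlights is that the MSE gradient is filtered through the softmax Jacobian, so the same stepsize produces a different steady-state variance under each loss; the stepsize values above are placeholders for values chosen by a variance-matching procedure such as the one the paper proposes.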
