Abstract

We investigate the problem of sequential linear data prediction for real-life big data applications. Second-order algorithms, i.e., Newton-Raphson methods, asymptotically achieve the performance of the "best" possible linear data predictor much faster than first-order algorithms, e.g., Online Gradient Descent. However, implementing these second-order methods incurs a computational complexity on the order of $O(M^2)$ for an $M$-dimensional feature vector, whereas first-order methods require only $O(M)$. Because of this extremely high computational cost, their use in real-life big data applications is prohibitive. To this end, in order to enjoy the outstanding performance of the second-order methods, we introduce a highly efficient implementation that reduces their computational complexity from $O(M^2)$ to $O(M)$. The presented algorithm provides the well-known merits of the second-order methods while offering a computational complexity comparable to that of the first-order methods. We do not rely on any statistical assumptions; hence, both the regular and the fast implementations achieve the same performance in terms of mean square error. We demonstrate the efficiency of our algorithm on several sequential big datasets, and we also illustrate the numerical stability of the presented algorithm.
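For context, below is a minimal sketch of the two standard baselines the abstract contrasts: a first-order Online Gradient Descent update, which costs $O(M)$ per step, and a second-order Newton-Raphson-type update whose inverse-Hessian estimate is maintained via a rank-one (Sherman-Morrison) update, which costs $O(M^2)$ per step. This sketch illustrates only the conventional formulations and their per-step costs; it is not the paper's $O(M)$ implementation of the second-order method, whose construction is given in the full text. All function names and the toy data are hypothetical.

```python
# Sketch of the standard baselines only (not the paper's O(M) algorithm).
import numpy as np

def ogd_predict(X, y, lr=0.01):
    """First-order method (Online Gradient Descent): O(M) work per step."""
    w = np.zeros(X.shape[1])
    sq_errors = []
    for x_t, y_t in zip(X, y):
        e_t = y_t - w @ x_t          # instantaneous prediction error
        w += lr * e_t * x_t          # O(M) gradient step
        sq_errors.append(e_t ** 2)
    return w, np.mean(sq_errors)

def newton_predict(X, y, delta=1.0):
    """Second-order (Newton-Raphson-type) method: O(M^2) work per step,
    dominated by updating the M x M inverse-Hessian estimate P."""
    M = X.shape[1]
    w = np.zeros(M)
    P = np.eye(M) / delta            # initial inverse-Hessian estimate
    sq_errors = []
    for x_t, y_t in zip(X, y):
        e_t = y_t - w @ x_t
        Px = P @ x_t                 # O(M^2) matrix-vector product
        k = Px / (1.0 + x_t @ Px)    # gain vector via Sherman-Morrison
        w += k * e_t                 # Newton-type correction of the weights
        P -= np.outer(k, Px)         # O(M^2) rank-one downdate of P
        sq_errors.append(e_t ** 2)
    return w, np.mean(sq_errors)

# Toy usage: noisy linear data with an M = 10 dimensional feature vector.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(5000)
print(ogd_predict(X, y)[1], newton_predict(X, y)[1])
```

On typical runs the second-order update converges in far fewer steps, which is the merit the paper seeks to retain while removing the $O(M^2)$ per-step cost.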
