The stochastic gradient descent (SGD) algorithm is popular across many fields of artificial intelligence and serves as a prototype of online learning algorithms. This article proposes a novel and general framework for one-sided testing on streaming data based on SGD, which determines whether an unknown parameter exceeds a certain positive constant. We construct the online-updated test statistic sequentially by integrating each selected batch-specific estimator or its opposite, a scheme we refer to as opposite online learning. The batch-specific online estimators are chosen strategically according to the proposed sequential tactics, which are designed via a two-armed bandit process. Theoretical results prove the advantage of the strategy, ensuring that the distribution of the test statistic is optimal under the null hypothesis, and also supply theoretical evidence of power enhancement over the classical test statistic. In application, the proposed method is appealing for one-sided statistical inference because it is scalable to any model. Finally, its superior finite-sample performance is demonstrated through simulation studies.
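To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's exact construction): batches arrive in a stream, each batch yields a batch-specific estimate, and a simple greedy two-armed bandit decides whether to fold the estimate or its reflection about the hypothesized value `c` into the running test statistic. The model (estimating a scalar mean via SGD), the reflection `2*c - est`, and the reward rule are all illustrative assumptions.

```python
def sgd_mean_stream(batches, c, lr=0.1):
    """Illustrative sketch of an SGD-based one-sided test of H0: theta <= c.

    For each batch we (i) update a running SGD estimate of the mean,
    (ii) form a batch-specific estimator, and (iii) let a greedy
    two-armed bandit choose the estimator (arm 0) or its reflection
    about c (arm 1) to accumulate into the test statistic.
    This is a hypothetical construction, not the paper's procedure.
    """
    theta = 0.0          # running SGD estimate
    stat = 0.0           # accumulated test statistic
    rewards = [0.0, 0.0]  # cumulative reward per arm
    counts = [1, 1]       # pulls per arm (start at 1 to avoid 0-division)
    for batch in batches:
        # online SGD update: squared-error gradient step per observation
        for x in batch:
            theta += lr * (x - theta)
        est = sum(batch) / len(batch)  # batch-specific estimator
        # greedy bandit: pick the arm with the higher average reward so far
        arm = 0 if rewards[0] / counts[0] >= rewards[1] / counts[1] else 1
        chosen = est if arm == 0 else 2 * c - est  # estimator or its opposite
        stat += chosen - c                          # evidence against H0
        rewards[arm] += chosen - c
        counts[arm] += 1
    return theta, stat


# Usage: four batches drawn around 1.0, testing against c = 0.
theta, stat = sgd_mean_stream([[1.0] * 5] * 4, c=0.0)
```

With data centered well above `c`, the accumulated statistic grows positive, loosely mirroring how the selected-sign aggregation is meant to concentrate evidence against the null.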