Abstract

The problem of inferring a finite binary sequence w* ∈ {−1, 1}^n is considered. It is supposed that at epochs t = 1, 2, …, the learner is provided with random half-space data in the form of finite binary sequences u(t) ∈ {−1, 1}^n which have positive inner product with w*. The goal of the learner is to determine the underlying sequence w* in an efficient, on-line fashion from the data {u(t), t ≥ 1}. In this context, it is shown that the randomized, on-line directed drift algorithm produces a sequence of hypotheses {w(t) ∈ {−1, 1}^n, t ≥ 1} which converges to w* in finite time with probability 1. It is shown that while the algorithm has a minimal space complexity of 2n bits of scratch memory, it has exponential time complexity with an expected mistake bound of order Ω(e^{0.139n}). Batch incarnations of the algorithm are introduced which allow for massive improvements in running time at a relatively small cost in space (batch size). In particular, using a batch of 𝒪(n log n) examples at each update epoch reduces the expected mistake bound of the (batch) algorithm to 𝒪(n) (in an asynchronous bit update mode) and 𝒪(1) (in a synchronous bit update mode). The problem considered here is related to binary integer programming and to learning in a mathematical model of a neuron. ©1999 John Wiley & Sons, Inc. Random Struct. Alg., 14, 345–381 (1999)
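The abstract describes the on-line setting only at a high level. The sketch below simulates one plausible reading of it: the learner holds a hypothesis w ∈ {−1, 1}^n and, on each mistake (nonpositive inner product with the presented example u), flips a single randomly chosen bit on which w and u disagree. This update rule is an assumption drawn from the directed-drift literature, not something the abstract itself specifies; the helper names are hypothetical.

```python
import random

def directed_drift(w_star, rng, max_epochs=200000):
    """Hedged sketch of on-line directed drift (update rule assumed,
    not taken verbatim from the paper).

    At each epoch an example u in {-1,1}^n with <u, w_star> > 0 is drawn.
    On a mistake (<w, u> <= 0), one randomly chosen bit on which w and u
    disagree is flipped toward u. Returns the final hypothesis and the
    number of mistakes made.
    """
    n = len(w_star)
    # Initial hypothesis: uniformly random in {-1,1}^n.
    w = [rng.choice((-1, 1)) for _ in range(n)]
    mistakes = 0
    for _ in range(max_epochs):
        if w == w_star:
            break
        # Draw u uniformly from {-1,1}^n, conditioned on <u, w_star> > 0
        # (rejection sampling; accepts about half the draws).
        while True:
            u = [rng.choice((-1, 1)) for _ in range(n)]
            if sum(ui * wi for ui, wi in zip(u, w_star)) > 0:
                break
        if sum(ui * wi for ui, wi in zip(u, w)) <= 0:  # mistake
            mistakes += 1
            # Flip one randomly chosen disagreeing bit toward u.
            i = rng.choice([j for j in range(n) if w[j] != u[j]])
            w[i] = u[i]
    return w, mistakes

rng = random.Random(0)
w_star = [rng.choice((-1, 1)) for _ in range(7)]  # odd n avoids zero inner products
w, m = directed_drift(w_star, rng)
print(w == w_star, m)
```

For small n the expected mistake count is modest, consistent with the Ω(e^{0.139n}) bound being exponential in n; the space used by the hypothesis and the current example is the 2n bits the abstract mentions.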

