Abstract
We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an unknown input distribution $\mathcal{D}$. We assume here that $\mathcal{D}$ is of product type. More precisely, suppose that we need to process a sequence $I_1,I_2,\ldots$ of inputs $I=(x_1,x_2,\ldots,x_n)$ of some fixed length $n$, where each $x_i$ is drawn independently from some arbitrary, unknown distribution $\mathcal{D}_i$. The goal is to design an algorithm for these inputs so that eventually the expected running time becomes optimal for the input distribution $\mathcal{D}=\prod_i\mathcal{D}_i$. We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which they settle into their optimized incarnations.
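To make the two-phase structure concrete, here is a minimal, schematic sketch in Python of a self-improving sorter. It is not the paper's algorithm: the names `train_buckets`, `self_improved_sort`, and `num_buckets` are hypothetical, and the sketch uses a single shared set of bucket boundaries with a plain binary search, whereas the actual construction builds per-position search structures tuned to each $\mathcal{D}_i$ in order to reach the entropy-optimal expected running time. The sketch only illustrates the idea of a training phase that gathers distributional information, followed by a stationary regime that exploits it.

```python
import bisect

def train_buckets(training_inputs, num_buckets):
    """Training phase (simplified, assumed interface):
    pool the values seen during training and take evenly spaced
    order statistics as bucket boundaries, so that a typical input
    from the same distribution spreads roughly evenly over buckets."""
    pooled = sorted(x for inp in training_inputs for x in inp)
    step = max(1, len(pooled) // num_buckets)
    return pooled[step::step][: num_buckets - 1]

def self_improved_sort(x, boundaries):
    """Stationary regime (simplified): place each element into its
    bucket by binary search, sort the (expectedly small) buckets
    independently, and concatenate the results."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for v in x:
        buckets[bisect.bisect_right(boundaries, v)].append(v)
    out = []
    for b in buckets:
        b.sort()
        out.extend(b)
    return out

if __name__ == "__main__":
    import random
    # Hypothetical product distribution: coordinate i is uniform on [0, i + 1].
    def draw_input(n):
        return [random.uniform(0, i + 1) for i in range(n)]

    n = 100
    training = [draw_input(n) for _ in range(50)]   # training phase
    boundaries = train_buckets(training, num_buckets=20)
    fresh = draw_input(n)                            # stationary regime
    assert self_improved_sort(fresh, boundaries) == sorted(fresh)
```

In the paper's construction, the shared binary search above is replaced by a separate, distribution-tuned search structure for each position $i$, which is what lets the expected comparison count match the entropy of the output distribution rather than a worst-case $O(n\log n)$ bound.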