Abstract

The sample complexity of learning Myerson's optimal auction from i.i.d. samples of bidders' values has received much attention since its introduction by Cole and Roughgarden (STOC 2014). This letter gives a brief introduction to a recent work that settles the sample complexity by showing matching upper and lower bounds, up to a poly-logarithmic factor, for all families of value distributions that have been considered in the literature. The upper bounds are unified under a novel framework, which builds on the strong revenue monotonicity of Devanur, Huang, and Psomas (STOC 2016) and an information-theoretic argument. This is fundamentally different from previous approaches, which rely on either constructing an ε-net of the mechanism space, explicitly or implicitly via statistical learning theory, or learning an approximately accurate version of the virtual values. To our knowledge, this is the first time information-theoretic arguments are used to show sample complexity upper bounds, instead of lower bounds. The lower bounds are also unified under a meta construction of hard instances.
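To make the learning problem concrete, the following is a minimal single-bidder sketch of empirical revenue maximization: pick the reserve price that maximizes revenue on the observed samples. This is an illustration of the problem setting only, not the framework described above; the function name `empirical_reserve` and the uniform value distribution are assumptions for the example.

```python
import random

def empirical_reserve(samples):
    """Return the reserve price that maximizes empirical revenue,
    price * (fraction of sampled values that are >= price).
    For a single bidder, this is Myerson's monopoly reserve
    estimated from samples (illustrative sketch, not the paper's method)."""
    n = len(samples)
    best_price, best_rev = 0.0, 0.0
    # The empirical-revenue maximizer can be taken among the sample points.
    for p in samples:
        rev = p * sum(v >= p for v in samples) / n
        if rev > best_rev:
            best_price, best_rev = p, rev
    return best_price

random.seed(0)
# 1000 i.i.d. values from U[0, 1]; the true optimal reserve is 0.5.
values = [random.uniform(0.0, 1.0) for _ in range(1000)]
print(empirical_reserve(values))
```

With enough samples the estimated reserve concentrates around the true optimum (0.5 for U[0, 1]); how fast this convergence is, across distribution families, is precisely the sample complexity question the letter discusses.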
