Abstract

We consider packing linear programs with m rows, where all constraint coefficients are normalized to lie in the unit interval. The n columns arrive in random order, and the goal is to set the corresponding decision variables irrevocably as they arrive so as to obtain a feasible solution maximizing the expected reward. Previous (1 − ϵ)-competitive algorithms require the right-hand side of the linear program to be Ω((m/ϵ²) log(n/ϵ)), a bound that worsens with the number of columns and rows. However, the dependence on the number of columns is not required in the single-row case, and known lower bounds for the general case are also independent of n. Our goal is to understand whether the dependence on n is required in the multi-row case, which would make it fundamentally harder than the single-row version. We refute this by exhibiting an algorithm that is (1 − ϵ)-competitive as long as the right-hand sides are Ω((m²/ϵ²) log(m/ϵ)). Our techniques refine previous probably approximately correct (PAC) learning-based approaches, which interpret the online decisions as linear classifications of the columns based on sampled dual prices. The key ingredient of our improvement is a nonstandard covering argument, together with the realization that such small covers can be obtained only when the columns of the linear program belong to few one-dimensional subspaces; bounding the size of the constructed cover also relies on the geometry of linear classifiers. General packing linear programs are handled by perturbing the input columns, which can be seen as making the learning problem more robust.
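The dual-price scheme that the abstract refers to can be illustrated, in its simplest one-time-learning form, by the sketch below: observe an ϵ-fraction of the columns without accepting any, compute dual prices from the sample LP, and then accept each later column exactly when its reward exceeds the dual price of the capacity it consumes. The synthetic instance, the scipy solver, and all names here are illustrative assumptions, not the paper's actual (refined) algorithm or its guarantees.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical random instance: m rows, n columns, coefficients in [0, 1].
m, n, eps = 3, 500, 0.2
A = rng.uniform(size=(m, n))   # constraint matrix; columns arrive online
c = rng.uniform(size=n)        # per-column rewards
b = np.full(m, 0.3 * n)        # large right-hand sides

def dual_prices(A_s, c_s, b_s):
    """Row prices y >= 0 from the dual of
    max c_s.x  s.t.  A_s x <= b_s, 0 <= x <= 1."""
    m_, k = A_s.shape
    # Dual variables: y (row prices) and z (multipliers for x_j <= 1).
    # Minimize b_s.y + 1.z  s.t.  A_s^T y + z >= c_s, y, z >= 0.
    obj = np.concatenate([b_s, np.ones(k)])
    G = np.hstack([-A_s.T, -np.eye(k)])   # flip >= into <= for linprog
    res = linprog(obj, A_ub=G, b_ub=-c_s, bounds=(0, None), method="highs")
    return res.x[:m_]

# Learning phase: watch the first eps*n columns, accept none, and fit
# dual prices on the sample with proportionally scaled budgets.
k = int(eps * n)
y = dual_prices(A[:, :k], c[:k], eps * b)

# Online phase: accept column j iff its reward beats the dual price of
# the capacity it consumes (a linear classification of the column) and
# the remaining budget allows it.
used, reward = np.zeros(m), 0.0
for j in range(k, n):
    if c[j] > y @ A[:, j] and np.all(used + A[:, j] <= b):
        used += A[:, j]
        reward += c[j]
```

The paper's contribution concerns how large the right-hand sides must be for such a dual-price classifier to be (1 − ϵ)-competitive; this sketch omits those quantitative considerations entirely.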
