Abstract
We study the problem of dynamic batch learning in high-dimensional sparse linear contextual bandits, where a decision maker, under a given maximum-number-of-batch constraint and able to observe rewards only at the end of each batch, can dynamically decide how many individuals to include in the next batch (at the end of the current batch) and what personalized action-selection scheme to adopt within each batch. Such batch constraints are ubiquitous in a variety of practical contexts, including personalized product offerings in marketing and medical treatment selection in clinical trials. We characterize the fundamental learning limit in this problem via a regret lower bound and provide a matching upper bound (up to log factors), thus prescribing an optimal scheme for this problem. To the best of our knowledge, our work provides the first inroad into a theoretical understanding of dynamic batch learning in high-dimensional sparse linear contextual bandits. Notably, even a special case of our result (when no batch constraint is present) yields that the simple exploration-free algorithm using the LASSO estimator already achieves the minimax optimal $\tilde{O}(\sqrt{s_0 T})$ regret bound (where $s_0$ is the sparsity parameter, or an upper bound thereof, and $T$ is the learning horizon) for standard online learning in high-dimensional linear contextual bandits (for the no-margin case), a result that appears unknown in the emerging literature on high-dimensional contextual bandits.

This paper was accepted by Baris Ata, stochastic models and simulation.

Funding: This work is supported by the National Science Foundation [Grant CCF-2106508]. Z. Zhou gratefully acknowledges the Digital Twin research grant from Bain & Company and New York University's 2022-2023 Center for Global Economy and Business faculty research grant for support on this work. Z. Ren was supported by the National Science Foundation [Grant OAC 1934578] and by the Discovery Innovation Fund for Biomedical Data Sciences.
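To make the "exploration-free algorithm using the LASSO estimator" concrete, the following is a minimal, illustrative sketch of a greedy LASSO policy for a sparse linear contextual bandit. It is not the authors' exact procedure: the environment, dimensions, noise level, and regularization schedule are all assumed for illustration, and the estimator is simply refit on the accumulated data at every round.

```python
# Hypothetical sketch of an exploration-free (greedy) LASSO contextual bandit.
# All problem parameters and the regularization schedule below are illustrative
# assumptions, not values taken from the paper.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, s0, K, T = 100, 5, 10, 500          # ambient dimension, sparsity, arms, horizon

theta = np.zeros(d)                     # s0-sparse true parameter (unknown to learner)
theta[rng.choice(d, s0, replace=False)] = rng.normal(size=s0)

X_hist, y_hist = [], []                 # accumulated (chosen context, reward) pairs
theta_hat = np.zeros(d)                 # current LASSO estimate
regret = 0.0

for t in range(1, T + 1):
    contexts = rng.normal(size=(K, d))              # one feature vector per arm
    a = int(np.argmax(contexts @ theta_hat))        # greedy choice: no forced exploration
    reward = contexts[a] @ theta + rng.normal(scale=0.1)
    regret += np.max(contexts @ theta) - contexts[a] @ theta

    X_hist.append(contexts[a])
    y_hist.append(reward)
    lam = 0.1 * np.sqrt(np.log(d) / t)              # assumed theory-style decay of the penalty
    theta_hat = Lasso(alpha=lam, fit_intercept=False, max_iter=5000).fit(
        np.vstack(X_hist), np.array(y_hist)
    ).coef_

print(f"cumulative regret after {T} rounds: {regret:.1f}")
```

Under enough diversity in the observed contexts, the greedy choices themselves generate informative data, which is the intuition behind the exploration-free result summarized in the abstract.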