Stochastic gradient descent (SGD) has proven effective in solving many inventory control problems with demand learning. However, it often faces the pitfall of a target inventory level that is infeasible because it lies below the current inventory level. Several recent works have successfully resolved this issue in various inventory systems, but their techniques are rather sophisticated and difficult to extend to more complicated scenarios, such as multiproduct and multiconstraint inventory systems. In this paper, we address the infeasible target inventory-level issue from a new technical perspective: we propose a novel minibatch SGD-based metapolicy. Our metapolicy is flexible enough to be applied to a general inventory systems framework covering a wide range of inventory management problems with a myopic clairvoyant optimal policy. By devising the optimal minibatch scheme, our metapolicy achieves a regret bound of [Formula: see text] for the general convex case and [Formula: see text] for the strongly convex case. To demonstrate the power and flexibility of our metapolicy, we apply it, with carefully designed application-specific subroutines, to three important inventory control problems: multiproduct and multiconstraint systems, multiechelon serial systems, and one-warehouse and multistore systems. We also conduct extensive numerical experiments demonstrating that our metapolicy enjoys competitive regret performance, high computational efficiency, and low variance across a wide range of applications.

This paper was accepted by J. George Shanthikumar, data science.

Funding: J. Xie and S. Yuan are supported by the National Natural Science Foundation of China (NSFC) [Grant 72331011].

Supplemental Material: The online appendix is available at https://doi.org/10.1287/mnsc.2023.00920.
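The infeasibility pitfall and the minibatch remedy sketched above can be illustrated with a minimal, purely hypothetical example; this is not the paper's metapolicy. The sketch assumes a single-product lost-sales newsvendor with overage cost `h` and underage cost `b`, learns a base-stock level `S` by projected minibatch SGD, and freezes `S` within each batch so on-hand inventory can drain toward a lowered target instead of demanding an infeasible negative order. All names (`minibatch_sgd_newsvendor`), the demand model, and the dynamics are assumptions for illustration only.

```python
import random

def minibatch_sgd_newsvendor(T, batch_size, h=1.0, b=2.0, S0=0.0,
                             S_max=100.0, lr=1.0,
                             demand=lambda: random.uniform(0.0, 10.0)):
    """Hypothetical sketch: learn a base-stock level S over T periods.

    Within each batch, S is held fixed and stochastic subgradients are
    averaged; a single projected update is applied at the batch's end.
    """
    S, x = S0, 0.0  # current target base-stock level, on-hand inventory
    t = 0
    while t < T:
        grads = []
        for _ in range(min(batch_size, T - t)):
            q = max(S - x, 0.0)   # feasible order quantity: never negative
            x += q
            d = demand()
            # stochastic subgradient of h*E[(S-D)^+] + b*E[(D-S)^+] at S
            grads.append(h if d < S else -b)
            x = max(x - d, 0.0)   # lost-sales inventory dynamics (assumption)
            t += 1
        step = lr / (t ** 0.5)    # diminishing step size
        S = min(max(S - step * sum(grads) / len(grads), 0.0), S_max)
    return S
```

With uniform(0, 10) demand and these costs, the clairvoyant base-stock level is the b/(h+b) = 2/3 demand quantile, about 6.67, so the learned `S` should drift toward that value as the batches accumulate.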