Abstract

Many AI and machine learning problems require adaptively selecting a sequence of items, where each selected item may provide feedback that is valuable for making better selections in the future, with the goal of maximizing an adaptive submodular function. Most existing studies in this field focus on either the monotone case or the non-monotone case. Specifically, if the utility function is monotone and adaptive submodular, Golovin and Krause (J Artif Intell Res 42:427–486, 2011) developed a \((1-1/e)\) approximation solution subject to a cardinality constraint. For the cardinality-constrained non-monotone case, Tang (Theor Comput Sci 850:249–261, 2021) showed that a random greedy policy attains an approximation ratio of \(1/e\). In this work, we generalize the above results by studying the partial-monotone adaptive submodular maximization problem. To this end, we introduce the notion of the adaptive monotonicity ratio \(m\in [0,1]\) to measure the degree of monotonicity of a function. Our main result shows that, under a cardinality constraint, if the utility function has an adaptive monotonicity ratio of \(m\) and is adaptive submodular, then a random greedy policy attains an approximation ratio of \(m(1-1/e)+(1-m)(1/e)\). Notably, this result recovers the aforementioned \((1-1/e)\) and \(1/e\) approximation ratios when \(m = 1\) and \(m = 0\), respectively. We further extend our results to a knapsack constraint and develop an \((m+1)/10\) approximation solution for this general case. One important implication of our results is that even for a non-monotone utility function, we can still attain an approximation ratio close to \((1-1/e)\) if the function is "close" to a monotone function. This leads to improved performance bounds for many machine learning applications whose utility functions are nearly adaptive monotone.
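For readers who want a concrete picture of the policy analyzed above, the following is a minimal Python sketch of a random greedy policy for the cardinality-constrained setting. The `marginal_gain` and `observe` interfaces are hypothetical placeholders for the conditional expected marginal benefit and the feedback revealed after a selection; this is an illustrative sketch under those assumptions, not the exact policy or analysis from the paper.

```python
import random

def random_greedy(items, k, marginal_gain, observe):
    """Sketch of a random greedy policy for cardinality-constrained
    adaptive submodular maximization (hypothetical interfaces).

    marginal_gain(item, history) -> expected marginal benefit of `item`
        conditioned on the partial realization `history` (assumed given).
    observe(item) -> feedback (state) revealed after selecting `item`
        (assumed given).
    """
    history = []            # partial realization: (item, state) pairs
    remaining = set(items)
    for _ in range(k):
        # Rank remaining items by conditional expected marginal benefit.
        ranked = sorted(remaining,
                        key=lambda it: marginal_gain(it, history),
                        reverse=True)
        # Candidate pool: the k best remaining items. (The analyzed policy
        # pads the pool with zero-gain dummy items when fewer than k
        # remain; omitted here for brevity.)
        pool = ranked[:k]
        chosen = random.choice(pool)   # uniform choice over the pool
        history.append((chosen, observe(chosen)))
        remaining.remove(chosen)
    return history
```

Intuitively, choosing uniformly among the best \(k\) candidates, rather than always the single best, is what lets a random greedy policy cope with non-monotonicity; when \(m = 1\) the utility is fully monotone and the classic greedy guarantee is recovered.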
