Abstract
This article focuses on the problem of optimizing the system utility of Machine Learning (ML)-based systems in the presence of ML mispredictions. This is achieved via the use of self-adaptive systems and through the execution of adaptation tactics, such as model retraining, which operate at the level of individual ML components. To address this problem, we propose a probabilistic modeling framework that reasons about the cost/benefit tradeoffs associated with adapting ML components. The key idea of the proposed approach is to decouple the problems of estimating (1) the expected performance improvement after adaptation and (2) the impact of ML adaptation on overall system utility. We apply the proposed framework to engineer a self-adaptive ML-based fraud detection system, which we evaluate using a publicly available, real fraud detection dataset. We initially consider a scenario in which information on the model's quality is immediately available. Next, we relax this assumption by integrating (and extending) state-of-the-art techniques for estimating the model's quality into the proposed framework. We show that, by predicting the system utility stemming from retraining an ML component, the probabilistic model checker can generate adaptation strategies that are significantly closer to optimal than baselines such as periodic or reactive retraining.
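To make the decoupling concrete, the sketch below illustrates, with entirely hypothetical function names, costs, and numbers, how a cost/benefit check for the retraining tactic might combine (1) a predicted improvement in model quality with (2) a mapping from model quality to system utility. This is a minimal illustration of the idea only; the article itself performs this reasoning via a probabilistic model checker over a formal model, not via a direct computation like this one.

```python
# Illustrative sketch only: a cost/benefit check for the "retrain" tactic,
# assuming hypothetical estimators for the two decoupled quantities described
# in the abstract. All names, costs, and the utility mapping are invented for
# illustration and are not taken from the article.

def expected_utility_gain(perf_now: float,
                          perf_after_retrain: float,
                          utility_per_perf_point: float) -> float:
    """Map a predicted change in model quality (e.g., recall on fraud)
    to a change in system utility (e.g., fraud losses avoided)."""
    return (perf_after_retrain - perf_now) * utility_per_perf_point

def should_retrain(perf_now: float,
                   predicted_perf_after: float,
                   utility_per_perf_point: float,
                   retrain_cost: float) -> bool:
    """Retrain only if the expected utility improvement outweighs
    the cost of executing the adaptation tactic."""
    gain = expected_utility_gain(perf_now, predicted_perf_after,
                                 utility_per_perf_point)
    return gain > retrain_cost

# Made-up example: current recall 0.80, predicted 0.85 after retraining,
# each recall point worth 1000 utility units, retraining costs 30 units:
# (0.85 - 0.80) * 1000 ≈ 50 > 30, so the tactic is worth executing.
if __name__ == "__main__":
    print(should_retrain(0.80, 0.85, 1000.0, 30.0))  # True
```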