Abstract

In the real-world operation of machine learning models, differences between the training and operational data distributions, a problem referred to as concept drift, frequently arise. Most existing detection approaches assume that labels are readily available, but it is unrealistic to acquire all labels instantaneously. Because concept drift is often unpredictable, the deployed machine learning model is not always amenable to concept drift detection. In this paper, we propose the Born-Again Decision Boundary algorithm, a novel unsupervised concept drift detection method for inspecting a deployed black-box model without labels. The proposed method builds an inspector model that recreates the decision boundaries of the deployed model using black-box rule extraction techniques. The inspector calculates the distance from each data point to the decision boundary and monitors a region of uncertainty in the input space. Our method removes the dependency on a specific machine learning algorithm, which has hindered existing methods in practical applications, and can therefore be applied regardless of the classification algorithm. Experimental results on synthetic and real-world data show that the proposed method can detect concept drift on unlabeled data.
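The core idea described above, querying a deployed black-box model to rebuild its decision boundary, then monitoring how much incoming data falls into an uncertainty region around that boundary, can be sketched in a toy one-dimensional form. This is a minimal illustration, not the paper's implementation: the black-box model, the surrogate-fitting rule, the margin width, and the drift threshold are all assumptions chosen for clarity.

```python
import random

# Hypothetical deployed black-box classifier (illustrative only): we can
# query its predictions but cannot see its internals or true labels.
def black_box(x):
    return 1 if x > 0.5 else 0

def fit_surrogate_boundary(xs):
    """Stand-in for black-box rule extraction: query the deployed model on
    unlabeled data and place a 1-D boundary midway between the class means."""
    labels = [black_box(x) for x in xs]
    pos = [x for x, y in zip(xs, labels) if y == 1]
    neg = [x for x, y in zip(xs, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def uncertainty_rate(xs, boundary, margin=0.15):
    """Fraction of points whose distance to the boundary falls inside the
    uncertainty region (distance below the margin)."""
    return sum(abs(x - boundary) < margin for x in xs) / len(xs)

random.seed(0)
reference = [random.uniform(0, 1) for _ in range(1000)]
boundary = fit_surrogate_boundary(reference)
baseline = uncertainty_rate(reference, boundary)

# Stationary batch: same distribution as the reference window.
stationary = [random.uniform(0, 1) for _ in range(1000)]
# Drifted batch: input mass piles up near the boundary.
drifted = [random.gauss(boundary, 0.05) for _ in range(1000)]

# Flag drift when the uncertainty-region occupancy clearly exceeds baseline
# (the factor 2 is an arbitrary illustrative threshold).
drift_on_stationary = uncertainty_rate(stationary, boundary) > 2 * baseline
drift_on_drifted = uncertainty_rate(drifted, boundary) > 2 * baseline
```

No labels are used after the surrogate is fitted: drift is inferred purely from how the unlabeled input distribution interacts with the recreated boundary, which is what makes the approach model-agnostic.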
