Abstract

Reproducing and learning from failures in deployed software is costly and difficult. Those activities can be facilitated, however, if the circumstances leading to a failure can be recognized and properly captured. To anticipate failures, we propose monitoring a system's field behavior for simple trace instances that deviate from a baseline of behavior established in-house. In this work, we empirically investigate the effectiveness of several simple anomaly detection schemes at identifying the conditions that precede failures in deployed software. The results of our experiment provide a preliminary assessment of these schemes and expose the trade-offs between different anomaly detection algorithms applied to several types of observable attributes under varying levels of in-house testing.
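
As a concrete illustration (not drawn from the paper itself), one of the simplest anomaly detection schemes over execution traces works roughly as follows: record which consecutive pairs of events (e.g., method calls) occur during in-house testing, then flag any field trace containing a pair never observed in-house. The Python sketch below is a minimal hypothetical example of that idea; the trace format, event names, and the pair-based detector are assumptions for illustration, not the specific schemes evaluated in the study.

    def learn_baseline(in_house_traces):
        """Collect every consecutive event pair observed during in-house testing."""
        baseline = set()
        for trace in in_house_traces:
            # zip(trace, trace[1:]) yields the consecutive event pairs of the trace.
            baseline.update(zip(trace, trace[1:]))
        return baseline

    def is_anomalous(field_trace, baseline):
        """Flag a field trace if any of its event pairs falls outside the baseline."""
        return any(pair not in baseline
                   for pair in zip(field_trace, field_trace[1:]))

    # Hypothetical traces: sequences of observed events (e.g., method calls).
    in_house = [["open", "read", "close"], ["open", "write", "close"]]
    baseline = learn_baseline(in_house)

    print(is_anomalous(["open", "read", "close"], baseline))   # False: matches baseline
    print(is_anomalous(["open", "close", "open"], baseline))   # True: contains unseen pairs

Richer variants of this idea would track other observable attributes (e.g., branch profiles or value ranges) and tolerate some unseen behavior via thresholds; the point of the sketch is only the deviation-from-baseline structure that the abstract describes.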
