Abstract

Time-series anomaly detection receives increasing research interest given the growing number of data-rich application domains. Recent additions to anomaly detection methods in the research literature include deep neural networks (DNNs, e.g., RNNs, CNNs, and autoencoders). The ability of these algorithms to learn hierarchical discriminative features and to model the temporal nature of time-series makes them well suited to sequence analysis. However, their performance suffers from the common assumption of a Gaussian distribution on the prediction error, which is then ranked or thresholded to label data instances as anomalous or not. In many applications, no exact parametric distribution fits the data, so this assumption can produce faulty decisions driven by false anomaly predictions. Outputs are instead expected to carry a level of confidence; implementations therefore need a Prediction Interval (PI) that quantifies the uncertainty associated with the DNN point forecasts, which supports better-informed decisions and mitigates false anomaly alerts. Prior work has reduced false anomaly alerts by using quantile regression to identify anomalies, but it is limited to using the quantile interval to identify uncertainty in the data. In this paper, an improved time-series anomaly detection method called deep quantile regression anomaly detection (DQR-AD) is proposed. The proposed method goes further, using the quantile interval (QI) as an anomaly score and comparing it with a threshold to identify anomalous points in time-series data. Tests of the proposed method on publicly available anomaly benchmark datasets demonstrate its effective performance over methods that assume a Gaussian distribution on the prediction or reconstruction error, showing that our method is potentially less sensitive to data distribution than existing approaches.
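
The scoring rule described above can be summarized in a minimal sketch: train a pair of quantile forecasters with the pinball loss, then use the resulting quantile-interval width as the anomaly score. The function names, the 5%/95% quantile pair, and the percentile-based threshold below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for a single quantile level q in (0, 1),
    used to train one quantile forecaster."""
    e = y_true - y_pred
    return np.mean(np.maximum(q * e, (q - 1) * e))

def qi_anomaly_flags(y_low, y_high, threshold):
    """Flag points whose quantile-interval (QI) width exceeds a threshold.

    y_low, y_high: lower/upper quantile forecasts (e.g., q=0.05 and q=0.95).
    """
    qi = y_high - y_low          # QI width serves as the anomaly score
    return qi > threshold

# Toy usage with made-up forecasts from 5% and 95% quantile models
y_low  = np.array([0.9, 1.0, 1.1, 0.2])
y_high = np.array([1.1, 1.2, 1.3, 2.5])
thr = np.percentile(y_high - y_low, 95)      # illustrative threshold choice
print(qi_anomaly_flags(y_low, y_high, thr))  # -> [False False False  True]
```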

Highlights

  • Fast advances in Industry 4.0 technologies generate enormous amounts of data from a large number of sensors [1] and other devices within an increasing number of industrial applications

  • To label the time-series and ease the computation of evaluation metrics for our algorithm, we used the same procedure as in [38], where the values of randomly selected features of a data point are swapped (see the sketch after this list)

  • This paper presents a deep learning-based anomaly detection method for the detection and classification of anomalies in time-series data
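
A minimal, hypothetical reading of the feature-swapping procedure referenced in [38] is sketched below. The choice to swap between two randomly chosen time steps, and all function and parameter names, are assumptions made for illustration; the exact rule is defined in the cited work.

```python
import numpy as np

def inject_swap_anomalies(X, n_points, n_features, seed=None):
    """Sketch of label generation by feature swapping (assumed reading of [38]):
    swap the values of randomly chosen features between two randomly chosen
    time steps, marking both steps as anomalous.

    X: (T, D) time-series array. Returns a modified copy and 0/1 labels.
    """
    rng = np.random.default_rng(seed)
    X = X.copy()
    y = np.zeros(len(X), dtype=int)
    for _ in range(n_points):
        i, j = rng.choice(len(X), size=2, replace=False)       # two time steps
        feats = rng.choice(X.shape[1], size=n_features, replace=False)
        X[i, feats], X[j, feats] = X[j, feats].copy(), X[i, feats].copy()
        y[i] = y[j] = 1                                        # both labeled anomalous
    return X, y
```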


Introduction

Fast advances in Industry 4.0 technologies generate enormous amounts of data from a large number of sensors [1] and other devices within an increasing number of industrial applications. Industries often collect and exploit such data for a number of critical applications, including anomaly detection. An anomaly is defined as an unexpected point in a time-series (e.g., a sudden sensor drift), an anomalous subsequence within the time-series (e.g., a continuous change in the sensor readings), a point that is anomalous in a defined context, or an anomalous time-series within an entire time-series database [2]. Detection methods are needed for early fault detection, with the potential to prevent total system failure; this includes providing early evidence for the detection of mechanical faults [3] and sensor faults [4] in automotive vehicles during usage.

