Abstract

Deep unsupervised time-series anomaly detection typically relies on a one-class representation, which is most effective when it is learned from normal samples alone. In practice, however, the unlabeled training set inevitably mixes anomalies with normal samples, so the learned representation may be biased and violate the one-class assumption. To address this problem, we refine the one-class representation and propose a unified AMFormer (Active Masked transFormer) framework, which integrates a Transformer with a masking mechanism and cost-sensitive learning. Specifically, we first develop a network-driven masking operation that corrupts the input samples via a Hadamard-product transformation. The encoder-decoder then reconstructs the corrupted, incomplete samples, which prevents identity-mapping shortcuts and improves robustness. Second, we employ an active MSE (Mean Squared Error) loss to purify the training samples: weights are dynamically assigned to samples according to pseudo labels derived from their reconstruction errors, so pseudo anomalies with larger reconstruction errors are effectively suppressed by receiving lower weights. Finally, extensive experiments on four benchmark datasets show that AMFormer outperforms nine relevant baseline algorithms, raising the mean F1-score from 0.851 to 0.937.
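The abstract does not give implementation details, so the following PyTorch-style sketch only illustrates the two mechanisms it names: corrupting inputs with a Hadamard-product mask before encoder-decoder reconstruction, and an error-weighted ("active") MSE that downweights pseudo anomalies. The random Bernoulli mask stands in for the paper's network-driven mask, and all names and parameters here (`masked_reconstruction_loss`, `mask_ratio`, `anomaly_quantile`) are illustrative assumptions, not the authors' method.

```python
import torch

def masked_reconstruction_loss(x, model, mask_ratio=0.3, anomaly_quantile=0.9):
    """Sketch of a masked-reconstruction objective with active MSE weighting.

    x     : (batch, time, features) tensor of input windows.
    model : any encoder-decoder returning a tensor shaped like x.
    NOTE: a random mask is used here; the paper's mask is network-driven.
    """
    # Corrupt the input via Hadamard (element-wise) product with a binary mask.
    mask = (torch.rand_like(x) > mask_ratio).float()
    x_masked = x * mask
    x_hat = model(x_masked)  # reconstruct the incomplete samples

    # Per-sample reconstruction error (mean over time and feature dims).
    err = ((x_hat - x) ** 2).mean(dim=(1, 2))

    # Active MSE: samples whose error exceeds a quantile threshold are treated
    # as pseudo anomalies and given zero weight; the rest keep weight 1.
    # (The actual weighting rule in the paper may differ.)
    threshold = torch.quantile(err.detach(), anomaly_quantile)
    weights = (err.detach() <= threshold).float()
    return (weights * err).sum() / weights.sum().clamp(min=1.0)
```

A hard 0/1 weighting is used above for brevity; a cost-sensitive scheme could instead assign soft weights that decrease smoothly with the reconstruction error.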
