Abstract
Outlier detection is critical for ensuring data integrity across domains, from fraud detection in finance to anomaly identification in healthcare. Despite its importance, most anomaly detection methods focus on performance, and interpretability remains underexplored in unsupervised settings. Interpretability is essential in contexts where understanding why certain data points are classified as outliers is as important as the detection itself.

This study introduces an interpretable approach to unsupervised outlier detection that combines normalizing flows and decision trees. Normalizing flows transform complex data distributions into simpler, tractable forms, enabling precise density estimation and the generation of pseudo-labels that separate inliers from outliers. These pseudo-labels are then used to train a decision tree, providing a structured decision-making process and interpretability in an unsupervised setting, thereby addressing a key gap in the field.

Our method was evaluated against 23 established outlier detection algorithms across 17 datasets using Precision, Recall, F1 Score, and Matthews Correlation Coefficient (MCC). It ranked 4th in F1 Score, 6th in MCC, 3rd in Precision, and 19th in Recall. While it performed strongly on some datasets and less so on others, this variability is likely due to dataset-specific characteristics. Post-hoc statistical significance testing showed that interpretability in unsupervised outlier detection can be achieved without significantly compromising performance, making the method a valuable option for applications that require transparent and understandable anomaly detection.
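To make the pipeline concrete, the sketch below illustrates the density-based pseudo-labelling idea on synthetic data. It is not the paper's implementation: a scikit-learn KernelDensity estimator stands in for the normalizing flow so the example stays self-contained, and the contamination rate, bandwidth, and feature names are illustrative assumptions.

```python
# Minimal sketch of the pseudo-labelling pipeline described in the abstract.
# A KernelDensity estimator stands in for the normalizing flow used in the paper.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 1.0, size=(500, 2)),   # dense cluster of inliers
    rng.uniform(-6.0, 6.0, size=(25, 2)),  # scattered outliers
])

# 1. Estimate the log-density of each point. The paper fits a normalizing flow
#    to obtain exact log-likelihoods; any density estimator slots in here.
density = KernelDensity(bandwidth=0.5).fit(X)
log_density = density.score_samples(X)

# 2. Pseudo-label the lowest-density points as outliers (assumed contamination rate).
contamination = 0.05
threshold = np.quantile(log_density, contamination)
pseudo_labels = (log_density < threshold).astype(int)  # 1 = outlier, 0 = inlier

# 3. Train an interpretable decision tree on the pseudo-labels.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, pseudo_labels)

# The tree's rules explain why a point is flagged as an outlier.
print(export_text(tree, feature_names=["x1", "x2"]))
```

The printed rules (e.g. threshold splits on x1 and x2) are what provides the interpretability: a point is labelled an outlier because it falls in a low-density region that the tree describes with explicit feature thresholds.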