Abstract

Although decision trees have been widely applied to security-related applications, their behavior in adversarial environments has not been investigated extensively. This work studies the robustness of classical decision trees (DT) and fuzzy decision trees (FDT) under evasion attacks, which manipulate input features to mislead a classifier's decision. To the best of our knowledge, existing gradient-based attack methods cannot be applied to DT because its decision function is non-differentiable, and this is the first attack model designed for both DT and FDT. Our model quantifies the influence of changing a feature on the classifier's decision. Its effectiveness is compared with Papernot (PPNT) and Robustness Verification of Tree-based Models (RVTM), which are state-of-the-art attack methods for DT, as well as attack methods based on surrogate models and Generative Adversarial Networks (GANs). The experimental results suggest that fuzzification increases the robustness of DT. Moreover, an FDT with more membership functions is more vulnerable, since it usually relies on fewer features. This study fills a gap by examining the security of fuzzy systems in adversarial environments.
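The abstract notes that gradient-based attacks fail on decision trees because the decision function is non-differentiable, so an attacker must instead reason about split thresholds directly. The following is a minimal, hypothetical sketch of that idea (not the paper's actual algorithm): it walks an example's decision path in a scikit-learn tree and nudges a feature just past a split threshold to flip the prediction, keeping the cheapest single-feature change found.

```python
# Hypothetical sketch of a threshold-crossing evasion attack on a
# decision tree; illustrative only, not the method proposed in the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data: the label depends on feature 0 crossing roughly 0.5.
X = np.array([[0.2, 1.0], [0.3, 0.0], [0.8, 1.0], [0.9, 0.0]])
y = np.array([0, 0, 1, 1])
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

def evade(clf, x, eps=1e-3):
    """Walk x's decision path; at each internal node, try pushing the
    split feature just past the threshold in the opposite direction.
    Return the cheapest single-feature perturbation that flips the label."""
    orig = clf.predict([x])[0]
    tree = clf.tree_
    best = None
    node = 0
    while tree.children_left[node] != -1:  # -1 marks a leaf in sklearn
        f, t = tree.feature[node], tree.threshold[node]
        x_adv = x.copy()
        x_adv[f] = t + eps if x[f] <= t else t - eps  # cross the split
        if clf.predict([x_adv])[0] != orig:
            cost = abs(x_adv[f] - x[f])
            if best is None or cost < best[0]:
                best = (cost, x_adv)
        node = tree.children_left[node] if x[f] <= t else tree.children_right[node]
    return best[1] if best is not None else None

x = X[0]                # originally class 0
x_adv = evade(clf, x)   # small nudge on feature 0 flips it to class 1
```

This greedy, single-split search is far simpler than the influence-quantifying model the abstract describes, but it illustrates why tree attacks operate on thresholds rather than gradients.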
