Abstract

Machine learning applications have become widespread in recent years, playing an important role in technological fields such as Cyber Security, Natural Language Processing, Biometrics, and Task Automation. Machine learning can be found in many places, from personal assistants to autonomous vehicles. However, like any system, machine learning carries security risks. In addition to general security vulnerabilities, there are vulnerabilities specific to machine learning systems. Attacks targeting these vulnerabilities can occur during training or in production and cause serious problems for the system. In this study, the Split Feature Model (SFM) approach is developed as a defense against evasion attacks. In an evasion attack, an adversary crafts adversarial inputs that evade the machine learning system or degrade its performance. In SFM, the features are split and distributed across different sub-models, preventing any single group of features from directly affecting the output of the entire model. The aim is thus to prevent manipulated features from directly influencing the output of the machine learning system. The effect of security attacks on a spam filter implemented with both a traditional learning model and SFM is examined, and the performance of SFM under attack is compared with that of the traditional machine learning model. Experimental results demonstrate the effectiveness of SFM in maintaining accuracy against adversarial inputs.
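The abstract does not specify the implementation details, but the split-and-combine idea can be illustrated with a minimal sketch. The following Python code assumes scikit-learn-style classifiers, disjoint contiguous feature splits, and majority voting over sub-model outputs; the class name SplitFeatureModel, the choice of logistic regression, and the voting rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Split Feature Model (SFM) idea described above.
# Assumptions (not from the paper): scikit-learn classifiers, contiguous
# disjoint feature splits, and majority voting to combine sub-models.
import numpy as np
from sklearn.linear_model import LogisticRegression

class SplitFeatureModel:
    def __init__(self, n_splits=3):
        self.n_splits = n_splits
        self.models = []
        self.splits = []

    def fit(self, X, y):
        # Partition the feature indices into disjoint subsets and train
        # one sub-model per subset, so no single feature group drives
        # the prediction of the whole ensemble.
        self.splits = np.array_split(np.arange(X.shape[1]), self.n_splits)
        self.models = [
            LogisticRegression(max_iter=1000).fit(X[:, idx], y)
            for idx in self.splits
        ]
        return self

    def predict(self, X):
        # Majority vote over sub-model predictions: manipulating the
        # features in one split can flip at most one of the votes.
        votes = np.stack(
            [m.predict(X[:, idx]) for m, idx in zip(self.models, self.splits)]
        ).astype(int)
        return np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), 0, votes
        )
```

Under this (assumed) voting scheme, an adversary who perturbs features belonging to a single split influences only that split's sub-model, which matches the stated goal of keeping manipulated features from directly determining the system's output.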
