Abstract

Machine learning has the potential to enrich our lives in many ways and is expected to be deployed in an ever-wider range of situations. As its adoption grows, however, so does the value of attacking machine learning models, making it dangerous to deploy them without appropriate safeguards. Poisoning attacks are one such threat: an adversary mixes maliciously crafted data into the training set to degrade the accuracy of the resulting model, and depending on the application the damage can lead to large-scale accidents. In this study, we propose a method to protect machine learning models from poisoning attacks. We assume an environment in which the training data is collected from multiple sources and present a defense suited to this setting. The proposed method computes the influence of each source's data on the model's accuracy to assess how trustworthy that source is, and also computes the impact of replacing each source's data with poisonous data. Based on these calculations, it assigns each source a data removal rate, which expresses the confidence with which the source's data is judged harmful, and removes data from each source according to that rate, preventing poisonous data from mixing with the normal data. To evaluate the method, we compared it with an existing method on the accuracy of the model after the defense is applied. When 17% of the training data is poisonous, the model defended by the proposed method reaches 89% accuracy, compared with 83% for the existing method, showing that the proposed method improves robustness against poisoning attacks.
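
The abstract does not give the method's exact formulas, so the following is only a minimal sketch of the general idea: estimate each source's influence on validation accuracy, map that influence to a per-source removal rate, and filter the data accordingly. The leave-one-source-out influence estimate, the sigmoid mapping, the scikit-learn classifier, and all function names (`source_influence`, `removal_rates`, `filter_sources`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def source_influence(sources, X_val, y_val):
    """Leave-one-source-out influence: accuracy drop on held-out
    validation data when one source's data is removed."""
    def fit_accuracy(parts):
        X = np.vstack([s[0] for s in parts])
        y = np.concatenate([s[1] for s in parts])
        model = LogisticRegression(max_iter=1000).fit(X, y)
        return accuracy_score(y_val, model.predict(X_val))

    base_acc = fit_accuracy(sources)
    influences = []
    for i in range(len(sources)):
        rest = sources[:i] + sources[i + 1:]
        # Positive influence: the source helps; negative: it hurts.
        influences.append(base_acc - fit_accuracy(rest))
    return np.array(influences)

def removal_rates(influences, scale=5.0):
    """Map influence scores to per-source removal rates in [0, 1].
    Sources whose presence hurts accuracy (negative influence) get
    high removal rates; helpful sources get low ones. The sigmoid
    scaling here is an assumption, not the paper's rule."""
    return 1.0 / (1.0 + np.exp(scale * influences))

def filter_sources(sources, rates, rng=None):
    """Randomly drop a fraction rates[i] of each source's examples."""
    rng = rng if rng is not None else np.random.default_rng(0)
    kept = []
    for (X, y), r in zip(sources, rates):
        mask = rng.random(len(y)) >= r
        kept.append((X[mask], y[mask]))
    return kept
```

In this sketch, a source that consistently lowers validation accuracy (as a heavily poisoned source would) receives a removal rate close to 1 and contributes little to the final training set, while clean sources are left largely intact.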
