Machine learning (ML) techniques enable accurate predictions without explicit programming, provided they are trained on representative, unbiased data. A subset of artificial intelligence (AI), these methods are applied in a variety of settings, including recommendation engines, spam filtering, malware detection, classification, and predictive maintenance. While ML algorithms improve outcomes, they also introduce security and privacy risks, particularly from adversarial ML attacks such as data poisoning, which can corrupt the training data underlying modeling applications. This study introduces SecK2, an ML method designed to prevent malicious input from entering ML models. Experimental evaluation demonstrates the scalability of SecK2 and its ability to detect data poisoning attacks rapidly, making it a valuable tool for ensuring the reliability and security of ML models. The proposed method performs well across several metrics: it achieves a 61% convergence rate, an 89% attack detection rate, 96% throughput, and 53% data-integrity protection, along with 96% validation accuracy and 92% training accuracy. The proposed technique thus offers a robust defense against the growing threat of data poisoning attacks, allowing ML practitioners to place greater trust in their models, guarding against adversarial attacks, and preserving the dependability of ML-based applications.
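The abstract does not describe SecK2's internal mechanics, so the sketch below is only a rough illustration of the general idea it targets: screening training data for poisoned samples before they reach the model. The detection strategy (a generic outlier filter via scikit-learn's IsolationForest), the contamination rate, and all data here are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch (NOT SecK2): screen training data for suspected
# poisoned samples with a generic outlier detector before model fitting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data plus a handful of injected "poison" points far from
# the legitimate distribution (a crude stand-in for a poisoning attack).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_poison = rng.normal(loc=8.0, scale=1.0, size=(25, 10))
y_poison = rng.integers(0, 2, size=25)
X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, y_poison])

# Flag suspicious samples; contamination=0.05 is a hypothetical prior on
# the poison fraction. fit_predict returns 1 for inliers, -1 for outliers.
flags = IsolationForest(contamination=0.05, random_state=0).fit_predict(X_train)
mask = flags == 1

# Train only on the screened subset so flagged points never reach the model.
model = LogisticRegression(max_iter=1000).fit(X_train[mask], y_train[mask])
print(f"kept {mask.sum()} of {len(mask)} samples after screening")
```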