Abstract

The rise in the frequency and consequences of cybercrimes enabled by artificial intelligence (AI) has been a cause of concern for decades. At the same time, we have seen the development of defensive capabilities. This article examines the mechanics of AI-enabled attacks, including voice mimicking used for crime and natural language processing algorithms that absorb harmful and offensive human text patterns to create problematic virtual situations. It also looks at shadow models: evasion, infiltration and manipulation of machine-learning models through shadow modelling techniques are on the rise because such models are straightforward to develop and allow an attacker to identify shortcomings in input features that can cause the target model to misclassify. With a special focus on spam filters, their structure and evasion techniques, we look at the ways in which artificial intelligence is being utilised to cause harm, concluding with an analysis of the Proofpoint evasion case.
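To make the shadow-modelling idea concrete, the following is a minimal Python sketch, not drawn from the article itself: the spam filter, training data and probe messages are all hypothetical toys. It shows how an attacker with only black-box query access might fit a surrogate classifier to a filter's verdicts and mine the surrogate's weights for input features that nudge a message towards a "ham" verdict.

```python
# Minimal sketch of a shadow-model evasion probe against a toy spam filter.
# Everything here (victim model, corpus, probes) is a hypothetical illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# --- The victim: a spam filter the attacker can only query, not inspect ---
spam = ["win a free prize now", "cheap meds online buy now",
        "free money claim your prize", "urgent offer click now"]
ham = ["meeting moved to friday", "please review the attached report",
       "lunch at noon tomorrow", "minutes from the project call"]
vec = CountVectorizer()
X = vec.fit_transform(spam + ham)
victim = MultinomialNB().fit(X, [1] * len(spam) + [0] * len(ham))

def query_victim(texts):
    """Black-box access: the attacker only sees the filter's verdicts (1 = spam)."""
    return victim.predict(vec.transform(texts))

# --- The attacker: train a shadow model on the victim's observed verdicts ---
probe_texts = spam + ham + ["free offer", "project report", "claim prize",
                            "friday meeting", "cheap prize now", "review call"]
shadow_labels = query_victim(probe_texts)
shadow_vec = CountVectorizer()
shadow = LogisticRegression().fit(shadow_vec.fit_transform(probe_texts),
                                  shadow_labels)

# --- Inspect the shadow model to find weak input features ---
# In the surrogate, tokens with strongly negative weights push a message
# towards "ham"; inserting them is a cheap way to probe for misclassification.
weights = shadow.coef_[0]
tokens = shadow_vec.get_feature_names_out()
hammy = [t for _, t in sorted(zip(weights, tokens))[:3]]
evasive = "win a free prize now " + " ".join(hammy)
print("Original verdict:", query_victim(["win a free prize now"])[0])
print("Probe verdict:   ", query_victim([evasive])[0])
```

On this toy corpus the probe may or may not flip the verdict; the point of the sketch is the workflow, in which the surrogate's coefficients stand in for the hidden model's decision boundary and guide which input features to perturb.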
