Abstract

Deep Neural Networks (DNNs) can be used to accurately identify infectious pathogens. Unfortunately, DNNs can be exploited by bioterrorists, using adversarial attacks, to stage a fake super-bug outbreak or to hide the extent of a super-bug outbreak. In this work, we show how a DNN that performs superb classification of cgMLST profiles can be exploited using adversarial attacks. To this end, we train a novel DNN model, the Methicillin Resistance Classification Network (MRCN), which identifies strains of the Staph bacteria that are resistant to an antibiotic named methicillin with 93.8 percent accuracy, using Core Genome Multi-Locus Sequence Typing (cgMLST) profiles. To defend against this kind of exploitation, we train a second DNN model, the Synthetic Profile Classifier (SPC), which can differentiate between natural Staph bacteria and generic synthetic Staph bacteria with 92.3 percent accuracy. Our experiments show that the MRCN model is highly susceptible to multiple adversarial attacks and that the defenses we propose are not able to provide effective protection against them. As a result, a bioterrorist would be able to utilize the compromised DNN model to inflict immense damage by staging a fake epidemic or by delaying the detection of an epidemic, allowing it to proliferate undeterred.
