Machine learning (ML) systems that rely on sensor inputs have become integral to many applications, yet they remain vulnerable to sensor-based adversarial example (AE) attacks, in which compromised sensors are exploited to manipulate system outputs. This study addresses the problem of safeguarding such systems by identifying and mitigating compromised sensors, thereby improving their resilience. It introduces a novel detection method based on a feature-removable model (FRM), which allows the selective removal of sensor features so that inconsistencies in the model's outputs can be observed as different sensors' features are removed. The method was validated on a human activity recognition (HAR) model using sensors placed on the chest, wrist, and ankle, with the goal of identifying attacker-compromised sensors. The results demonstrate the method's efficacy, achieving an average Recall of 0.92 and an average Precision of 0.72 for the detected sensors, showing that the approach can accurately detect and identify compromised sensors. This work contributes to strengthening the security and robustness of ML systems against sensor-based AE attacks.
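The abstract does not spell out the FRM's decision rule, but the core idea, comparing the model's output with and without each sensor's features, can be sketched as follows. The `frm_predict` interface, the sensor list, and the simple output-disagreement rule below are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of output-inconsistency detection with a feature-removable model.
# The interface and decision rule are hypothetical; see the full paper for the
# actual FRM design.
from typing import Callable, List, Sequence

SENSORS = ["chest", "wrist", "ankle"]  # sensor placements named in the abstract


def detect_compromised_sensors(
    x: Sequence[float],
    frm_predict: Callable[[Sequence[float], List[str]], int],
    sensors: List[str] = SENSORS,
) -> List[str]:
    """Return the sensors whose feature removal changes the model's prediction.

    `frm_predict(x, removed)` is an assumed interface: it returns the FRM's
    class label for input `x` with the features of every sensor in `removed`
    masked out.
    """
    baseline = frm_predict(x, [])  # prediction with all sensor features present
    flagged = []
    for s in sensors:
        # If dropping this sensor's features flips the output, the remaining
        # sensors disagree with it, so the sensor is treated as suspect.
        if frm_predict(x, [s]) != baseline:
            flagged.append(s)
    return flagged
```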