Abstract

Modern Information and Communication Technology (ICT)-based applications exploit current technological advancements to stream sensor data, as a way of adapting to the ever-changing technological landscape. Such efforts require accurate, meaningful, and trustworthy output from the streaming sensors, particularly during dynamic virtual sensing. However, to ensure that the sensing ecosystem is free of sensor threats and active attacks, it is paramount to implement secure real-time strategies. Fundamentally, real-time detection of adversarial attacks/instances during the User Feedback Process (UFP) is the key to forecasting potential attacks in active learning. Moreover, at the time of writing, the existing literature lacks a comprehensive study focused on adversarial detection from an active machine learning perspective. The authors therefore posit the importance of detecting adversarial attacks in an active learning strategy. In the context of this paper, an attack is presented through a UFP threat-driven model as any action that alters the learning system or its data. To achieve this, the study employed ambient data collected from a smart-environment human activity recognition dataset (the Continuous Ambient Sensors Dataset, CASA) with fully labeled connections, into which we intentionally injected wrong labels as a targeted/manipulative attack (by a malevolent labeler) in the UFP, under the assumption that the user labels were connected to unique identities. While the dataset is intended for task classification and activity prediction, our study focuses on active adversarial strategies from an information security point of view. Furthermore, strategies for modeling threats are presented using the Meta Attack Language (MAL) compiler for the purpose of adversarial detection. The findings from the experiments show that real-time adversarial identification and profiling during the UFP can significantly increase accuracy during the learning process with a high degree of certainty, and they pave the way toward automated adversarial detection and profiling approaches on the Internet of Cognitive Things (ICoT).
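To make the malevolent-labeler setting concrete, the following minimal Python sketch simulates an oracle that flips a fraction of the labels it returns during the UFP. The flip rate, the class names, and the `labeler_id` field are illustrative assumptions for this sketch only; they do not reproduce the paper's experimental setup on the CASA dataset.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def malevolent_oracle(y_true, classes, flip_rate=0.2, labeler_id="oracle-07"):
    """Simulate a malevolent labeler in the User Feedback Process (UFP).

    Returns the labels the oracle reports, a mask of which labels were
    deliberately flipped, and the (assumed unique) labeler identity.
    `flip_rate` and `labeler_id` are illustrative assumptions, not
    values taken from the paper.
    """
    y_reported = np.asarray(y_true).copy()
    flipped = rng.random(len(y_reported)) < flip_rate
    for i in np.flatnonzero(flipped):
        # Replace the true label with a different, randomly chosen class
        # (a targeted/manipulative label-flip attack).
        wrong = [c for c in classes if c != y_reported[i]]
        y_reported[i] = rng.choice(wrong)
    return y_reported, flipped, labeler_id

# Example: hypothetical activity labels from a smart-environment stream.
classes = np.array(["sleep", "cook", "eat", "work"])
y_true = rng.choice(classes, size=10)
y_reported, flipped, who = malevolent_oracle(y_true, classes)
print(f"labeler {who} flipped {flipped.sum()} of {len(y_true)} labels")
```

Because the sketch ties every reported label to a labeler identity, flipped labels can later be attributed to a specific oracle, which is the premise behind the per-labeler profiling discussed in the abstract.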

Highlights

  • While many Internet-of-Things (IoT) technologies apply Machine Learning (ML) to implement security solutions, it has become apparent that the most sophisticated attacks are propagated against machine learning-based systems [1]; most IoT infrastructure-based attacks succeed as a result of varying adversary intentions and expectations

  • While existing research mainly focuses on how machine learning models can be fooled, the User Feedback Process Threat Model (UFP-TM), from an information security standpoint, assumes a classification setting in which restrictions can prevent a human oracle/agent posing as an adversary from carrying out manipulations

  • Problem formulation: based on the stated need for security techniques during activity recognition, the paper formulates the problem around adversarial detection during an active learning strategy



Introduction

While many Internet-of-Things (IoT) technologies apply Machine Learning (ML) to implement security solutions, it has become apparent that the most sophisticated attacks are propagated against machine learning-based systems [1]; most IoT infrastructure-based attacks succeed as a result of varying adversary intentions and expectations. The UFP-TM models an adversary whose attempts essentially aim to capitalize on the non-robustness of machine learning algorithms through targeted attacks (logical and physical), leading to the assumption that there may always exist a vulnerability that can be exploited. On this premise, we argue that a trained classifier C may correctly classify an instance x ∈ X, while the adversary's actual goal is to exploit a vulnerability so that the classifier misclassifies that instance, a targeted attack denoted (x ∈ X) → oracle. While existing research mainly focuses on how machine learning models can be fooled, the UFP-TM, from an information security standpoint, assumes a classification setting (object, activity) in which restrictions can prevent a human oracle/agent posing as an adversary from carrying out manipulations.
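The paper's detection strategy is grounded in MAL-based threat modeling, but the targeted-misclassification argument above can be illustrated with a generic, hedged sketch: during the UFP, each oracle-supplied label is screened against the current classifier's predictive probability, and labels the model finds highly improbable are counted toward a per-labeler suspicion profile. Everything here (the synthetic data, `screen_feedback`, the 0.1 threshold, the `oracle-07` identity) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic two-class data standing in for sensor-derived features.
X = rng.normal(size=(600, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Seed set with trusted labels; the rest arrives via the oracle.
clf = LogisticRegression().fit(X[:100], y[:100])

def screen_feedback(clf, x, y_oracle, threshold=0.1):
    """Flag an oracle label as suspicious if the current model assigns it
    very low probability. The threshold is an illustrative assumption."""
    p = clf.predict_proba(x.reshape(1, -1))[0, y_oracle]
    return p < threshold

suspicion = {}  # per-labeler profile: count of flagged labels
for i in range(100, 200):
    # A malevolent labeler flips roughly 20% of the labels it returns.
    y_oracle = int(y[i]) ^ int(rng.random() < 0.2)
    labeler = "oracle-07"
    if screen_feedback(clf, X[i], y_oracle):
        suspicion[labeler] = suspicion.get(labeler, 0) + 1

print(suspicion)
```

A disagreement heuristic of this kind cannot by itself distinguish a malicious flip from an honest mistake; in the paper's framing, that attribution step is what the MAL-based threat model and labeler profiling are meant to support.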
