Abstract

Cyber-attacks, intentional efforts to steal information or disrupt a network, are growing dramatically. It is therefore important to understand how an adversary's behavior might affect the detection of threats. Prior research in adversarial cybersecurity has experimentally investigated how different honeypot proportions influence adversarial decisions in a deception-based game. However, it is unknown how different honeypot proportions affect adversarial decisions when those decisions are accounted for by cognitive models. The primary objective of this research is to develop cognitive models based on Instance-Based Learning Theory (IBLT) that predict decisions in networks with different honeypot proportions. The experimental study used a deception game (DG) in three sizes: small, medium, and large. The DG is defined as DG(n, k, γ), where n is the number of servers, k is the number of honeypots, and γ is the number of probes the adversary makes before attacking the network. The DG had three between-subjects conditions, corresponding to three different honeypot proportions. Human data were collected from 60 participants, each randomly assigned to one of the three conditions (N = 20 per condition). The results revealed that, as the proportion of honeypots increased, honeypot and no-attack actions increased significantly. Next, we built two Instance-Based Learning (IBL) models, one with calibrated parameters (IBL-calibrated) and one with default ACT-R parameters (IBL-ACT-R), to account for human decisions across the different honeypot proportions in the deception-based security game. Both the IBL-calibrated and IBL-ACT-R models were able to account for human behavior across the experimental conditions. In addition, the results revealed a greater reliance on recent and frequent events among the human participants. We highlight the importance of our research for the field of cognitive modelling.
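To illustrate the core mechanics behind an IBL model, the following is a minimal sketch of activation-based retrieval and outcome blending. It assumes the standard ACT-R defaults (decay d = 0.5, noise σ = 0.25) rather than the paper's calibrated parameter values, and the payoffs and observation times in the example are hypothetical, not taken from the study.

```python
import math
import random

# Assumed ACT-R default parameters; the paper's calibrated values are not
# reproduced here.
D = 0.5                      # memory decay rate
SIGMA = 0.25                 # activation noise
TAU = SIGMA * math.sqrt(2)   # blending temperature derived from the noise

def activation(occurrence_times, t_now):
    """Base-level activation of an instance (recency/frequency) plus logistic noise."""
    base = math.log(sum((t_now - t) ** -D for t in occurrence_times))
    u = random.uniform(0.0001, 0.9999)
    noise = SIGMA * math.log((1 - u) / u)
    return base + noise

def blended_value(instances, t_now):
    """Blend an option's observed outcomes, weighted by retrieval probability.

    `instances` is a list of (outcome, occurrence_times) pairs.
    """
    acts = [activation(times, t_now) for _, times in instances]
    weights = [math.exp(a / TAU) for a in acts]
    total = sum(weights)
    return sum(w / total * outcome for w, (outcome, _) in zip(weights, instances))

# Hypothetical example: an adversary choosing between attacking and not attacking,
# given past outcomes (hitting a real server vs. a honeypot) at earlier trials.
random.seed(1)
attack = [(10.0, [1, 3]), (-5.0, [2, 4])]   # real-server payoff vs. honeypot penalty
no_attack = [(0.0, [1, 2, 3, 4])]           # withdrawing yields nothing
choice = "attack" if blended_value(attack, 5) > blended_value(no_attack, 5) else "no-attack"
print(choice)
```

Because recently and frequently observed outcomes receive higher activation, a model of this form naturally reproduces the recency and frequency effects reported for the human participants.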
