Abstract

Masking strategies for cyberdefense (i.e., disguising network attributes to hide the real state of the network) are predicted to be effective in simulated experiments. However, it is unclear how effective they are against human attackers. We address three factors that challenge the effectiveness of masking strategies in practice: (1) we relax the assumption of attacker rationality made by Game Theory/Machine Learning defense algorithms; (2) we provide a cognitive model of human attackers that can inform these defense algorithms; and (3) we provide a way to generate data on attackers' decisions through simulation with a cognitive model. Two masking strategies of defense were generated using Game Theory and Machine Learning (ML) algorithms. The effectiveness of these two defense strategies, risk-averse and rational, is compared in an experiment with human attackers. We collected attackers' decisions against the two masking strategies. With the limited human participant data, the results indicate that the risk-averse strategy can reduce defense losses compared to the rational masking strategy. We also propose a cognitive model based on Instance-Based Learning Theory that accurately represents and predicts the attackers' decisions in this task. We demonstrate the model's process by generating simulated data and comparing it to the attackers' actual actions in the experiment. The model captures the data at both the aggregate and individual levels for attackers making decisions against both the rational and risk-averse defense algorithms. We propose that this model can be used to inform game-theoretic defense algorithms and to produce synthetic data that ML algorithms can use to generate new defense strategies.
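The abstract does not spell out the mechanics of the Instance-Based Learning (IBL) model. As a rough illustration of how an IBL agent generally operates (a minimal sketch, not the authors' implementation), the following Python code chooses actions by blending previously experienced outcomes weighted by memory activation; the action set, payoff scheme, and parameter values (decay, noise, temperature) are all assumptions made here purely for illustration.

import math
import random

class IBLAgent:
    """Minimal Instance-Based Learning agent (illustrative sketch only)."""

    def __init__(self, default_utility=10.0, decay=0.5, noise=0.25):
        self.default_utility = default_utility  # optimistic prior encourages early exploration
        self.decay = decay                      # memory decay parameter d
        self.noise = noise                      # activation noise sigma
        self.temperature = noise * math.sqrt(2) # common convention: tau = sigma * sqrt(2)
        self.memory = {}                        # (action, outcome) -> list of trial timestamps
        self.t = 0                              # current trial counter

    def _activation(self, timestamps):
        # Activation A_i = ln( sum_j (t - t_j)^(-d) ) plus a logistic noise term
        base = math.log(sum((self.t - tj) ** (-self.decay) for tj in timestamps))
        u = min(max(random.random(), 1e-10), 1 - 1e-10)
        return base + self.noise * math.log((1 - u) / u)

    def _blended_value(self, action):
        # Blended value: stored outcomes weighted by retrieval probability
        # (softmax over activations)
        instances = [(outcome, ts) for (a, outcome), ts in self.memory.items() if a == action]
        if not instances:
            return self.default_utility
        activations = [self._activation(ts) for _, ts in instances]
        weights = [math.exp(a / self.temperature) for a in activations]
        total = sum(weights)
        return sum(w / total * outcome for w, (outcome, _) in zip(weights, instances))

    def choose(self, actions):
        self.t += 1
        return max(actions, key=self._blended_value)

    def observe(self, action, outcome):
        # Store the experienced (action, outcome) instance at the current trial time
        self.memory.setdefault((action, outcome), []).append(self.t)


# Hypothetical usage: an attacker repeatedly decides whether to attack or withdraw.
# The payoff scheme below is assumed for illustration only.
agent = IBLAgent()
for trial in range(100):
    action = agent.choose(["attack", "withdraw"])
    if action == "attack":
        reward = 5.0 if random.random() < 0.5 else -2.0
    else:
        reward = 0.0
    agent.observe(action, reward)

In a loop like this, the agent's choices come to reflect the payoffs it has actually experienced rather than a rational-expectation calculation, which is how a cognitive model of this kind can generate synthetic attacker decisions for training or evaluating defense algorithms.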
