Abstract

Dynamic Information Flow Tracking (DIFT) has been proposed to detect stealthy and persistent cyber attacks that evade existing defense mechanisms such as firewalls and signature-based antivirus systems. A DIFT-based defense tracks the propagation of suspicious information flows across the system and dynamically generates a security analysis to identify possible attacks, at the cost of additional performance and memory overhead for analyzing non-adversarial information flows. In this paper, we model the interaction between adversarial information flows and DIFT on a partially known system as a nonzero-sum stochastic game. Our game model captures the probability that the adversary evades detection even when it is analyzed using the security policies (false negatives) and the performance overhead incurred by the defender for analyzing the non-adversarial flows in the system. We prove the existence of a Nash equilibrium (NE) and propose a supervised learning-based approach to find an approximate NE. Our approach is based on a partially input convex neural network that learns a mapping between the strategies and payoffs of the players with the available system knowledge, and an alternating optimization technique that updates the players' strategies to obtain an approximate equilibrium. We evaluate the performance of the proposed approach and empirically show convergence to an approximate NE on synthetic randomly generated graphs and a real-world dataset collected using the Refinable Attack INvestigation (RAIN) framework.
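The approach summarized above rests on a partially input convex neural network (PICNN): a network whose output is convex in a designated subset of its inputs (here, a player's strategy) for any fixed value of the remaining context inputs. The sketch below is a minimal illustration of that idea only; the layer sizes, the split of inputs into context features x and convex strategy input y, and the PyTorch framing are illustrative assumptions, not the authors' implementation.

```python
# Minimal PICNN sketch: the output is convex in y (the strategy input) for any
# fixed context x, because the z-path uses nonnegative weights and a convex,
# nondecreasing activation. Names and dimensions are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PICNN(nn.Module):
    def __init__(self, x_dim, y_dim, hidden=64, n_layers=2):
        super().__init__()
        # Unconstrained context path over x.
        self.ctx = nn.ModuleList(
            [nn.Linear(x_dim if i == 0 else hidden, hidden) for i in range(n_layers)]
        )
        # Convex path: weights applied to z are clamped nonnegative in forward().
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(n_layers)]
        )
        self.Wy = nn.ModuleList([nn.Linear(y_dim, hidden) for _ in range(n_layers)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, y):
        """Return a scalar payoff estimate, convex in y for fixed x."""
        u = x
        z = torch.zeros(y.shape[0], self.Wz[0].in_features, device=y.device)
        for ctx, wz, wy in zip(self.ctx, self.Wz, self.Wy):
            u = F.relu(ctx(u))
            # Nonnegative z-weights + affine term in y + ReLU keep z convex in y.
            z = F.relu(F.linear(z, wz.weight.clamp(min=0)) + wy(y) + u)
        return F.linear(z, self.out.weight.clamp(min=0), self.out.bias)


# Illustrative usage: fit the network on (context, strategy, payoff) samples,
# then exploit convexity in y when updating one player's strategy while the
# other player's strategy is held fixed inside the context.
net = PICNN(x_dim=8, y_dim=4)
x = torch.rand(32, 8)   # system/context features (hypothetical encoding)
y = torch.rand(32, 4)   # one player's mixed strategy (hypothetical encoding)
payoff = net(x, y)      # shape (32, 1), differentiable and convex in y
```

Because the learned payoff is convex in the strategy input for a fixed context, each step of an alternating optimization can update one player's strategy with a simple (projected) gradient step while the opponent's strategy is frozen; this is the general mechanism the abstract refers to, sketched here under the assumptions stated above.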
