Abstract

Generating automated cyber resilience policies for real-world settings is a challenging research problem that must account for uncertainties in system state over time and for the dynamics between attackers and defenders. In addition to understanding attacker and defender motives and tools, and identifying "relevant" system and attack data, it is critical to develop rigorous mathematical formulations of the defender's decision-support problem under uncertainty. Game-theoretic approaches combining cyber resource allocation optimization with Markov decision processes (MDPs) have been proposed in the literature. However, as in strategic card games such as poker, research challenges in applying game-theoretic approaches to practical cyber defense include equilibrium solvability, existence, and possible multiplicity. Moreover, mixed uncertainties associated with player payoffs must also be accounted for within game settings. This paper proposes an agent-centric approach to cybersecurity decision support under partial system state observability. Multiple partially observable MDP (POMDP) problems are formulated and solved from a cyber defender's perspective, against a fixed attacker type, using synthetic (notional) system and attack parameters estimated from a Monte Carlo-based sampling scheme. The agent-centric problem formulation helps address equilibrium-related research challenges and represents a step toward automated, dynamic cyber resilience policy generation and implementation.
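To make the formulation concrete, the sketch below illustrates the defender-side POMDP ingredients the abstract describes: hidden system states, a transition and observation model estimated by Monte Carlo sampling, and a Bayes belief update the defender maintains over the unobserved state. All state names, dimensions, and distributions here are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spaces (illustrative only, not from the paper):
# states: 0 = healthy, 1 = compromised, 2 = critical
# actions: 0 = monitor, 1 = remediate
# observations: 0 = no alert, 1 = alert
N_STATES, N_ACTIONS, N_OBS = 3, 2, 2

def sample_models(n_samples=1000):
    """Estimate transition (T) and observation (O) matrices by averaging
    Dirichlet draws -- a stand-in for the paper's Monte Carlo sampling of
    notional system and attack parameters."""
    T = np.empty((N_ACTIONS, N_STATES, N_STATES))
    O = np.empty((N_ACTIONS, N_STATES, N_OBS))
    for a in range(N_ACTIONS):
        for s in range(N_STATES):
            T[a, s] = rng.dirichlet(np.ones(N_STATES), size=n_samples).mean(axis=0)
            O[a, s] = rng.dirichlet(np.ones(N_OBS), size=n_samples).mean(axis=0)
    return T, O

def belief_update(b, a, o, T, O):
    """Bayes filter over hidden system states:
    b'(s') is proportional to O[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    predicted = b @ T[a]                 # predicted next-state distribution
    posterior = O[a, :, o] * predicted   # weight by observation likelihood
    return posterior / posterior.sum()   # normalize to a probability vector

T, O = sample_models()
belief = np.array([1.0, 0.0, 0.0])          # defender initially believes "healthy"
belief = belief_update(belief, 0, 1, T, O)  # monitor action taken, alert observed
print(belief)                               # updated distribution over states
```

A full solution would optimize the defender's policy over such beliefs (e.g., via point-based value iteration); the belief filter above is only the state-estimation step that makes the agent-centric formulation tractable under partial observability.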
