Industrial Internet of Things (IIoT) networks (e.g., a smart-grid industrial control system) are increasingly prevalent, especially in smart cities around the globe. They help meet the day-to-day needs of society (e.g., power, water, manufacturing, transportation) while making businesses more efficient, productive, and profitable. However, it is also well known that IoT devices often operate with poorly configured security settings. This increases the likelihood of (nation-sponsored) stealthy, spread-based advanced persistent threat (APT) malware attacks on IIoT networks that may go undetected for a considerable period of time. Such attacks usually generate a negative first-party quality-of-service (QoS) impact with financial consequences for the companies owning such IIoT network infrastructures. This impact aggregates over space (i.e., the entire IIoT network or a sub-network) and time (i.e., the duration of business disruption), and is a measure of significant interest to managers running their businesses atop such networks. Managers seeking to boost network resilience gain little if they must wait for a cyber-attack to occur before gauging this impact. Consequently, one question that intrigues us is: can managers estimate this first-party impact before APT cyber-attack(s) cause financial damage to their companies? In this paper, we propose the first computationally efficient and quantitative network-theory framework to (a) characterize this first-party impact a priori as a statistical distribution over multiple attack configurations in a family of malware-driven APT cyber-attacks launched specifically on businesses running atop IIoT networks, (b) accurately compute the statistical moments (e.g., the mean) of the resulting impact distribution, and (c) tightly bound the accuracy of the worst-case risk estimate of such a distribution, captured through the tail of the distribution using the Conditional Value at Risk (CVaR) metric. In relation to (a), our methodology extends the seminal Factor Analysis of Information Risk (FAIR) cyber-risk quantification methodology, which does not explicitly account for network interconnections among system-risk-contributing variables. We validate the effectiveness of our theory using trace-driven Monte Carlo simulations based on test-bed experiments conducted in the FIT IoT-Lab. We further illustrate quantitatively that even if spread-based APT cyber-attacks induce a statistically light-tailed first-party cyber-loss distribution on an IIoT-networked enterprise in the worst case, the aggregate multi-party cyber-risk distribution incurred by the same enterprise in supply-chain ecosystems can be heavy-tailed. This poses significant market scale-up challenges for commercial cyber (re-)insurance businesses that aim to improve cyber-security. We conclude by proposing managerial action items to mitigate the first-party cyber-risk exposure emanating from any given IIoT-driven enterprise.
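As a minimal illustration of the tail-risk metric referenced in (c) above, the following Python sketch estimates the mean and the CVaR of a Monte Carlo loss sample. The gamma-distributed loss model, the 0.95 confidence level, and the `cvar` helper are hypothetical placeholders for illustration only; they are not the paper's actual attack-configuration model or estimator.

```python
import numpy as np

def cvar(losses: np.ndarray, alpha: float = 0.95) -> float:
    """Estimate CVaR_alpha: the expected loss within the worst (1 - alpha) tail."""
    var_alpha = np.quantile(losses, alpha)   # Value at Risk (VaR) at level alpha
    tail = losses[losses >= var_alpha]       # losses at or beyond the VaR threshold
    return float(tail.mean())

rng = np.random.default_rng(0)
# Hypothetical stand-in for a light-tailed first-party loss distribution,
# one sample per simulated attack configuration.
losses = rng.gamma(shape=2.0, scale=50_000.0, size=100_000)

print(f"mean loss  : {losses.mean():,.0f}")
print(f"CVaR(0.95) : {cvar(losses, 0.95):,.0f}")
```

Under this toy model, the CVaR reports the average loss conditional on landing in the worst 5% of simulated outcomes, which is the sense in which the abstract's "worst-case risk estimate" is captured through the distribution's tail.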