Abstract

Cyber threat intelligence (CTI) sharing is the collaborative effort of sharing information about cyber attacks to help organizations better understand threats and proactively defend their systems and networks. The challenge we address is that traditional indicators of compromise (IoC) may not always capture the breadth or essence of a cyber security threat or attack campaign, possibly leading to false alert fatigue and missed detections among security analysts. To tackle this concern, we designed and evaluated a CTI solution that complements the attribute- and tagging-based sharing of indicators of compromise with machine learning (ML) models for collaborative threat detection. We implemented our solution on top of MISP, TheHive, and Cortex (three state-of-practice open source CTI sharing and incident response platforms) to incrementally improve the accuracy of these ML models, i.e., to reduce false positives and false negatives with shared counter-evidence, and to ascertain the robustness of these models against ML attacks. The ML models themselves can be attacked by adversaries that aim to evade detection. To protect the models and to maintain confidentiality and trust in the shared threat intelligence, we extend our previous research to offer authorized parties fine-grained access to CP-ABE-encrypted machine learning models and related artifacts. Our evaluation demonstrates the practical feasibility of ML-model-based threat intelligence sharing, including the ability to account for indicators of adversarial ML threats.

Highlights

  • Organizations are facing an accelerating wave of sophisticated attacks by cyber criminals that disrupt services, exfiltrate sensitive data, or abuse victim machines and networks for performing other malicious activities

  • Artificial intelligence (AI) methods are employed to efficiently and effectively detect and recognize threats based on low-level data and shared indicators of compromise (IoC), to augment that information into enriched security incident knowledge at a higher level of abstraction that is more suitable for human consumption, and to triage the high volume of incident reports by filtering out irrelevant ones, mitigating alert fatigue [5,6] among security analysts

  • We proposed a solution to complement the sharing of attribute-based IoC with machine learning based threat detection, as the former have been shown to lose value over time


Introduction

Organizations are facing an accelerating wave of sophisticated attacks by cyber criminals that disrupt services, exfiltrate sensitive data, or abuse victim machines and networks for performing other malicious activities. These malicious adversaries aim to steal, destroy, or compromise business assets with a particular financial, reputational, or intellectual value. To address such security challenges and eliminate security blind spots for their systems and networks, on-premise services, and cloud environments, organizations are complementing their perimeter defenses with threat intelligence platforms. Artificial intelligence (AI) methods are employed to efficiently and effectively detect and recognize threats based on low-level data and shared indicators of compromise (IoC), to augment that information into enriched security incident knowledge at a higher level of abstraction that is more suitable for human consumption, and to triage the high volume of incident reports by filtering out irrelevant ones, mitigating alert fatigue [5,6] among security analysts.
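As a loose illustration of the IoC-driven detection and triage idea described above, the sketch below matches observed events against a set of shared IoC attributes and suppresses duplicate alerts. All names and values here are hypothetical simplifications for exposition, not the authors' implementation or the MISP/TheHive/Cortex APIs.

```python
# Hypothetical sketch: match observed events against shared IoC attributes
# and suppress duplicate alerts to reduce analyst alert fatigue.
# Illustration only; not the platform's actual implementation.

from dataclasses import dataclass


@dataclass(frozen=True)
class IoC:
    attr_type: str  # e.g. "ip-dst", "domain", "md5"
    value: str


# A tiny stand-in for a shared threat-intelligence feed
# (203.0.113.0/24 and .example are reserved documentation values).
SHARED_IOCS = {
    IoC("ip-dst", "203.0.113.7"),
    IoC("domain", "malicious.example"),
}


def triage(events, seen=None):
    """Return one alert per newly matched IoC; repeat matches are suppressed."""
    seen = set() if seen is None else seen
    alerts = []
    for ev in events:
        ioc = IoC(ev["type"], ev["value"])
        if ioc in SHARED_IOCS and ioc not in seen:
            seen.add(ioc)  # remember the match so later duplicates are dropped
            alerts.append(f"ALERT: {ioc.attr_type}={ioc.value}")
    return alerts
```

Deduplicating on the matched IoC, rather than alerting on every raw event, is one simple way the triage step described above can keep the alert volume manageable for analysts.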
