Abstract

As the capabilities of cyber adversaries continue to evolve, now in parallel with the explosion of maturing and publicly available artificial intelligence (AI) technologies, cyber defenders may reasonably wonder when cyber adversaries will begin to field these AI technologies as well. In this regard, some promising (read: scary) areas of AI for cyber attack capabilities are search, automated planning, and reinforcement learning. One possible defensive mechanism against future AI-enabled adversaries is cyber deception. To that end, in this work we present and evaluate Mirage, an experimentation system, demonstrated in both emulation and simulation forms, that allows for the implementation and testing of novel cyber deceptions designed to counter cyber adversaries that use AI search and planning capabilities.

