Abstract

Many host-based anomaly detection techniques have been proposed to detect code-injection attacks on servers. The vast majority, however, are susceptible to attacks in which the injected code masquerades as the original server software, including returning the correct service responses, while conducting its attack. Behavioral distance, by which two diverse replicas processing the same inputs are continually monitored to detect divergence in their low-level (system-call) behaviors and hence potentially the compromise of one of them, has been proposed for detecting such mimicry attacks. In this paper, we present a novel approach to behavioral distance measurement using a new type of hidden Markov model, and present an architecture realizing this new approach. We evaluate the detection capability of this approach using synthetic workloads and recorded workloads of production Web and game servers, and show that it detects intrusions with substantially greater accuracy than a prior proposal for measuring behavioral distance. We also detail the design and implementation of a new architecture that takes advantage of virtualization to measure behavioral distance. We apply our architecture to implement intrusion-tolerant Web and game servers, and through trace-driven simulations demonstrate that it incurs moderate performance costs even when thresholds are set to detect stealthy mimicry attacks.

