Abstract

Packet classification is crucial to the implementation of several advanced services that must distinguish traffic belonging to different flows, such as firewalls, intrusion detection systems, and many QoS implementations. Although hardware solutions such as TCAMs provide high search speed, they do not scale to large rulesets. Instead, some of the most promising algorithmic research leverages the redundancy in real-life rulesets to achieve high-performance packet classification. In this paper, we provide a general framework for discerning the relationships and distinctions among existing packet classification algorithms in the design space. Several of the best-known algorithms, such as RFC and HiCuts/HyperCuts, are carefully analyzed within this framework, and an improved scheme is proposed for each. All algorithms studied in this paper, along with their variations, are objectively assessed using both real-life and synthetic rulesets. The source code of these algorithms is made publicly available online.
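To make the classification task concrete: a packet classifier matches each packet's header fields against an ordered ruleset and returns the action of the highest-priority matching rule. The sketch below is a naive linear-search baseline over a hypothetical ruleset (the rules, field names, and actions are illustrative assumptions, not from the paper); algorithms like RFC and HiCuts/HyperCuts exist precisely to replace this O(n) scan with faster lookup structures.

```python
from ipaddress import ip_address, ip_network

# Hypothetical ruleset for illustration only. Each rule matches on
# (source prefix, destination prefix, protocol, destination port range)
# and carries an action; rules are listed highest-priority first.
RULES = [
    ("10.0.0.0/8", "192.168.1.0/24", "tcp", (80, 80),     "allow-web"),
    ("10.0.0.0/8", "0.0.0.0/0",      "tcp", (0, 65535),   "deny-tcp"),
    ("0.0.0.0/0",  "0.0.0.0/0",      "any", (0, 65535),   "default-allow"),
]

def classify(src, dst, proto, dport):
    """Linear search: return the action of the first (highest-priority) match."""
    for src_pfx, dst_pfx, r_proto, (lo, hi), action in RULES:
        if (ip_address(src) in ip_network(src_pfx)
                and ip_address(dst) in ip_network(dst_pfx)
                and r_proto in (proto, "any")
                and lo <= dport <= hi):
            return action
    return "no-match"
```

Every packet here costs a scan over all rules in the worst case; the decision-tree and decomposition techniques analyzed in the paper trade preprocessing time and memory for much faster per-packet lookups on large rulesets.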
